Wednesday, August 20, 2025

Canadian universities are adopting AI tools, but concerns about the technology remain

By The Canadian Press
 August 19, 2025 

People walk through McGill University's campus in Montreal on Wednesday, August 6, 2025. THE CANADIAN PRESS/Christopher Katsarov

Canadian universities are embracing generative artificial intelligence in their teaching plans as more students and instructors opt to use the rapidly evolving technology.

Several large institutions, including McGill University, University of Toronto and York University, said they are adopting certain AI tools because they can enhance learning. Those include tested tools that help students summarize academic research or assist professors in course planning.

The shift comes as post-secondary students’ AI use continues to grow. A survey conducted in late 2024 by the online learning platform Studiosity found that 78 per cent of Canadian students used AI to study or complete their school work.

The Pan-Canadian Report on Digital Learning also found that the share of educators who reported generative AI use in student learning activities rose to 41 per cent last year, from 12 per cent in 2023.

McGill University’s associate provost, Christopher Buddle, said the school has integrated digital AI assistant Microsoft Copilot into its systems to help staff, students and faculty with their work. The tool can be used to make a first draft of a letter, summarize online content or to organize day-to-day tasks.


“People use it for all kind of things and from what I understand it’s being used effectively and used quite a lot by our university community,” he said.

Buddle said offering generative AI tools through the school’s IT infrastructure ensures they are vetted properly to address privacy risks and ensure data protection.

“We’ve not approached it through the idea of banning (AI) or saying ‘no.’ In fact, what we’d rather see and what we support instructors doing and students doing is effective use of generative AI in teaching and learning,” he said.

Buddle said the university has left it up to instructors to decide how much AI use they want to allow in their classes.

“We don’t tell instructors what to do or not to do. We provide them tools and give them the principles and let them make the best decisions for their course because it’s so discipline specific,” he said.

Some professors, for example, have their students use generative AI to create a first draft of a written assignment and then the students evaluate the outcome, Buddle said.

The school is launching an online module for students and instructors this fall to help them navigate and understand the benefits and risks of AI in education, he added.

“Generative AI is pervasive. It’s everywhere and it will remain that way going forward,” Buddle said.

University of Toronto professor Susan McCahan, who led the school’s task force on AI, said the institution is integrating AI tools but is also taking a balanced approach that allows instructors to explore the technology while thinking critically about its value in education.

“We have a wide range of opinions on AI and the use of AI in classrooms and in teaching and in learning,” she said. “And we want to support faculty who are interested in innovating and using it in their classes. We want to support faculty who find that it is not useful for them or for their students.”

McCahan said the university has used AI systems for years, including for auditing financial reports and helping students find mental health resources. More recently, the school also made Microsoft Copilot available to all faculty, students and staff.


“They can use it in any way they wish. And because it’s within our system, you can do things like open a library article in the library, and ask Copilot to summarize it,” she said. “It doesn’t share that data back with Microsoft ... so you can put in more sensitive information into that.”

McCahan said the university has also made ChatGPT Edu licences available to students and staff who would like to use the tool with added security protection. The school has been experimenting with AI tutors and will expand that in the coming school year with Cogniti, an open-source system developed at the University of Sydney in Australia, she added.

At York University, the goal is “to take a thoughtful and principled approach to this modern technology,” deputy spokesperson Yanni Dagonas said.

“Transparency works to demystify AI, helping our community better understand its impact and potential,” Dagonas said.

The university has created an online AI hub with a dedicated section for instructors, who are discouraged from using AI detection tools when evaluating students’ work because many such tools are considered unreliable and raise concerns about data security and confidentiality.

Despite the “huge uptake” in students’ generative AI use, many professors are still worried about bias in AI models, ethical and privacy issues, as well as the technology’s environmental impact, said Mohammed Estaiteyeh, an assistant professor of education at Brock University.

“Students are kind of using (AI) to save time. They think it is more efficient for various reasons,” he said.

But when it comes to instructors, “it depends on your domain. It depends on your technological expertise. It depends on your stance towards those technologies,” he said.

“Many instructors have concerns.”

Estaiteyeh said most Canadian universities are providing guidance to instructors on the use of AI in their classes but leaving much of it to their discretion.

“For example, (at) Brock, we don’t have very strict guidelines in terms of students can do this or that. It’s up to the instructor to decide in relation to the course, in relation to the materials, if they want to allow it or not,” he said.

“We are still navigating the consequences, we’re still not 100 per cent sure about the benefits and the risks. A blanket, a one-size-fits-all approach may not suit well.”

Estaiteyeh said instructors and students need AI training and resources on top of guidance to reduce the risk of relying too much on the technology.

“If you offload all the skills to the AI tools then you’re not really acquiring significant skills throughout your three- or four-year degree at the university,” he said.

“Those tools have been in place for around two years only. And it’s too early for us to claim that students have already grasped or acquired the skills on how to use them.”

The Canadian Alliance of Student Associations said AI technologies must complement the learning experience and universities should discourage the use of AI for evaluations and screening of student work.

The alliance said in a report released earlier this year that research has shown untested AI systems can introduce “bias and discriminatory practices” against certain student groups.

“For instance, AI-powered plagiarism detection tools have been found to disproportionately misclassify the work of non-native English speakers as AI-generated or plagiarized,” the report said.

The alliance has been calling for “clear ethical and regulatory guidelines” governing the use of generative AI in post-secondary education.

This report by The Canadian Press was first published Aug. 19, 2025.

Maan Alhmidi, The Canadian Press


AI company revolutionizes energy consumption and management


By Joshua Santos
 August 14, 2025 

Emerging developments in artificial intelligence (AI) have prompted a technology company to launch a subsidiary focused on optimizing energy use in commercial and residential buildings to make them more sustainable.

Trane Technologies launched the Montreal-based BrainBox AI Lab to find ways to reduce energy consumption by analyzing data from heating, ventilation and air conditioning (HVAC) systems to understand, and then predict, how a building will operate throughout the day.

“It’s a bit like the movie Back to the Future,” Jean-Simon Venne, BrainBox AI’s president and founder, told BNN Bloomberg in a Wednesday interview. “We have the capability to go in the future and see what’s not working, and then we’re coming back in the present, and we’re changing the present to build a better future. So that’s how we use AI.”

The AI predicts building temperatures and energy usage, enabling real-time optimization to reduce energy consumption. The AI system acts as a virtual engineer, enhancing productivity by predicting and solving equipment issues before they arise.

“What we’re doing is we’re taking that data and we’re using the capability of AI to give us the prediction of what will be happening in your building over the next few hours,” said Venne. “Having this prediction we then know exactly (when) it’s going to be a bit too hot, a bit too cold, and how much energy you’re going to be spending to maintain the desired temperature in your building. That prediction is then used to optimize the building in real time. We could move from reactive control to a preemptive control and shave up to 25 per cent of energy saving and generate a lot of emission reduction.”
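As a rough illustration of that shift from reactive to preemptive control, the sketch below forecasts the next few hours of indoor temperature and schedules pre-cooling or pre-heating before the forecast leaves the comfort band, rather than reacting after it does. The comfort band, toy forecast and hourly heat-gain figures are invented assumptions, not BrainBox AI's actual models, which are not public.

    # Minimal sketch of preemptive HVAC control. All numbers below are
    # illustrative assumptions, not BrainBox AI's actual models.

    COMFORT_LOW, COMFORT_HIGH = 20.0, 24.0  # degrees Celsius

    def forecast_temps(current_temp, hourly_gain):
        """Toy forecast: indoor temperature drifts by the heat gained each hour."""
        temps, t = [], current_temp
        for gain in hourly_gain:
            t += gain                 # heat from sun, equipment, machines, etc.
            temps.append(t)
        return temps

    def preemptive_schedule(current_temp, hourly_gain):
        """Act before the forecast leaves the comfort band, not after."""
        actions = []
        for hour, temp in enumerate(forecast_temps(current_temp, hourly_gain)):
            if temp > COMFORT_HIGH:
                actions.append((hour, "pre-cool"))   # cool ahead of the peak
            elif temp < COMFORT_LOW:
                actions.append((hour, "pre-heat"))
            else:
                actions.append((hour, "coast"))      # spend no energy
        return actions

    # Example: afternoon sun warms the space about 0.8 C per hour, then fades.
    print(preemptive_schedule(22.0, [0.8, 0.8, 0.6, 0.2, -0.3, -0.5]))

The point of the sketch is the ordering: the control decision is made from the forecast, hours ahead of the problem, which is where the claimed savings come from.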

A multidisciplinary team of technical experts, including software engineers, data scientists, AI researchers, machine learning developers and AI engineers, will continue to advance autonomous control systems, predictive models and algorithms aimed at reducing emissions through smarter energy use.

A room or studio, for example, can heat up from sunlight beaming through windows or from equipment, such as cameras or laundry machines, left running for extended periods during peak times. The AI can draw on data from past scenarios in which a room ran too hot and find efficiencies that keep it cool in the future while lowering energy consumption. That can help reduce the greenhouse gas (GHG) emissions produced by buildings.

“When you’re trying to save emissions, you want to basically, of course, save that kilowatt or the quantity of cubic metres of gas that you’re consuming in any given time,” said Venne. “You also want to know how is the kilowatt manufactured? Is the kilowatt that you’re consuming right now in your building coming from a windmill or a coal power generation plant? That information is computed in the AI, and we basically know when we should save that kilowatt.”

“Of course, the AI is making sure that we’re saving it at a time where the electron and the kilowatt are not so green. To say like dirty instead of being green. We optimize the money that that kilowatt is costing you by shaving some and we’re also making sure to do it at the time of day where the kilowatt is dirty instead of green. So, we’re having that double effect. So we’re winning on both fronts.”
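One way to picture that “double effect” is the toy sketch below: score each hour in the optimization window by how dirty and how expensive its electricity is, then target load shaving at the hours that score highest. The hourly carbon intensities and prices are invented figures, not the company’s implementation; a real system would pull them from a grid-data feed.

    # Toy illustration of carbon-aware load shaving. The hourly grid
    # carbon intensities (gCO2/kWh) and prices are invented assumptions.

    hours = [
        # (hour of day, grid carbon intensity, price per kWh)
        (14, 50, 0.08),   # hydro/wind-heavy hour: a "green" kilowatt
        (15, 120, 0.10),
        (16, 300, 0.14),
        (17, 450, 0.18),  # peaker-plant hour: a "dirty" kilowatt
        (18, 380, 0.16),
    ]

    def shave_priority(hour_data):
        """Shave load first where the kilowatt is both dirty and expensive."""
        _, carbon, price = hour_data
        return carbon * price  # joint weight of emissions and cost

    # Hours ranked by shaving priority: 17:00 tops the list on both fronts.
    for hour, carbon, price in sorted(hours, key=shave_priority, reverse=True):
        print(f"{hour}:00  {carbon:>3} gCO2/kWh  ${price:.2f}/kWh")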

Canada has over 15 million residential buildings and over 480,000 commercial and institutional buildings, including offices, retail and warehouses, according to a report from Environment and Natural Resources Canada. Canada’s homes and buildings account for 13 per cent of GHG emissions, due to the combustion of fossil fuels for space and water heating.

Electricity use for cooling, lighting and appliances brings the total to 18 per cent. The buildings sector includes varied businesses, many of which are small and medium-sized enterprises, spanning home and building construction, high-efficiency equipment and appliance manufacturing, sales and installation, and energy-use management.


Joshua Santos

Journalist, BNNBloomberg.ca



Lawyer apologizes for AI-generated errors in murder case

By The Associated Press
 August 15, 2025 

People leave the Supreme Court of Victoria in Melbourne, on Friday, Aug. 15, 2025. (AP Photo/Rod McGuirk)

MELBOURNE, Australia — A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence.

The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world.

Defence lawyer Rishi Nathwani, who holds the prestigious legal title of King’s Counsel, took “full responsibility” for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday.

“We are deeply sorry and embarrassed for what occurred,” Nathwani told Justice James Elliott on Wednesday, on behalf of the defence team.

The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani’s client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.

“At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,” Elliott told lawyers on Thursday.

“The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice,” Elliott added.

The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court.

The errors were discovered by Elliott’s associates, who couldn’t find the cases and requested that defence lawyers provide copies.

The lawyers admitted the citations “do not exist” and that the submission contained “fictitious quotes,” court documents say.

The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct.

The submissions were also sent to prosecutor Daniel Porceddu, who didn’t check their accuracy.

The judge noted that the Supreme Court released guidelines last year for how lawyers use AI.

“It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,” Elliott said.

The court documents do not identify the generative artificial intelligence system used by the lawyers.

In a comparable case in the United States in 2023, a federal judge imposed US$5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.


Judge P. Kevin Castel said the lawyers acted in bad faith, but he credited their apologies and the remedial steps they took in explaining why harsher sanctions were not necessary to ensure that they, or others, won’t again let artificial intelligence tools prompt them to produce fake legal history in their arguments.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn’t realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.

British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the “most egregious cases,” perverting the course of justice, which carries a maximum sentence of life in prison.

Rod McGuirk, The Associated Press
