“This is indeed a very important — and emerging — topic,” Stéphanie Bachand, the top court’s executive legal officer and chief of staff to Chief Justice of Canada Richard Wagner, said in response to questions from Law360 Canada. “The Supreme Court of Canada is currently considering the matter, in order to decide whether to move forward with a directive or policy on the use of AI.”
Bachand confirmed that the issues under consideration include what internal policy might govern AI use by the apex court’s judges and staff.
The first known Canadian courts to publicly address AI use by litigants and the bar were the Manitoba Court of King’s Bench, which broke new ground nationally on June 23 with a directive requiring that any AI use in court materials be disclosed to the court, and, three days later, the Yukon Supreme Court, which issued a somewhat more broadly worded “practice direction” titled “Use of Artificial Intelligence Tools.”
These cutting-edge legal developments could be a harbinger of an emerging trend in Canada toward judicial regulation of AI use in court submissions by counsel and litigants.
The move is all the more significant for the bar because law societies have yet to issue specific guidance or express rules about lawyers’ use of AI-assisted submissions and documents in court — arguably a regulatory gap, if not a vacuum, for law societies to fill.
“Artificial intelligence is rapidly evolving,” Yukon Supreme Court Justice Suzanne Duncan wrote in a two-paragraph practice direction June 26. “Cases in other jurisdictions have arisen where it has been used for legal research or submissions in court,” she said. “There are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence. As a result if any counsel or party relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter in any form before the court, they must advise the court of the tool used and for what purpose.”
In a one-paragraph directive, titled “Use of artificial intelligence in court submissions,” Manitoba Court of King’s Bench Chief Justice Glenn Joyal wrote, in part: “While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence. To address these concerns, when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.”
Litigator Eugene Meehan of Ottawa’s Supreme Advocacy LLP, a leading Supreme Court of Canada agent and the top court’s former executive legal officer, questioned whether courts should issue directives at this time related to litigants’ and lawyers’ use of AI in preparing court submissions. He also called law societies’ silence on the issue “troubling.”
“These AI practice directives appear premature and overly general to be of great assistance in terms of actually providing direction to practitioners in the field/metaverse,” he suggested. “Some degree of AI or machine-learning may already be present in a range of commonly used tools, and it is unclear when and what needs to be disclosed.”
By way of example, Meehan said some Supreme Advocacy LLP lawyers use Grammarly, “a spellcheck on steroids. Grammarly employs AI to improve the tool’s suggestions on making one’s writing more effective. Do we have to advise the court that this type of AI tool was used? Spellcheck, that’s AI too.”
“Perhaps these practice directives can be interpreted as a call to push ‘pause,’ and ensure that Canadian society and Canadian jurists reach an appropriate consensus about the reliability and regulation of AI in the justice system?” he asked. “Until the directives on AI are more clear, is there some risk they cause confusion and do more harm than good?”
Meehan wondered whether the initial AI directives “are a reaction to the stories out of the U.S. of lawyers relying too heavily on ChatGPT for research, to the point where they use cases that do not exist? If that is the particular mischief that is being targeted here, the directives on AI can say so.”
Two lawyers faced a sanctions hearing before a New York judge last month after the court discovered half a dozen “bogus,” i.e. non-existent, precedents the law firm had cited in a personal injury case, based on legal research done with ChatGPT.
The litigator, who had 30 years at the bar and was responsible for the legal research, told the court he didn’t know ChatGPT could produce fake content, and pledged not to use AI for legal research in the future without verifying the authenticity of its output.
If you have any information, story ideas or news tips for Law360 Canada, please contact Cristin Schmitz at cristin.schmitz@lexisnexis.ca or call 613-820-2794.