Law360 Canada asked University of Ottawa law professor Amy Salyzyn, board chair of the Canadian Association for Legal Ethics, and University of Waterloo computer science professor Maura Grossman, a Buffalo, N.Y., attorney who teaches “Legal Values and Artificial Intelligence” at Osgoode Hall Law School, to reflect on Canadian courts’ initial efforts to oversee lawyers’ and litigants’ reliance on AI in legal proceedings, following the Nov. 30, 2022, public debut of OpenAI’s chatbot ChatGPT (“Chat Generative Pre-trained Transformer”).
Spurred by a high-profile American case last month, in which a federal judge fined two New York attorneys for filing, without verifying, a ChatGPT-generated legal brief that cited non-existent precedents, and by a broader concern about fake law corrupting the Canadian justice system, the chief justices of the Manitoba Court of King’s Bench and the Yukon Supreme Court separately issued groundbreaking practice directions in June. The directions require parties and counsel to disclose to the trial courts their use of, or reliance on, AI in court proceedings or filings, and to explain to the bench for what purpose, or how, the AI was used.
Meanwhile, the Supreme Court of Canada is mulling whether to issue a practice direction for AI use in its proceedings, while the Canadian Judicial Council is developing guidelines “in the next few months” to assist all the country’s superior trial and appellate courts to deal with the thorny issue of generative AI (GenAI) use in their courtrooms, Law360 Canada has confirmed.
Amy Salyzyn, University of Ottawa
But Salyzyn also cautioned against courts venturing further down the path of issuing generalized disclosure directives requiring lawyers and litigants to reveal whether and how they used AI in preparing court submissions. “It certainly makes sense they’re going to have some concerns given some high-profile examples of fake cases being presented to the courts in the U.S.,” she acknowledged. “But from my perspective, I’m not sure actually that this particular approach is best or productive, and I do have some concerns with it.”
Salyzyn said the two known court practice directions are vague about the types of AI tools captured and about what must be disclosed, as well as being overbroad in requiring disclosure to the court of reliance on “artificial intelligence” and how it was used.
“Artificial intelligence” is a term of art, Salyzyn explained, so people don’t always agree what falls under that umbrella. A threshold question emerges: “What types of technologies is the practice direction capturing?”
Salyzyn noted that under some commonly accepted definitions of AI, many regularly used legal research databases would be included, as would programs to assist in writing and correcting grammar. Given the nature and timing of the two practice directions from the trial courts, they appear to be concerned about ChatGPT, but the practice directions aren’t restricted to this, she said.
Salyzyn said it is likely that generative AI employing large language models (like OpenAI’s ChatGPT and Google’s Bard) will eventually be integrated into mainstream tools such as word processing and email. “So it does seem that this practice direction is kind of vague and overinclusive, capturing a lot of things that may not be the things the courts [are] actually concerned with.”
Salyzyn questioned what the Manitoba and Yukon courts had in mind by requiring lawyers and litigants to reveal how they used a certain AI tool. Does this mean they must explain how the tool works and what it did? she asked.
“Even more fundamentally, it’s not clear to me what problem this is trying to get at, so if the concern is made-up cases ... why not simply go for the court just checking any citations that submissions rely on?”
Salyzyn said courts have always had to vet external inputs in order to guard against errors or misstatements by the parties and counsel. “With or without this practice directive, a court is always going to have an obligation of due diligence to make sure the cases that it integrates into its judgments are accurate and accurately presented.”
She also questioned why a court should be delving into who or what contributed to the language, authorities or graphics used in court submissions. “I think there’s a question of overreach in the sense that, on what theory is it the courts’ business to inquire about how the language of a submission was generated?”
Salyzyn acknowledged that courts are likely concerned about the prospect of being flooded with AI-generated material from self-represented litigants. But fundamentally, that concern and the type of problem aren’t new, she remarked. “Courts have, for a long time, had to grapple with a wide variety and quality of submissions from self-represented litigants,” including from vexatious litigants and parties who make pseudo-legal court submissions. “It’s a real challenge, and one that will continue, and this may be a new variation on it,” she said. “The courts will have to manage. Hopefully, we do come to a point where we get even higher quality tools that can minimize some of these downsides and can really provide a benefit to self-represented litigants in making submissions that are helpful to them and to the court.”
Salyzyn advised that Canadian courts should continue to educate themselves about AI concepts and tools, as well as keep “a watching brief” on what concerns emerge and what best practices courts are following.
She said law societies can also help by educating the bar on AI — including possibly by adding express AI-directed commentaries to the rules of professional conduct, elaborating, for example, on lawyers’ professional obligation to be technologically competent in serving their clients. Guidance on best practices that lawyers can follow would also assist.
Maura Grossman, University of Waterloo
“We know generative AI is prone to ‘hallucinations,’” Grossman remarked. “Well, maybe the court is entitled to be warned,” she observed. “I think the courts are trying to protect themselves because they can’t be expected to check every single citation of every ... case that’s submitted to them in a brief.”
On the other hand, very generally worded court disclosure directives on AI use that are vague and overbroad could discourage people from using AI tools that would be helpful to them.
By way of example, Grammarly may be used by non-English speakers or native English speakers whose grammar is imperfect, she said. “Well, that’s AI — whenever you regulate, you have to think about the unintended consequences.”
As well, will court disclosure directives on AI use “now stop people from using these [AI] tools, because they don’t want to have to disclose what might be considered confidential information by some?”
The key, Grossman suggested, “is coming up with the right balance of not preventing people who can’t afford a lawyer from getting access to court but also avoiding the problem that happened in the United States where a lawyer used the generative AI to generate a brief that included six or seven cases that didn’t exist.”
She pointed out that lawyers are already obligated to be duly diligent in ensuring that any court documents they sign are accurate.
It would also be “perfectly reasonable” for courts to issue practice notices underscoring that counsel and litigants are obliged to verify the accuracy of their court submissions, including those which rely on GenAI — rather than issuing AI disclosure directives, she suggested.
Since lawyers are obliged to practise competently, which includes keeping abreast of relevant technological developments, Grossman said a Canadian lawyer who misuses AI in court could find themselves not only subject to sanctions from the bench, but also facing discipline from their law society.
“So I think you’re going to see more movement towards [AI] regulation over time, but I think in this particular legal area, it would mostly be through the courts, or more likely, through the professional organizations.”
Grossman said what keeps her up at night about AI use in court is not that some litigants or counsel will negligently rely on AI-generated court submissions containing non-existent citations, overruled precedents or other legal misstatements.
“Mistakes happen, so mistakes are going to happen here,” she observed. “But for me the bigger-picture ... question is: how are courts going to manage evidence?”
Human judges and juries rely on their eyes, ears and other senses to assess evidence, but “we are quickly moving into a world where we’re no longer going to be able to do that,” she said.
Citing, by way of example, the widely circulated AI-generated photo of Pope Francis in a fashionable white puffer jacket, she said “we don’t really know whether that’s him or not.”
AI manipulation of voice clips can also make people seem to be saying something they never said.
“That’s the world we’re moving into with deepfakes and this GenAI, where we’re not going to be able to tell what’s real and what’s not real,” Grossman warned. “It poses the problem of fake evidence, but it also poses the problem [that] everything becomes suspect — even when it’s real.”
Grossman and co-authors, including a recently retired U.S. District Court judge, have written a scholarly article, to be published next October in the Duke Law & Technology Review, titled “The GPTJudge: Justice in a Generative AI World.”
Their wide-ranging paper explores evidentiary issues that the bench and bar must address to determine whether actual or deepfake GenAI output should be admitted into evidence in civil and criminal trials, including offering “practical step-by-step recommendations” for courts and lawyers to follow in meeting new evidentiary issues, such as challenging evidence that is AI-generated or suspected to be AI-generated.
Grossman suggests that, in the era of GenAI and deepfakes, courts may need to revisit the current, rather low standard for admitting evidence (while leaving evidentiary weight for the judge or jury to determine), particularly in criminal cases where a person’s liberty is at stake.
“I think we have to be very, very careful,” Grossman advised. “We need to have a stronger gatekeeping role. Because once somebody sees [something] or hears it, it’s very compelling. ... The audio and the video can have a tremendous impact on people’s perception and memory. Even if you say, ‘Be careful,’ people see it, and they say, ‘How could this not be real? I’m seeing it.’”
Grossman’s article “The GPTJudge” also: examines the impact on substantive IP law of machine-generated content; raises concerns about possible skyrocketing litigation costs as parties have to hire forensic experts to address AI-generated evidence; and asks whether GenAI “will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise.”
“There is no doubt” that GenAI will “revolutionize many fields, not the least of which will be the legal and justice systems,” Grossman and her co-authors predict. “Generating fake but believable text, audio and video of ordinary people spouting lies, misinformation, or defamatory content, committing crimes, or breaking the law will become feasible for just about any person with a working computer,” they write. “Judges themselves will have to sort through AI-generated pleadings and arguments, including perhaps even using an AI clerk to filter out or respond to junk claims or imaginary citations (if and when this becomes possible).”
In addition, “judges may eventually join the revolution, using new GenAI systems to help them decide their cases or draft their opinions more effectively and efficiently, after problems of inaccuracy and bias are resolved.”
They speculate that judges may even be replaced by AI some day.
Photo of Amy Salyzyn: Courtesy University of Ottawa Faculty of Law-Common Law Section
If you have any information, story ideas or news tips for Law360 Canada, please contact Cristin Schmitz at cristin.schmitz@lexisnexis.ca or call 613-820-2794.