Artificial intelligence, including ChatGPT and other generative AI, is “here, but it’s not being regulated and we don’t actually know all of the AI tools that are out there and being used,” Yukon Supreme Court Chief Justice Suzanne Duncan told Law360 Canada.
“The main focus of our concern,” the chief justice explained in an exclusive interview, “is legal research and legal submissions, and for reasons of transparency, we want there to be awareness of all parties involved in [a] case — including the court — that AI has been used and for what purpose, specifically in those areas, and then that creates the ability to assess the reliability and accuracy of those tools that are being used.”
The Yukon Supreme Court also anticipates that self-represented litigants will be using ChatGPT or other AI to help them craft their court submissions, she said.
(Generative AI refers to the chatbot ChatGPT, the writing assistant Grammarly and other algorithm-driven tools used to create new content, including text, audio, code, images, simulations and videos.)
Considerations of accuracy and reliability will inform the Yukon Supreme Court’s scrutiny of AI in the courtroom, Chief Justice Duncan said.
“Certainly, our intention is not to focus on legitimate use of artificial intelligence tools, like Grammarly [or] spell check,” she explained. “Our focus is legal research, legal submissions, transparency of those areas, so that there’s fairness, and also preservation of the law ... and the proper development of the law, so that the rule of law is preserved, and we don’t have the law going off in directions that are incorrect, wrong, improper because of reliance on tools that have been used that are not accurate.”
In her two-paragraph June 26 practice direction, Chief Justice Duncan wrote, in part, that there are legitimate concerns about the reliability and accuracy of information generated through the use of artificial intelligence. Therefore, “if any counsel or party relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter in any form before the court, they must advise the court of the tool used and for what purpose.”
What should lawyers and litigants think about in order to comply with the practice direction?
“Well, for example, if they’ve used ChatGPT or something like ChatGPT in developing their legal submissions to the court, or in conducting their research, all the practice direction is doing is asking them to disclose that to the court — and then it will be up to the judge to decide where to go from there and how to deal with it,” Chief Justice Duncan advised. The judge “may have more questions about it. They may want to probe more. Or ... if it’s an AI tool that they’re familiar with or that is something like Grammarly, they may have no further questions,” the chief justice remarked. “And if it’s something that the judge ... is comfortable with, or the judge feels like it’s reliable, it’s accurate [and] they have no concerns, then ... that will be the end of it.”
The chief justice confirmed that the Yukon superior trial bench intends “at this stage” that any disclosure of AI use in proceedings will be scrutinized and dealt with, as needed, by the individual judges rather than by court officials.
“We just want to make sure that the information is provided so that the other side, whether they’re a lawyer or self-represented party, is aware that this is a tool that has been used ... and then submissions can be made on it, if necessary, and then the judge can decide further how to deal with it,” she explained. “And it may need no submissions — it just is information for the judge and then the judge decides how they’re ... going to handle it.”
Asked whether a judge will be able to reject a submission that was based on AI, the chief justice said “we can’t be that prescriptive at this stage. ... The process would be to provide the information and then the judge decides how to proceed with it from there.”
Could a court impose a sanction, or award a remedy, if it learns that a lawyer has submitted false AI-generated information without exercising due diligence as to its veracity? “That’s something that we haven’t grappled with yet,” the chief justice replied. “It’s early days, and it would really depend on the facts.”
Chief Justice Duncan said her court’s practice direction was not prompted by any particular cases in Yukon, but is rather “anticipatory.”
“It’s a very flexible tool,” she emphasized. “It was worded, deliberately, very generally because of the unknown and the uncertainty [around AI use in court]. It’s also something that can be amended very easily and that can evolve as we learn more.”
Asked where the court gets the authority and jurisdiction to compel disclosure of AI use, and of how lawyers and litigants are using AI in preparing their legal research and submissions — a groundbreaking development — Chief Justice Duncan said courts have the inherent power to act as gatekeepers and control their own process.
“I think it’s the jurisdiction that applies to any of the rules [of court] and the practice directions that provide for ... fairness, transparency for all the parties [and] for the administration of justice, and to ensure that the ... rule of law is preserved and that the law develops in an appropriate way,” she explained.
“It’s the integrity of the administration of justice really that we’re concerned about and making sure that the decisions that we write are not prone to being overturned or not corrupted ... because of inaccurate or false information,” the chief justice elaborated. “I recognize that it may be a controversial step in some people’s minds, but it’s a way to ensure ... transparency and provision of information to preserve the integrity of the [justice] system.”
The oversight or regulation of AI use in legal proceedings — a cutting-edge legal issue — is not simply a matter for courts to determine, she noted. “I think it’s going to have to be a much larger discussion than just a couple of practice directions ... [it’s] really a way to start the discussion,” she said, alluding to a similar June 23, 2023, AI disclosure directive from the Manitoba Court of King’s Bench, which stipulates that “when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.”
The Supreme Court of Canada is also considering whether to issue a practice direction to the bar and litigants with respect to AI use in Supreme Court cases, as well as what internal policies the top court should create around AI use by judges and staff.
“We want everybody to be aware” that generative AI has arrived in court, Chief Justice Duncan said of her court’s novel move. “Really that’s all these practice directions are saying now,” she suggested. “And how we deal with this is going to evolve, and it’s going to be a combination of the courts, maybe the law societies if they decide [to weigh in, and] lawyers. There will have to be a lot more discussion as we learn more,” she advised. “But at this point, because we know [AI] is out there, we need to at least have it out on the table, on the record.”
As for how her court will handle disclosure and internal guidance with respect to the use of generative AI by judges and court staff, Chief Justice Duncan said the Yukon Supreme Court’s judges have not yet “ventured into” using AI.
“It hasn’t become an issue for us, but clearly ... as we learn more, and if there is an appetite [within the court] to use it,” she said, the judges will need to look at internal rules and guidance, including disclosure. “It’s the same issue,” Chief Justice Duncan observed. “It’s preserving the integrity of the [justice] system.”