The chief justice, a former Bay Street competition lawyer who assumed the court’s leadership 12 years ago, has identified the Federal Court’s two main challenges as improving access to justice and modernizing the court to keep pace with technological change.
To those ends, on Dec. 20, 2023, his court became the first in Canada (to the best of our knowledge) to publicly disclose its still-developing policy for handling internal court use of generative AI, shedding light on an issue about which the judicial branch has been mostly silent since ChatGPT’s public emergence on Nov. 30, 2022.
Notably, in doing so, the Federal Court ruled out, for now, using AI to help its judges decide cases, but left the door open to doing so in the future, following public consultation.
Federal Court Chief Justice Paul Crampton
The notice requires parties and counsel to alert the court and each other, via a signed declaration, when a document created or generated by AI is filed with the court.
In 2023, at least nine Canadian courts issued directives and cautionary guidance to counsel and litigants about the use of AI in court proceedings. Recent directives have adopted some or all of the same guiding principles, using near-identical language (see below), and four of the courts (the Federal Court, the Provincial Court of Nova Scotia, the Yukon Supreme Court and the Manitoba Court of King’s Bench) mandate disclosure to the court of AI usage.
The Federal Court’s companion publications last month are the latest — and appear to be the most extensive — public commentary from a Canadian court on the use of AI in court proceedings.
With respect to the Federal Court’s own internal AI usage, the Dec. 20 document titled “Interim principles and guidelines on the court’s use of artificial intelligence” reveals that the Ottawa-based itinerant court is investigating and will proceed to “pilot” what it says are “potential uses of AI for internal administrative purposes,” including a plan to use AI to translate the court’s decisions under the supervision of human language specialists.
The court pledges to follow seven “Principles and Guidelines” governing the potential use of AI by Federal Court judges and their law clerks, including “accountability,” such that “the court will be fully accountable to the public for any potential use of AI in its decision-making function.”
The other six “principles” address: respect for fundamental rights; non-discrimination; accuracy; transparency; cybersecurity; and “human in the loop,” i.e. ensuring that judges and law clerks “are aware of the need to verify the results of any AI-generated outputs that they may be inclined to use in their work.”
With respect to the “potential use of AI” by judges and their law clerks, the court pledges it “will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultation. This includes the court’s determination of issues raised by the parties, as reflected in its reasons for judgment and its reasons for order, or any other decision made by the court in a proceeding.”
The Federal Court acknowledges known risks of AI, including confabulation, “deep fakes,” and possible bias in AI programs, underlying algorithms and data sets. It also acknowledges potential negative impacts on judicial independence and public confidence in the administration of justice from some uses of AI, including automated or AI-assisted adjudication.
The Federal Court commits to “exercise the utmost vigilance to ensure that any use of AI by the court does not encroach upon its decision-making function.”
The court also recognizes that “AI can improve the efficiency and fairness of the legal system,” for instance by assisting with such tasks as “analyzing large amounts of raw data, aiding in legal research and performing administrative tasks. This can save time and reduce workload for judges and court staff, just as it can for lawyers.”
Potential benefits for all stakeholders in the justice system also include streamlining aspects of case management, improving the accuracy and thoroughness of legal research, helping self-represented litigants navigate court procedures and supporting alternative dispute resolution, the court says.
The Federal Court emphasizes it “will continue to consult experts and stakeholders as its understanding of AI evolves.”
(In that regard, the court appears to have revised a more sweeping earlier draft “practice direction on AI guidance” after feedback, including from the Canadian Bar Association, which said last November that the Federal Court’s draft definition of AI was overbroad; that mandating disclosure of AI usage, as well as how it is used, creates privilege concerns and risks negatively affecting the court’s scrutiny and treatment of a party’s submissions; and that requiring a submitting party to certify their verification of all AI-assisted text and citations “is unnecessary,” given that counsel have professional obligations to use tactics that are legal, honest and respectful of the court. The CBA also urged wider and longer consultation.
However, the office of Chief Justice Crampton said in a statement to Law360 Canada that the Federal Court’s notice to the public and the profession on the use of AI in court proceedings “reflects significant work by the court’s technology committee over the past two years, as well as feedback received during a consultation process with experts and stakeholders. The draft notice published in a [Nov. 24, 2023] letter ... by the CBA is outdated and does not reflect the court’s approach to artificial intelligence.”)
The Federal Court’s Dec. 20, 2023 notice to the parties and the profession states that parties and counsel are expected to inform the court and each other “if they have used AI to create or generate new content in preparing a document filed with the court” and stipulates that the disclosure, via a “declaration,” must be made in the first paragraph of the document’s text.
This could be a statement, for example, that “AI was used to generate content in this document.”
The court said the disclosure requirement applies to “all documents” that are submitted to the court and prepared for the purpose of litigation (not including certified tribunal records submitted by third-party decision-makers), including memoranda of fact and law and written representations.
However, a declaration is required only for “certain forms of AI, defined as a computer system capable of generating new content and independently creating or generating information or documents, usually based on prompts or information provided to the system.”
The Federal Court’s notice says a declaration is not required for AI “that only follows pre-set instructions, including programs such as system automation, voice recognition or document editing.”
The court acknowledges that, as officers of the court, counsel are obliged to be duly diligent with respect to the materials they file in court. But to “ensure fair treatment” of both represented and self-represented litigants (the latter do not have the same due diligence obligations as lawyers), the court says the notice on AI-related responsibilities applies to parties as well as counsel.
The court notes that emerging technologies often bring both opportunities and challenges. “Significant concerns have recently been raised regarding the use of AI in court proceedings, including in relation to ‘deepfakes,’ the potential fabrication of legal authorities through AI, and the use of generative decision-making tools by government officials.”
Human verification is necessary, the court says. “To ensure accuracy and trustworthiness, it is essential to check documents and material generated by AI,” the court instructs. Moreover, “when referring to jurisprudence, statutes, policies or commentaries in documents submitted to the court, it is crucial to use only well-recognized and reliable sources,” including “official court websites, commonly referenced commercial publishers, or trusted public services such as CanLII.”
In 2023, other Canadian courts also issued notices and directions about AI usage in their proceedings:
- The Nova Scotia Supreme Court does not mandate disclosure to the court of AI use but “urges practitioners and litigants to exercise caution when referencing legal authorities or analysis derived from generative AI in their submissions” and expects them to verify submitted material by checking with reliable legal databases.
- The Manitoba Court of King’s Bench, on June 23, 2023, issued the first Canadian practice direction on the use of AI in court submissions, stating that “when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.”
- The Yukon Supreme Court issued, on June 26, 2023, a two-paragraph practice direction on the use of “AI tools in court submissions,” directing that “if any counsel or party relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter and in any form before the court, they must advise the court of the tool used and for what purpose.”
- All three court levels in Alberta issued a joint notice to the profession Oct. 6, 2023 on “Ensuring the integrity of court submissions when using large language models.” The tri-court notice does not mandate disclosure of AI usage in court proceedings, but: urges “practitioners and litigants to exercise caution when referencing legal authorities or analysis derived from large language models [e.g. ChatGPT] in their submissions;” requires that “parties rely exclusively on authoritative sources such as official court websites, commonly referenced commercial publishers, or well-established public services such as CanLII” in all references to case law, statutes or commentary in representations to the courts; and requires that “any AI-generated submissions must be verified with meaningful human control.”
- The Quebec Superior Court’s Oct. 24, 2023 notice to the profession and public on the “Integrity of Court Submissions When Using Large Language Models” is similar to Alberta’s notice, including in its emphasis on keeping a “Human in the loop.” That is, “in the interest of maintaining the highest standards of accuracy and authenticity, any AI-generated submissions must be verified with meaningful human control. Verification can be achieved through cross-referencing with reliable legal databases, ensuring that the citations and their content hold up to scrutiny. This accords with the longstanding practice of legal professionals.” Like the Federal Court and Alberta’s courts, the Quebec Superior Court said it “recognizes that emerging technologies often bring both opportunities and challenges, and the legal community must adapt accordingly. Therefore, we encourage ongoing discussions and collaborations to navigate these complexities effectively.”
- The Provincial Court of Nova Scotia’s Oct. 27, 2023 notice on “Use of AI and protecting the integrity of court submissions in provincial court” mandates that “any party wishing to rely on materials that were generated with the use of artificial intelligence must articulate how the artificial intelligence was used.” The court “encourages” counsel and litigants “to exercise caution when relying on reasoning that was ascertained from artificial intelligence applications. Moreover, it is expected that all written and oral submissions referencing case law, statutes or commentary will be limited to accredited and established legal databases.”
Photo of Chief Justice Paul Crampton by Andrew Balfour Photography
If you have any information, story ideas or news tips for Law360 Canada, please contact Cristin Schmitz at cristin.schmitz@lexisnexis.ca or call 613-820-2794.