Marco Falco
AI has the potential to promote access to justice and to make dispute resolution more efficient. Without proper oversight, however, the use of AI in litigation is fraught with risk.
Litigators who rely on AI, particularly for the more demanding tasks of written advocacy, could inadvertently mislead the court by misrepresenting the state of the law or the evidence in a case. The capacity of AI to “hallucinate” during the preparation of a case is real, and the effects on the administration of justice could be catastrophic.
In this context, the need for guidelines in Ontario on the use of AI by lawyers, administrative decision-makers and judges has never been more pressing.
Delegation of administrative decision-making
One of the central tenets of administrative law is that access to justice is improved through the state’s delegation of decision-making authority to tribunals and other administrative bodies. In practice, however, there is an incongruity between aspiration and achievement. Canadian tribunals and administrative decision-makers, like courts, face heavy workloads and backlogs.
AI has the potential to alleviate the burden on the administrative state, by, among other things, assisting in the making of tribunal decisions. But how far can a tribunal go in relying on AI to assume its adjudicative function?
A recent decision of the Federal Court, Haghshenas v. Canada (Minister of Citizenship and Immigration), 2023 FC 464, sheds light on how Canadian courts may approach issues of fairness relating to the use of AI in administrative adjudication.
In Haghshenas, the Federal Court upheld a decision by a Canadian immigration officer denying the applicant a work permit. The decision, it appears, was made in part with the assistance of artificial intelligence software employed by the federal government and known as “Chinook.” In assessing both the reasonableness and fairness of the officer’s decision, the court upheld the use of Chinook in this context:
… the Applicant submits that the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree that the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness. …
The court further rejected the argument that the decision was unfair because it is unknown how machine learning “has replaced human input and how it affects [immigration] application outcomes.”
To date, the message appears to be that courts will tolerate AI as an aid to administrative decision-making, so long as the ultimate decision is made by a human. One can envision new circumstances, however, where the problematic aspects of machine learning, including biases and misinformation, could taint a tribunal’s reasoning. The issues raised by Haghshenas are ripe for further judicial consideration.
Potential to impair the proper administration of justice
Perhaps the most sensational AI mishap to date is a litigator’s reliance on ChatGPT in Mata v. Avianca, Inc., 2023 WL 4114965, a decision of the United States District Court for the Southern District of New York.
In Mata, the court imposed a joint penalty of $5,000 on the respondents and their lawyers for making written submissions authored by ChatGPT that included “non-existent judicial opinions with fake quotes and citations created by artificial intelligence.” Respondents’ counsel exacerbated the situation by standing “by the fake opinions after judicial orders called their existence into question.”
Most notably, the court identified a number of harms to the administration of justice arising from the submission to the court of what amounted to fake legal decisions:
Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.
While the court in Mata was careful to address lawyers’ primary gatekeeping function when relying on machine learning for assistance, the perils of a lawyer’s dependence on a language-processing tool like ChatGPT are so numerous that they may well exceed a litigator’s ability to control them.
And busy litigators who may rely on machine learning as an efficient “shortcut” do so at the risk of undermining their professional reputations and the proper administration of justice. The mere existence of artificially produced “imposter” decisions threatens the integrity of judicial reasoning itself.
Manitoba and Yukon courts’ position on AI in litigation
Apart from expressing general concern about AI in their jurisprudence, Canadian courts are only beginning to establish guidelines and parameters for the use of machine learning in civil proceedings.
In Manitoba, the Court of King’s Bench has issued a Practice Direction, dated June 23, 2023, which addresses the questionable accuracy and reliability of artificial intelligence. The direction mandates that where AI has been employed “in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.”
Three days later, on June 26, 2023, the Supreme Court of Yukon echoed similar sentiments in its own AI Practice Direction. The court requires that any counsel or party that relies on AI, including ChatGPT or any other AI platform, for legal research or submissions “in any matter and in any form before the court” must advise the court “of the tool used and for what purpose.”
While practice directions are by no means a comprehensive tool for addressing the potential problems AI may cause, they are very much a start.
Requiring counsel to disclose that machine learning was used in the development of arguments, or perhaps even the construction of evidence, at the very least flags the inherent inaccuracies and biases that could potentially bear on machine-based advocacy.
The way forward
While the bulk of ink spilled on AI in civil proceedings to date has focused on whether artificial intelligence platforms such as ChatGPT will replace litigators, that concern seems myopic.
As machine learning becomes commonplace, its effect on the administration of justice presents a serious challenge to courts and lawyers alike. As set out above, the use of AI in administrative and civil proceedings raises a number of concerns, including:
- The accuracy and reliability of evidence, materials and submissions drafted in part with the assistance of a machine — one that relies heavily on the Internet as the source of its knowledge and has the capacity to experience “hallucinations” in its reasoning;
- The potential for a lawyer or party to inadvertently mislead the court or tribunal;
- The biases inherent in artificial intelligence and the effect this could have on decision-making and outcomes in the dispute resolution process; and
- The cost of reliance on AI to the training and development of a future generation of litigators.
None of these issues has a simple answer, and there is no panacea to address them.
The need for guidance and instruction by the courts, with the potential of regulating the use of AI through the implementation of rules and directions, has never been more pressing.
Marco P. Falco is a partner and litigator at Torkin Manes LLP in Toronto who focuses on civil appellate litigation and judicial review.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, LexisNexis Canada, Law360 Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.