Duty to give AI reasons: Explainability at work

By Daniel J. Escott

Law360 Canada (February 18, 2025, 11:06 AM EST) --
Procedural fairness is the cornerstone of any legitimate legal system. It ensures that legal proceedings are conducted with transparency, equity and respect for the rule of law. In an era where artificial intelligence (AI) is becoming increasingly integrated into judicial and administrative decision-making, two principles are emerging as non-negotiable requirements of procedural fairness: explainability, and the “human-in-the-loop” (HITL) principle.

Despite the efficiencies that AI brings to legal processes, there is growing recognition that automated decision-making and algorithmic bias present serious risks to the integrity of judicial independence and procedural fairness. AI-generated decisions that lack explainability, due to either their “black box” nature or their lack of meaningful human oversight, risk undermining core principles of procedural fairness by obscuring accountability, inhibiting meaningful review and eroding public confidence in the administration of justice. These principles are not an innovation brought about by AI; rather, they are the latest iteration of long-standing legal requirements that courts must adhere to in order to ensure fair adjudication.

Explainability as a procedural fairness imperative

Explainability, in its simplest form, is the requirement that decisions, whether rendered by humans or AI, must be understandable, justifiable and subject to meaningful review. Within Canadian administrative law, the duty to provide reasons is deeply embedded in the principles of procedural fairness. The Supreme Court of Canada has consistently underscored this requirement, from Baker v. Canada (Minister of Citizenship and Immigration), [1999] 2 S.C.R. 817 to Canada (Minister of Citizenship and Immigration) v. Vavilov, 2019 SCC 65, where it was affirmed that the legitimacy of a decision depends not just on its outcome but on the reasoning that supports it.

AI introduces a challenge to this foundational requirement. Many AI-driven tools operate in ways that are opaque to both users and those affected by their outputs. Machine learning algorithms, particularly those based on deep learning, often lack clear mechanisms to explain how a particular conclusion was reached. This raises an immediate conflict with the procedural fairness doctrine: a decision that cannot be explained is a decision that cannot be properly contested.

As the Canadian Judicial Council’s “Guidelines for the Use of Artificial Intelligence in Canadian Courts” rightly point out, explainability is akin to the existing legal requirement that judges provide reasoned explanations for their decisions. The obligation to provide reasons serves multiple purposes: it allows litigants to understand the rationale behind decisions affecting them, facilitates judicial review and reinforces public confidence in the justice system. If AI is to play any role in decision-making processes, it must comply with these long-standing principles. The implementation of AI in law, therefore, cannot be limited to efficiency gains but must also ensure that AI-generated decisions meet the same standard of transparency and reason-giving as those of human decision-makers.

Safeguarding procedural integrity with the human-in-the-loop (HITL) principle

Complementing explainability is the HITL principle, which requires human oversight at key decision-making junctures. The Canadian legal system operates on the assumption that decision-makers must exercise discretion in a manner that is reasoned, impartial and consistent with the law, whether they be judges, tribunal members or administrative officers. If AI is used to assist in adjudication, it must function within a framework that preserves the discretionary authority of human decision-makers.

The Federal Court’s Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence explicitly embrace the HITL principle, emphasizing that AI tools should be used as support mechanisms rather than as replacements for human judgment. Similarly, the Action Committee on Modernizing Court Operations reinforced in their guidelines on the “Use of Artificial Intelligence by Courts to Enhance Court Operations” that human oversight is critical in all stages of AI deployment, ensuring that results are validated and any necessary corrections are made. These guidelines reinforce the broader jurisprudential position that decision-making authority cannot be delegated to an unaccountable algorithm.

In practical terms, the HITL principle mandates three essential safeguards:

1. Active human oversight: AI-generated outputs must be reviewed and verified by a human decision-maker before they are acted upon. This aligns with the duty to provide reasons and ensures that AI does not autonomously determine legal rights and obligations.

2. Transparency in AI use: Courts and administrative bodies must disclose when and how AI is being used in decision-making processes. This is necessary not only for procedural fairness but also for maintaining public confidence in the justice system.

3. Corrective mechanisms: Where AI-generated outputs produce erroneous or biased results, human decision-makers must retain the ability to override or correct these determinations. AI should assist in legal reasoning, not replace it.

Humans in the loop bear the explainability burden

If the HITL principle and explainability can both be “read into” the existing procedural fairness framework, what does this mean on a practical level? Well, a key element of procedural fairness must be that the human overseeing an AI tool bears the burden of ensuring that the tool’s analysis is explainable and of conveying that explanation transparently. This means that the individual responsible for using AI-generated content or recommendations in their decisions must be able to understand and articulate the tool’s reasoning behind its outputs. It is not sufficient for a decision-maker to accept and adopt an AI-generated analysis or recommendation without comprehension or justification.

Judges, tribunal members, and administrative decision-makers cannot shift responsibility onto AI. They must possess the knowledge and expertise to critically assess the AI’s recommendations, question its assumptions, and identify potential errors or biases. This requirement aligns with the long-standing duty to provide reasoned decisions: if an AI tool’s outputs cannot be justified in a manner consistent with legal reasoning, they cannot be relied upon in making determinations that affect legal rights.

Moreover, ensuring explainability is not just an internal requirement; it is necessary for public confidence. A litigant who receives an adverse decision must be able to challenge it through judicial review or appeal. If the decision-maker cannot explain the rationale because it was generated by a process that uses an opaque AI tool, the right to procedural fairness is fundamentally undermined.

To meet this obligation, legal professionals must be equipped with the tools and training necessary to understand AI’s role in decision-making. Judicial education initiatives should include instruction on AI functionality, algorithmic biases and best practices for reviewing AI-assisted recommendations. Only then can the HITL principle function as a meaningful safeguard for procedural fairness.

Daniel J. Escott is a research fellow at the Artificial Intelligence Risk and Regulation Lab and the Access to Justice Centre for Excellence. He is currently pursuing an LL.M. at Osgoode Hall Law School, and holds a J.D. from the University of New Brunswick and a BBA from the Memorial University of Newfoundland.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.   