Beyond ethics and governance: When AI should (and shouldn’t) be used in law

By Daniel J. Escott

Law360 Canada (February 10, 2025, 12:17 PM EST) --
Artificial intelligence has entrenched itself in the legal profession, with firms and courts adopting AI-driven tools for legal research, document automation, and even decision-making support. In response, the legal industry has rushed to develop ethical frameworks and governance models to ensure responsible AI use. Yet, for all the effort put into these guidelines, they leave an essential question unanswered: When should lawyers actually use AI? And, just as importantly, when shouldn’t they?

The legal profession has treated AI as something to be governed rather than something to be strategically deployed. As a result, lawyers and legal institutions often find themselves in a proverbial no man’s land: technically allowed to use AI under ethical frameworks and new governance models, but uncertain whether doing so is truly beneficial, practical, or even responsible. If AI in the practice of law is to be more than just a compliance checkbox, we must move beyond ethical frameworks and governance models and begin making decisions about its actual value in practice.

Frameworks mitigate risks, but they do not enable use

Ethical AI frameworks are designed to mitigate risk but do little to guide legal professionals’ practical use of AI. These frameworks tend to focus on:

  • Bias mitigation: ensuring AI does not reinforce systemic discrimination;
  • Transparency: making AI-assisted decision-making explainable;
  • Accountability: determining who is responsible when AI gets it wrong; and
  • Confidentiality: ensuring client data is protected.

These are all valid concerns, but they are reactive rather than strategic. Complicating matters further, many of them cannot be addressed by lawyers at the “use” stage of the AI lifecycle at all; they must be addressed by developers, not users. These frameworks tell us what to avoid, but not what to pursue. Lawyers do not need more checklists of what not to do with AI. They need clarity on when AI is genuinely valuable to their practice, and when it is an unnecessary distraction or even a liability.

When AI is actually worth it

The legal profession has adopted AI with a mix of excitement and skepticism. While some lawyers see AI as an efficiency revolution, others remain wary, citing concerns about accuracy, over-reliance, and ethical implications. However, several AI tools have proven themselves useful in specific legal tasks, particularly those that involve:

1. High-volume, low-complexity tasks: AI excels at automating repetitive processes such as contract review, due diligence, and legal research. These are areas where AI-driven tools, like predictive coding in e-discovery, have already demonstrated that they can outperform human lawyers in both speed and consistency.

2. Pattern recognition and analytics: AI can analyze vast amounts of case law, regulatory filings, and market trends to provide insights that would take human researchers days or weeks. This is particularly useful for litigation strategy, where AI can identify patterns in judicial decisions that may inform case planning.

3. Client intake and administrative efficiency: AI-driven chatbots and document automation tools can streamline client interactions, freeing up lawyers to focus on substantive legal work rather than administrative tasks.

4. Legal research assistance: While AI should not replace human legal reasoning, and the accuracy and reliability of more “mainstream” tools remain dubious, several commercially available tools excel as research assistants, helping lawyers find and digest relevant cases, statutes, and legal precedents faster.

In these areas, AI is not merely an option but a competitive necessity. Lawyers who fail to adopt AI tools for these purposes risk inefficiency, higher costs, and ultimately, diminished client service. As these developments accelerate, it becomes increasingly likely that regulators and liability insurers will adopt policies that distinguish between lawyers and law firms that use AI and those that do not.

When AI isn’t worth it, even if allowed

While ethical frameworks might permit AI’s use in most areas of legal practice, there are several instances where its deployment is questionable at best:

1. Client-sensitive legal reasoning: Even advanced models like OpenAI’s o3 cannot yet replace the nuanced decision-making that comes from human legal reasoning. Areas such as settlement negotiations, strategic litigation decisions, and advocacy require human judgment that AI cannot replicate at this stage.

2. Factually complex or evolving legal issues: AI largely relies on existing data to make predictions and recommendations. It is often unreliable in emerging areas of law, such as cryptocurrency regulation, new privacy laws, or Indigenous legal rights, because it lacks a sufficient historical dataset.

3. High-risk decision-making: Courts, regulators, and oversight bodies are increasingly scrutinizing AI-assisted decisions. If a firm is involved in high-stakes litigation, regulatory compliance, or other areas where errors have significant consequences, relying on AI-generated legal conclusions is a liability.

4. Ethical or reputational risk: Even when AI is technically permitted under ethical guidelines, its use may be perceived as inappropriate or untrustworthy. Judges, clients, and opposing counsel may question the legitimacy of AI-driven contracts or legal arguments, leading to reputational harm or judicial skepticism.

A call for AI strategy, not just AI governance

If AI is to be more than a trendy buzzword in legal practice, lawyers must move beyond ethical frameworks and governance models to focus on strategic decision-making. Instead of asking, “Is AI allowed?” lawyers should be asking:

  • Does AI provide a meaningful advantage in this context?
  • Does AI improve efficiency without sacrificing quality?
  • Are the risks of AI manageable, or do they outweigh the benefits?

A governance framework should not be an excuse for blind adoption. Just because AI is permitted does not mean it is always the right tool for the job. Lawyers must assess AI’s value proposition on a use-case-by-use-case basis, balancing efficiency against risk and considering the broader impact on legal practice and justice.

Lawyers as decision-makers, not just AI users

The legal profession has spent too much time debating whether AI should be permitted and not enough time determining when it should actually be used. Lawyers are decision-makers, not just AI users, and they must approach AI with the same level of strategic thinking that they apply to their legal work.

The future of AI in law will not be shaped by ethical frameworks alone but by the choices lawyers make about when and how to use these tools. Those who integrate AI thoughtfully, deploying it where it enhances legal work and rejecting it where it adds risk or redundancy, will lead the profession into a future where AI is not just governed but strategically leveraged for the benefit of clients, the profession, and the justice system itself.

Daniel J. Escott is a research fellow at the Artificial Intelligence Risk and Regulation Lab and the Access to Justice Centre for Excellence. He is currently pursuing an LL.M. at Osgoode Hall Law School, and holds a J.D. from the University of New Brunswick and a BBA from Memorial University of Newfoundland.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.   