Navigating use of artificial intelligence in courtroom

By Jason Cassidy

Law360 Canada (November 29, 2023, 12:05 PM EST) --
Generative artificial intelligence (AI) promises to unleash unprecedented efficiency gains and workplace productivity for the legal sector, yet there is a high degree of uncertainty about whether the industry is ready for this digital transformation.

Lawyers, judges and paralegals are by their very nature risk-averse. This is understandable, given the importance of their jobs and how much is at stake. One small misstep could lead to life imprisonment, family dissolution, precarious child custody arrangements, business closures, or sweeping political reforms.

And missteps have happened when lawyers have tried to use AI in the courtroom. Earlier this year, a U.S. judge tossed a lawsuit and issued a $5,000 fine to lawyers who had submitted fictitious research, created by ChatGPT, a generative AI platform, as part of an aviation injury claim. ChatGPT had made up the details of six cases referenced in the firm’s legal brief, and the lawyers had not done their due diligence in fact-checking the documents before submission.

But this fear of what “could” happen can also stand in the way of much-needed progress for an industry that has been drowning in paperwork for decades.

Take, for instance, the discovery process, whereby lawyers have to sift through thousands, if not millions, of documents to determine which ones could be relevant to a case. Law firms can integrate AI and machine learning to expedite this process. Once trained, an AI model can assess the significance of documents and flag those that may be subject to special considerations, such as privilege. Given the exponential growth of the document universe, AI-assisted document analysis could not come at a better time.
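
To make the idea concrete: at its simplest, such a tool is a text classifier trained on documents that lawyers have already labelled as relevant or irrelevant. The Python sketch below uses invented examples and a deliberately simple keyword-weighting model; it illustrates the workflow, not any vendor’s actual product.

    # A minimal document-relevance classifier for discovery triage.
    # Hypothetical labelled examples; a real system would train on
    # thousands of lawyer-reviewed documents.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    labelled_docs = [
        "Email discussing the disputed supply contract terms",
        "Cafeteria menu for the week of March 3",
        "Memo on indemnification clauses in the 2019 agreement",
        "Company picnic photo permission form",
    ]
    labels = [1, 0, 1, 0]  # 1 = relevant, 0 = irrelevant

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(labelled_docs, labels)

    # Score unreviewed documents so reviewers see likely-relevant ones first.
    new_docs = ["Draft amendment to the supply contract"]
    for doc, score in zip(new_docs, model.predict_proba(new_docs)[:, 1]):
        print(f"{score:.2f}  {doc}")

The point is the division of labour: the model ranks the pile, but lawyers still make the relevance calls.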

AI can also help with legal research, one of the most time-consuming and costly tasks in the legal profession. It requires finding, analyzing and synthesizing relevant information from various sources, including statutes, case law, regulations, journals and news articles. Firms can train AI to pull relevant information and case law, freeing legal professionals for critical thinking. For instance, case law focused on foundational principles may be important, but a more recent case with similar contextual considerations may be more persuasive to a jury. Making that call requires professional expertise.
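
Under the hood, much of this assistance is retrieval: ranking a body of sources by how closely they match a query. A minimal sketch follows, using invented case summaries and simple word-overlap similarity rather than the large language models commercial research tools rely on.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical one-line case summaries standing in for a research database.
    cases = [
        "Smith v. Jones (2021): duty of care owed by commercial landlords",
        "R. v. Doe (2015): admissibility of expert testimony at trial",
        "Acme v. Widget Co. (2019): breach of contract and liquidated damages",
    ]
    query = "landlord negligence and duty of care"

    vectorizer = TfidfVectorizer()
    case_matrix = vectorizer.fit_transform(cases)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, case_matrix).ravel()

    # Surface the closest matches first; which cases actually persuade
    # a court remains a judgment call for the lawyer.
    for score, case in sorted(zip(scores, cases), reverse=True):
        print(f"{score:.2f}  {case}")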

It could also open the door to more equitable legal representation. Legal fees are expensive and often out of reach for lower-income earners, who must rely on legal aid defenders who may not have the time or resources to develop a sound legal strategy. San Francisco-based startup DoNotPay has been pushing for AI-trained “robot lawyers” to offer real-time legal advice to defendants in courtrooms. While this move received significant pushback from state bars and district attorneys’ offices, there is an appetite for more accessible legal advice, and AI-based companies are looking for opportunities to enter the space.

Lawyers could also benefit from AI’s ability to predict the outcomes of certain cases. By scouring the details of past cases and the decisions judges reached in them, AI can provide guidance to help lawyers determine where to invest time and resources. AI tools are also available to judges, offering advice on bail and sentencing decisions. For example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) can be used by criminal judges in many U.S. states to assess the likelihood of a defendant reoffending, as well as offer guidance on sentencing or parole release.
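
COMPAS itself is proprietary, but the general shape of such predictive tools can be sketched: a statistical model fit to features of historical cases and their outcomes. The example below uses invented features and data and is in no way COMPAS’s actual methodology; as the next paragraphs discuss, the approach itself raises serious fairness questions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical historical records:
    # [prior_offences, age_at_arrest, days_since_last_offence]
    X = np.array([
        [0, 45, 4000],
        [3, 22, 180],
        [1, 31, 900],
        [5, 27, 60],
        [0, 52, 6000],
        [2, 19, 300],
    ])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = reoffended within two years

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X, y)

    # The "risk score" only reflects whatever patterns, including biases,
    # exist in the historical data the model was trained on.
    new_defendant = np.array([[1, 24, 400]])
    print(f"Estimated reoffence risk: {model.predict_proba(new_defendant)[0, 1]:.2f}")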

While there are clear advantages to these AI-driven insights, there are important ethical questions that must be considered. The legal system has historically been prone to racial bias and inequality. Since AI systems are trained on historical data, they may inherit that bias, perpetuating injustice by disproportionately flagging racialized groups as more likely to “reoffend.”

ProPublica studied more than 7,000 people who had been arrested in Florida, analyzing an algorithm’s predictions of who was likely to commit violent crimes. Only 20 per cent of the people predicted to commit violent crimes actually went on to do so, and the algorithm disproportionately flagged Black defendants as likely reoffenders, while white defendants were consistently rated “low-risk.”
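
ProPublica’s central metric was the false positive rate by group: of the people who did not reoffend, how many had nonetheless been flagged as high-risk? Auditing any risk tool for that kind of disparity is simple arithmetic, sketched here with made-up numbers rather than ProPublica’s data.

    # For each group: of those who did NOT reoffend, what fraction
    # were flagged high-risk? (Invented numbers for illustration.)
    records = {  # group -> list of (flagged_high_risk, actually_reoffended)
        "group_a": [(True, False), (True, True), (True, False), (False, False)],
        "group_b": [(False, False), (True, True), (False, False), (False, False)],
    }

    for group, outcomes in records.items():
        flags_for_non_reoffenders = [flagged for flagged, reoffended in outcomes
                                     if not reoffended]
        fpr = sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)
        print(f"{group}: false positive rate = {fpr:.0%}")

A persistent gap in this rate between racial groups is precisely the kind of disparity ProPublica reported.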

There are also issues of privacy and accountability when it comes to using AI in the legal profession. Lawyers have an ethical obligation to maintain client confidentiality.

Since AI systems rely on vast amounts of data, including highly confidential information, lawyers have to weigh the additional risks of sharing and storing data with third-party AI providers or cloud services, and whether doing so breaches client confidentiality.

Take ChatGPT, for example. Chat histories on the platform are accessible to and viewable by employees of OpenAI, the company behind it. OpenAI may also share personal information with third-party vendors, and any of these systems may be susceptible to data breaches.

While it is essential to address ethical and regulatory considerations to ensure responsible use of AI in the courtroom, the benefits of its use are undeniable. The profession will always require human judgment and thoughtful analysis, and striking the proper balance between the capabilities of AI and the expertise of legal professionals will be crucial to a successful industry transformation.

Jason Cassidy is founder and CEO of Shinydocs, a data management company based in Kitchener, Ont. He is a recognized leader in preparing digital content for privacy, security, artificial intelligence and information governance. Jason currently sits on the board of directors of AIIM International and is a speaker and thought leader at information events and conferences worldwide.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, LexisNexis Canada, Law360 Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

Interested in writing for us? To learn more about how you can add your voice to Law360 Canada, contact Analysis Editor Richard Skinulis at Richard.Skinulis@lexisnexis.ca or call 437-828-6772.