Here's What Attys Need To Remember When Using AI

Several legal tech companies have recently launched generative artificial intelligence-powered tools, making it possible for attorneys to use the technology in their everyday practice. But legal experts caution that these tools come with many legal issues to navigate.

Those issues include the unauthorized practice of law, technology competence, client confidentiality and transparency, experts say.

Danielle Benecke, founder and global head of Baker McKenzie's machine learning practice, which launched in 2021, told Law360 Pulse that while generative AI, which is built on large language models, was advancing, the legal industry's conversation around using these tools hadn't matured.

"If the legal industry is going to leverage large language models in a way that is not going to raise client confidentiality, privilege, bar rules, data governance, professional responsibility and other risks, then what does that look like?" Benecke said. "There's all sorts of ways you can go about it … but there really hasn't been any good practical industry conversation yet on what that looks like."

Since AI research company OpenAI launched its chatbot ChatGPT in November, interest in generative AI has exploded across industries, including the legal sector, leading to a multibillion-dollar investment in the company from Microsoft in January.

ChatGPT is a generative AI tool that can quickly write documents and answer questions like a human. The technology can write short stories, pen poems, give advice and brainstorm ideas for blog post headlines.

At least three contract management companies, Evisort, Lexion and Ironclad, have released new contract drafting tools using generative AI. Both Ironclad and Lexion used OpenAI's large language model, called Generative Pre-trained Transformer 3, which powers ChatGPT, to build their tools.

Attorneys have told Law360 Pulse that they are using generative AI in their practices, and some lawyers have even built their own tools using the technology. Earlier this month, Fisher Phillips announced that it had deployed legal research company Casetext's new AI legal assistant, CoCounsel, to its more than 500 attorneys.

Before attorneys use AI tools in their practices, they should make sure they understand how the technology works in order to meet their ethical duty of technology competence, according to legal experts.

Avi Gesser, co-chair of Debevoise & Plimpton LLP's data strategy and security group, said that being tech competent when using AI means "understanding what these tools are good at, what they know, what they're trained on, how up-to-date they are and whether you could rely on them."

For example, ChatGPT was trained on a large dataset that only goes up to 2021, and it sometimes cites case law that doesn't exist.

Benecke added that attorneys needed to understand how AI tools process and store data to ensure that they were not breaking client confidentiality.

"Unless certain governance steps are taken, you can't use something like ChatGPT in client work because it by definition involves breaking client confidentiality and privilege obligations," she said.

According to a comment to the American Bar Association's Model Rule 1.1, maintaining competence as an attorney requires a lawyer to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology."

Cary Coglianese, law and political science professor at the University of Pennsylvania's Carey Law School, said that at some point in the next 20 years, when AI tools have proven to be reliable, attorneys will likely be required to use them.

Some courts are already treating the use of AI as expected. In 2018, a Canadian judge reduced attorney fees in a personal injury case, determining that AI could have been used to cut down time spent on legal research.

"If there is a valid AI tool that's able to perform a function more reliably, and in a reliably valid way that's superior to humans, then it would be a denial of due process for lawyers and judges not to use those tools," Coglianese said.

On the other hand, attorneys using AI tools could face potential claims of unauthorized practice of law.

Many states prohibit the unauthorized practice of law under rules modeled on ABA Model Rule 5.5, which addresses the issue.

In 2021, the Florida Supreme Court ruled that TIKD Services LLC, a startup using AI to help drivers fight traffic tickets, was engaged in the unlicensed practice of law and blocked the company from restarting operations in the state.

More recently, the self-described "world's first robot lawyer," DoNotPay Inc., was hit with a proposed class action in California state court for allegedly practicing law without a license. The company has also been accused of fraud by a paralegal in New York state court.

Gary Marchant, law professor at Arizona State University's Sandra Day O'Connor College of Law, said the ABA needed to update its model rules to address the issues attorneys are facing when trying to incorporate AI into their practices, and to more clearly define the practice of law.

Now that AI is capable of performing many legal tasks, what counts as the practice of law is no longer clear, he said.

Without more legal guidance on the use of AI, attorneys are operating in a gray zone where they don't know what is expected of them, Marchant said.

"It's just not a good situation to have this uncertainty hanging over them about what they are expected to do," he said.

The ABA's ethics committee has created a working group that is considering Model Rule 5.5 changes to address issues that have been raised with the current rule.

Baker McKenzie's Benecke added, however, that international law firms cannot rely solely on bar rules to navigate AI legal issues.

Firms operating globally also need to consider the data governance and privacy laws of each country in which they operate, as well as their clients' preferences, Benecke said.

"There has to be participation and open conversation between firms, clients, legal technology vendors and alternative legal service providers because this type of problem cannot be solved on a provider-by-provider basis," she said. "It's an industrywide opportunity that requires collective problem-solving."

--Additional reporting by Carolina Bolado and Steven Lerner. Editing by Karin Roberts.

Correction: A previous version of this article misstated the year Baker McKenzie's machine learning practice was launched. The error has been corrected. 

