Legal Pros Grapple With Best Use Of AI As Clients Divide

BigLaw attorneys and in-house counsel speaking Thursday at the annual Berkeley Law AI Institute described how they have grappled with using artificial intelligence tools in client representations, saying some clients have demanded the tools, others have prohibited them, and still others have taken seemingly contradictory positions.

The discussion came during a panel on practice innovations on the second day of the three-day event hosted by the University of California, Berkeley School of Law's executive education program and UC Berkeley's Center for Law and Technology.

Haynes and Boone LLP partner Eugene Goryunov told the audience of more than 100 legal professionals that law firm clients' reception of the tools has been inconsistent.

"Some clients say, 'We don't want AI used in our work, or in your firm at all.' Some clients say they want it. For us, there's a big tension in [figuring out] how we satisfy both camps."

To address clients' concerns, Goryunov's firm is testing possible solutions such as restricting how data is shared internally at the firm. Goryunov noted that there may be a good business reason for not commingling data, but often, clients who are opposed to large language models, or LLMs, don't realize the models can be configured so that client data isn't stored, used to train the LLM or surfaced in future outputs.

"It all comes back to data, data, data — data is king," he said.

Goryunov also pointed out that some courts require law firms to disclose whether they used LLMs in their legal briefs, and Illinois requires disclosure if "an AI [LLM] model is involved in any way."

John LaBarre, general counsel at Counsel AI Corp., maker of the legal-focused Harvey chatbot, recalled a client that didn't want LLMs used at all but then sent out a request for proposals asking firms to explain how they would use the tools if hired.

"There's confusion," he said, agreeing with Goryunov that many clients aren't aware of the LLMs' data and privacy restrictions.

LaBarre added that, anecdotally, Harvey's chatbot saves its law firm customers up to 10 hours of attorney work per week.

During the discussion, Morrison Foerster LLP partner Tessa Schwartz recalled once persuading a client that didn't want the firm using AI tools to change its policies. Chatbots can save experienced attorneys time and their clients money, she noted, and they can also be useful training tools for early-career attorneys, since experienced practitioners can often point out to junior associates where errors were made.

However, Schwartz acknowledged that junior associates aren't always able to catch errors in memos and briefs produced by LLMs.

"As a profession, I think we need to think about what we're going to do with our junior lawyers, and the answer isn't that they're going away," she said.

During another session Thursday with counsel from Google LLC, Cisco Systems Inc. and SoftBank Group International, SoftBank's chief compliance officer, Brendan Kelleher, and other SoftBank in-house counsel explained how the venture capital firm has created its own vetting process to assess the tools and the risk they may pose to a company.

In choosing whether to invest in a startup, SoftBank requires companies to make representations and warranties disclosing their uses of AI, along with any datasets used to train those tools and chatbots. Those disclosures are important given the uncertainties of ongoing high-stakes copyright litigation by writers and content creators against LLM makers such as Microsoft Corp.-backed OpenAI Inc. and Facebook parent Meta Platforms Inc., the SoftBank counsel said.

SoftBank also requires companies to disclose whether their AI tools have adequate intellectual property protections or infringe others' IP, as well as whether their products rely on any third-party tools that could pose similar risks.

Kelleher said the goal of SoftBank's reps-and-warranties demands and its due diligence process is to hold companies accountable for their use of AI, but he noted that SoftBank also requires portfolio companies to use AI to stay competitive.

"We made clear [to companies SoftBank invests in] you need to be using this [tool] and finding efficiencies," he said.

Throughout the discussion, Google general counsel Halimah DeLaine Prado offered tips on how in-house legal teams can help ensure that their company's products comply with the latest laws. To that end, she encouraged in-house counsel to ask their colleagues plenty of questions, and the right ones, particularly when it comes to AI, because "the notion is you have to be able to deal with those unintended consequences."

Prado stressed that in-house counsel must work directly with product developers, engineers and other corporate team members to understand the technologies being developed at an early stage.

She also encouraged in-house counsel to work with corporate boards to create a standardized list of questions to ask engineering and product design teams. And she recommended that attorneys educate their colleagues about regulators' concerns, the risks of AI and the changing regulatory landscape across jurisdictions, so that engineers and designers can build products that comply with the law from inception.

"The ground is sort of changing under our feet on all of these issues," she said. "It requires a multi-stakeholder approach."

Cisco general counsel Leah Waterland agreed that an in-house legal team should be treated as a partner, "as opposed to legal as an obstacle," and she noted that with the fast development of AI, many corporate teams are already bringing in-house lawyers into discussions where they hadn't before.

"I have found that lawyers are being invited to the table earlier than ever before," she said. "We're entering rooms we haven't been in before."

The panelists echoed comments made by venture capital firm partners on the first day of the conference.

During that discussion Wednesday, SoftBank Investment Advisers partner Karol Niewiadomski and Sapphire Ventures' Cathy Gao said they expect in-house counsel at AI companies to play a bigger role in their businesses due to regulatory uncertainties, and a professor who helped pioneer the technology warned that transparency of commercial AI businesses should be "top of mind."

The event will continue Friday morning with Federal Communications Commission Chair Jessica Rosenworcel, who is expected to give a talk on transparency issues involving AI.

--Editing by Brian Baresch.


For a reprint of this article, please contact reprints@law360.com.
