Minimizing bias in GenAI interactions | Connie L. Braun

By Connie L. Braun

Law360 Canada (September 12, 2024, 10:51 AM EDT) --
During our lifetimes, every one of us has developed propensities and biases toward life, people, institutions, organizations and pretty much everything else. These biases shape how we interact with technology and can be particularly apparent in our interactions with generative artificial intelligence (GenAI). Since the humans who make AI are biased, it is natural to conclude that AI is also likely to be biased.

According to Wikipedia, algorithmic bias “describes systematic and repeatable errors in a computer system that create ‘unfair’ outcomes, such as ‘privileging’ one category over another in ways different from the intended function of the algorithm.” This kind of bias can develop because the data used to build AI models may contain perspectives, proclivities or points of view that are then exhibited and intensified in the work product. The creators of these systems must monitor algorithmic bias and continually assess the risks and the potential impact on the output provided to users.

People, on the other hand, are likely to demonstrate confirmation bias, something that individual users can learn to recognize and work to mitigate. Words matter, a lot. Confirmation bias reflects the human tendency to seek out and prefer information that confirms what we already believe, despite contradictory evidence. Not surprisingly, and because we are human, this propensity can feed into how we communicate with and query AI systems. We might ask a question that contains biased information without realizing that is what we have done. Think about reviewing or comparing documents and asking questions framed by our personal beliefs and perspectives. To use GenAI systems effectively, it is crucial to constrain bias by formulating precise and focused queries.

In the legal sphere, GenAI systems are being built with a focus on principles that help lawyers to work securely, more efficiently and effectively, and with greater accuracy. Legal research can be improved by employing a more conversational and interactive approach, reducing the attention paid to low-value tasks and uncovering insights that foster productivity. To be truly effective in the legal arena, though, GenAI systems need to be grounded in legal content including court decisions, legislation, secondary materials and other associated resources. Further, results need to be supported by verifiable, credible authority. Only then can security, effectiveness and accuracy be achieved.

The human aspect is always part of the equation, and there must be consideration of how to use a GenAI system in a way that minimizes the opportunity for bias to creep in. Formulating precise and focused queries that retrieve good results enhances the user experience by maintaining clarity, precision, and context.

Here are several ways to minimize bias when creating prompts in GenAI systems.

1. Ask open-ended questions and let the AI respond without a preferred point of view being stated.

Example: My client is a wind energy company that is facing several nuisance lawsuits over the recent installation of a large-scale wind farm near a residential area. The lawsuits cite turbine noise, eyesore flicker effects, ice throws and part maintenance as the main issues. What have Saskatchewan courts determined about whether wind turbines or wind farms near residential areas constitute a nuisance?

2. Be clear and concise when creating first drafts of memos, email messages, arguments, and more.

Example: My clients are thinking about importing a protected species from Indonesia into Canada. Draft an email message that outlines what happens when an individual attempts to illegally export a protected species from Indonesia and import that protected species into Canada. Include information on possible penalties.

Then, realizing that you may want to be even more precise or to include other details, enter a follow-up instruction, such as: “Rewrite the email, adding two sentences explaining to the client that lack of awareness of Canadian laws and regulations does not excuse a person from legal liability.”
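For readers curious about what happens behind the screen, this draft-then-refine exchange can be pictured as a chat-style message list, the structure most conversational GenAI interfaces build on. The sketch below is illustrative only: the `conversation` variable and the placeholder assistant reply are hypothetical, not any particular product's API.

```python
# Illustrative sketch: the article's draft-then-refine example represented
# as a chat-style message list. The follow-up instruction travels with the
# whole history, so the system refines its earlier draft rather than
# starting over.

conversation = [
    {"role": "user", "content": (
        "Draft an email message that outlines what happens when an "
        "individual attempts to illegally export a protected species "
        "from Indonesia and import it into Canada. Include information "
        "on possible penalties.")},
    # Placeholder for the system's first draft (hypothetical).
    {"role": "assistant", "content": "<first draft returned by the system>"},
    {"role": "user", "content": (
        "Rewrite the email, adding two sentences explaining to the client "
        "that lack of awareness of Canadian laws and regulations does not "
        "excuse a person from legal liability.")},
]

follow_up = conversation[-1]["content"]
print(follow_up.startswith("Rewrite the email"))  # prints True
```

Because the refinement request references "the email" from the earlier turn, the context of the conversation, not a brand-new prompt, carries the precision forward.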

3. Compare documents objectively by asking the GenAI system to provide an objective response that analyzes the key differences or identifies the pros and cons.

Example: Compare (or summarize) the information in both documents and outline the differences in bullet points.

4. Review and rewrite your prompts as necessary.

Early in your GenAI experience, you may want to work with a colleague or two to explore the best ways to write prompts. Words matter mightily when writing prompts.

5. Using integrated tools, validate and verify the results to ensure that the content produced by GenAI is accurate and unbiased.

So, while humans carry tendencies and biases about almost everything, a conscious effort to minimize bias, especially when using GenAI systems, will yield more valuable and salient information. The best thing any lawyer can do when using GenAI is to avoid writing prompts that contain leading language, conjecture, hypothesis or presumption. Impartial, objective writing will retrieve results that are untainted by human bias and provide lawyers with truly insightful, high-quality and unbiased analysis.

Connie L. Braun is a product adoption and learning consultant with LexisNexis Canada.  
 
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is neither intended to be nor should be taken as legal advice.

