Dis-/misinformation, artificial intelligence

By Connie L. Braun and Juliana Saxberg

Law360 Canada (February 6, 2024, 10:33 AM EST) --
Among the gravest concerns about artificial intelligence (AI) is its potential to generate significant amounts of bad, false and incorrect information. Despite the likely value of these tools, the possibility of dis- and misinformation can make companies skittish about adopting AI and machine learning. In some ways, this skittishness is not surprising, given the high-profile news stories exposing challenges of all kinds that have been reported since the launch of ChatGPT in November 2022.

While it is true that AI algorithms can help sharpen decision-making, automate key business workflows and save time, they can do so only if organizations understand how the algorithms work and take steps to minimize AI risks.

Dis- and misinformation are readily available and spread by many, potentially making it exceedingly difficult to trust AI. Because the terms are not synonymous, some definitions are in order:

  • Disinformation: false information that is created and spread with the intent to mislead or deceive
  • Misinformation: false information that is spread, usually without intent to mislead or deceive

As you can see, the main difference between the two is intent. Today, there is a great deal of both: disinformation is an efficient manipulation technique, and once enough people believe it, misinformation begins to spread. Both dis- and misinformation significantly influence AI systems because of the data used to train large language models (LLMs). If the models are not kept up to date, disinformation quickly turns into misinformation and spreads widely.

Generally speaking, AI algorithms comprise a set of instructions that allow machines to learn, analyze data and arrive at conclusions. They have evolved to the point where they can perform tasks that typically require human intelligence: recognizing patterns, understanding natural language, solving problems and making decisions. Probably the most important aspect of any AI algorithm is the quality of the data used to train it.
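
To make the point about data quality concrete, consider a deliberately simplified sketch: a toy text classifier built with the open-source scikit-learn library (an illustrative example only, not any particular production system). Trained on accurately labelled stories, it flags a fabricated claim; trained on the very same stories with the labels poisoned, it confidently repeats the falsehood.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy training set: two accurate stories and two fabricated ones.
    texts = [
        "officials confirm the election results were audited and verified",
        "independent reviewers found no evidence of widespread fraud",
        "secret memo proves the vote totals were fabricated overnight",
        "insiders reveal ballots were switched by a hidden algorithm",
    ]
    accurate_labels = ["reliable", "reliable", "false", "false"]
    poisoned_labels = ["false", "false", "reliable", "reliable"]  # labels deliberately inverted

    # Identical algorithm, identical text; only the quality of the labels differs.
    clean_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, accurate_labels)
    poisoned_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, poisoned_labels)

    claim = ["leaked memo proves the vote totals were fabricated"]
    print(clean_model.predict(claim))     # ['false']    -- flags the fabricated claim
    print(poisoned_model.predict(claim))  # ['reliable'] -- repeats what its data taught it

Real LLMs are vastly more complex, but the dependence is the same: a model can only be as trustworthy as the information it learns from.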

Writing in The Washington Post on Dec. 17, 2023, in “The rise of AI fake news is creating a ‘misinformation superspreader,’” Pranshu Verma says:

Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

AI-generated news is responsible for a great deal of polarizing and misleading content, making it difficult to know what is true and, along with it, what information can be trusted. The Global Risks Report 2024, published by the World Economic Forum, identifies false and misleading information boosted by AI as the most severe short-term risk. The threats to democracy and society are advancing as rapidly as the technology itself, creating new problems and enlarging existing ones. Recent news stories describe libraries becoming battlegrounds in which disinformation fast becomes misinformation as individuals demand the removal of large numbers of books. These individuals are told, and simply accept without reading any of the books, that they are bad books because the content does not fit someone’s personal agenda.

The continuing proliferation of disinformation and misinformation hinges on the growing affordability and accessibility of generative AI. Because the cost of entry is so low, disinformation campaigns can be, and are being, run by almost anyone. Quotidian examples include manipulated online discussions, deepfake photographs and videos, and censorship of speech, all of which are rampant. Missing or lagging AI governance, regulation and enforcement around the globe simply make the whole situation worse.

At this time, there are many rogue players who would exploit generative AI in ways that will not make the world a better place; the goal of these bad actors seems to be to wreak havoc for all of us. To turn this around and improve generative AI, governments and tech companies are going to have to get involved in a big way. To name a few ideas, they will need to feed models more and better data so that LLMs improve in accuracy, enrich the data that already exists, regularly refine and upgrade algorithms by adding parameters that enhance performance, and continually re-engineer features and refine architectures.

Overarching everything, continuing investment in the “human in the loop” to augment and improve the intelligence of AI will be critical. When humans and AI augment and support each other, much can be achieved: data-driven AI and LLMs become safer, more efficient and less biased, and better able to address challenges they would not otherwise recognize as requiring attention.

Building trust in AI will be paramount and can be achieved only if ways are found to manage the systems so that they provide dependable, safe and unbiased information. Governments and big tech companies will need to minimize the amount of disinformation that leads to misinformation while continuing to allow AI systems to recognize patterns, understand natural language, solve problems and make decisions. Without this structure in place, what happens will likely make the world a very scary place to be.

Connie L. Braun is a product adoption and learning consultant with LexisNexis Canada. Juliana Saxberg is a practice area consultant, corporate & public markets with LexisNexis Canada.
 
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

