Governance, artificial intelligence | Connie L. Braun and Juliana Saxberg

Law360 Canada (December 18, 2023, 1:43 PM EST) --
With the widespread adoption of ChatGPT and other generative artificial intelligence (AI) products over the past year or so, humanity is being exposed to risk like never before. Every day brings remarkable developments, jaw-dropping news and new opportunities that expand what AI can do. Concerns about possible rogue behaviours leave many people mistrusting AI, fearing that humans will not be able to control the technology. Even so, there is enormous potential to achieve efficiencies with the help of AI in almost every area of business, industry or profession.

Before the chaos and commotion get away from us, though, we need our governments, supported by academia and the business and technology communities, to establish a posture of governance that allows us to safely take advantage of and manage all that AI offers. This kind of governance focuses on how data is managed as well as the processes used for that management. By developing a strategic approach to governance for AI, we will all be able to breathe more easily and embrace the innovative technology.

What might such governance look like?

1. Legislation such as the proposed Artificial Intelligence and Data Act (AIDA), part of the Digital Charter Implementation Act, will set the baseline for responsible design, development and deployment of AI systems in Canada. Accompanying this anticipated legislation is the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, released in September 2023. Beyond our borders, Canada can embrace the G7 Data Protection and Privacy Authorities’ Statement on Generative AI and engage fully in the Global Partnership on Artificial Intelligence.

2. Developing stringent policies to safeguard data and to administer penalties when AI is misused. As an asset, data is incredibly valuable and needs to be consistently trustworthy. Finding ways to secure and protect it should be the first priority. Then, as we integrate AI systems, vigorous data protection measures and compliance protocols must be implemented and continually improved. Risk assessments should occur regularly, with unique and creative mitigations in place to prevent potential breaches and to ensure data integrity.

3. Accountability as business and technology communities streamline processes and operations, carefully assessing existing processes, identifying areas for AI application and meticulously planning seamless, efficient transitions. Add to this abundant transparency: publishing enough information to enable consumers to make informed decisions, and providing the level of detail that allows experts to evaluate whether risks have been adequately considered.

4. To uphold governance, adherence to ethical and legal standards is paramount. Maintaining the highest standards of governance means that AI algorithms must be transparent, AI models must be explainable, and AI strategies for language models must curtail bias. Human oversight of training models and algorithms, along with monitoring after deployment, ensures the implementation of updates as soon as any risks are identified.

5. The adoption of AI calls for a cultural shift in which innovation and continuous learning are adopted and incorporated. By empowering employees to adapt and upskill, a culture of curiosity is fostered, and more opportunities for fairness and equity to be assessed and addressed become available. This includes engaging far beyond typical Anglo-centric values and prejudices. Cultivating preparation thoroughly in this way results in readiness for AI-led transformations while monitoring for inappropriate or malicious use of the system.

6. Systems operate as designed and intended, with validity and robustness. Various testing methods are used across a spectrum of tasks and contexts to measure performance and resilience. While cyberattacks are to be expected, systems are kept secure and maintained, as demonstrated by adversarial testing that identifies vulnerabilities. Responses to the range of tasks or situations systems are likely to encounter are understood and can be explained. Cybersecurity assessments are performed regularly, and appropriate measures are taken to mitigate risks. A model’s performance should be measured often against recognized standards, with necessary controls and changes implemented to meet or exceed these imperatives.

Canada is among the first countries in the world to propose a law to regulate AI. The Artificial Intelligence and Data Act aims to provide a balanced approach to governance that supports responsible innovation across the business landscape, no matter what size or market share. To accomplish this, clear guidelines will help businesses to innovate and develop achievable AI systems that are both safe and secure. The development and evolution of the regulatory framework will include consultation with AI industry leaders, scholars, business owners and other experts, along with any members of the public wishing to contribute. Overseeing all of this activity will be a new AI and data commissioner, empowered to monitor compliance and intervene when necessary to ensure that AI systems remain safe, equitable, unbiased, fair and impartial.

The time to act is now, but we must do so responsibly and in ways that allow us to adopt AI strategically whilst mitigating risks. We can harness the full potential of AI while preserving what makes a culture, an organization, or a business unique and successful. Let us seize the moment and look forward to shaping a future where AI complements, augments, promotes and elevates our capabilities.

Connie L. Braun is a product adoption and learning consultant with LexisNexis Canada. Juliana Saxberg is a practice area consultant, corporate & public markets with LexisNexis Canada.
 
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be, and should not be taken as, legal advice.