“AI systems have the potential to have a transformative impact on our economy and our daily lives,” said Sam Andry, managing director at the Dais, a public policy think tank at Toronto Metropolitan University.
“But they also pose significant risks, including systemic forms of discrimination, psychological harm and malicious use. To reach AI’s full potential and increase adoption beyond current levels of only about four per cent of businesses, we need a responsible governance framework.”
Daniel Konikoff, the interim director of the CCLA’s privacy, surveillance and technology program, noted that the CCLA’s written submissions featured 23 recommendations covering various parts of the bill. He highlighted three in particular:
- Bolstering the bill’s rights protections, treating privacy as an issue that applies both to the privacy provisions of the bill and to the Artificial Intelligence and Data Act (AIDA), and emphasizing the importance of the right to equality.
- Strengthening the regulation of sensitive information, including biometrics, an area the bill was said to leave unaddressed.
- Narrowing consent carve-outs based on businesses’ legitimate interests, which were said to create broad-sweeping exemptions allowing businesses to conduct their business without acquiring consumer consent.
Tim McSorley, national co-ordinator of the International Civil Liberties Monitoring Group, a coalition of 45 Canadian civil society groups, said that both the Consumer Privacy Protection Act and AIDA contain “unacceptably broad exceptions to allow for the use of private personal information and of AI tools for national security purposes.”
“Canada’s national security and anti-terrorism activities are already shrouded in secrecy,” he said. “And it is imperative that clear rules are put in place to protect fundamental rights and to defend against racial, political and religious profiling and discrimination.
“Unfortunately, much of Bill C-27 fails to do so. We are particularly concerned with the artificial intelligence and data provisions of AIDA, which exempt AI tools developed by the private sector for exclusive use by federal national security agencies from any regulation whatsoever.”
Such agencies include the Department of National Defence, the Canadian Security Intelligence Service and the Communications Security Establishment. He said the Act was developed without adequate public consultation.
Sharon Polsky of the Privacy and Access Council of Canada said that rather than treating Canadians as individuals with a fundamental right to privacy, the bill frames them as consumers, leaving it to businesses to decide whether the information they collect will have an impact on Canadians.
She went on to say that the bill retains “the same vague language that we’ve had in PIPEDA for the last 20 years,” and that it fails to keep pace with the laws of the U.S., the EU and the U.K. She said she hoped legislators would take the concerns seriously and “give sober thought to the ramifications and the unintended consequences that will happen,” not only to their electorate but also to their children, as individuals continue to be subjected to facial recognition and AI of all sorts that may cause harm in the name of economic gain.
Andry noted that AI harms can also occur at broader group and community levels, including manipulative and exploitative collective harms. He gave the examples of election interference, harm to the environment and collective harms to children, all of which fall outside the definitions given in the bill.
Further, he said the regulatory model does not create sufficient independence from the innovation, science and economic development minister, and that the proposed commissioner should be made independent of the minister through parliamentary appointment, as the minister “has both competing roles of championing the economic benefits of AI while regulating and enforcing its risks.”
As well, the current bill applies only to the private sector; while extending it to the public sector has been proposed, the bill as drafted could create a double standard. He said AI regulation needs to be developed holistically and that “Canada’s investments in developing AI systems, which it's done quite successfully, have not been yet matched by a comparable effort to regulate its risks.”
All speakers noted the lack of consultation with the public and civil liberties groups on the matter.
“We must see AI developed responsibly in Canada in a way that ensures that human rights and civil liberties are protected,” McSorley concluded.
If you have information, story ideas or news tips for Law360 Canada on business-related law and litigation, including class actions, please contact Anosha Khan at anosha.khan@lexisnexis.ca or 905-415-5838.