Feds release report on stakeholder concerns regarding copyright and generative AI policies

By Anosha Khan ·

Law360 Canada (February 12, 2025, 5:20 PM EST) -- A federal government report on how copyright should be protected from potential threats posed by generative AI (artificial intelligence) reveals sharply divided views among industry stakeholders. 

The What We Heard Report: Consultation on Copyright in the Age of Generative Artificial Intelligence was released by Innovation, Science and Economic Development Canada (ISED) on Feb. 11. The ISED consultation was conducted between October 12, 2023, and January 15, 2024 “to better understand the effects of generative artificial intelligence (AI) on copyright and the marketplace,” the department said.

The report sought feedback on copyright policy relating to three areas: “the use of copyright-protected works in the training of AI systems, notably for text and data mining (TDM) activities; authorship and ownership rights related to AI-generated content; and questions of liability, notably when AI-generated content infringes copyright.”

According to the report, an increasing number of stakeholders in the cultural industries were concerned about generative AI’s impact because they see current AI practices undermining copyright protection, including creators’ ability to consent and be credited and compensated for their work. As well, stakeholders from technology industries expressed concerns about the uncertainty of the application of the copyright framework to AI systems, possibly reducing opportunities for AI development in Canada.

Stakeholders had divided views on TDM. Many stakeholders “emphasized the critical importance that rights holders can consent to the use of their copyright-protected works in TDM activities and be remunerated for it, citing significant risks to creators' rights without such protections.”

A smaller group argued that TDM activities “simply involve machine-learning facts, statistical patterns or other data from works, rather than reproducing the works or consuming their expressive content,” saying that the use of copyright-protected works for this purpose was non-expressive and does not engage copyright law.

Further, creators and the cultural industries posited that unauthorized, unlicensed and uncompensated use of copyright-protected works in TDM activities and training of AI is a violation of their economic and moral rights under current copyright law.

These stakeholders considered licensing to be a viable option for TDM activities on copyright-protected works, noting that licensing markets for TDM uses are already being developed. They emphasized the “necessity of robust licensing frameworks to ensure fair compensation and enforcement mechanisms for creators.”

Some stakeholders also addressed the prospect of compulsory licensing for TDM purposes, which would involve the establishment of a royalty that users would need to pay to use copyright-protected works. The report said that many stakeholders expressed a preference for a voluntary or opt-in licensing model instead of an opt-out one.

“User groups, including the technology industries, were more likely to support clarifications to the law to facilitate TDM; the proposal for a copyright exception was the primary counterpoint to licensing,” it noted. “Some stakeholders expressed support for an infringement exception for the use of copyright-protected works in TDM and AI training.”

Stakeholders in sectors like education, libraries, archives and museums expressed that “contractual terms and technological protection measures may be used to impede users from benefiting from certain copyright exceptions.”

Certain stakeholders wanted to see clarifications stating that some uses, which they believe do not reproduce or consume the expressive content of works, do not infringe copyright. Alternatively, a few stakeholders expressed openness to more limited exceptions for TDM purposes, including facilitating TDM for research or public interest purposes without infringing copyright law.

“There was significant interest in developing transparency requirements (i.e. recordkeeping and disclosure requirements) surrounding the use of copyright-protected works in the training of AI,” according to the report, which stated that this was common among public interest stakeholders, legal practitioners and others.  

There was wide support for keeping human authorship central to copyright protection. Many submitted that “only AI-generated content with sufficient human contributions should be protected,” but it was also noted that “human authors must be allowed flexibility to use AI as a tool in their creation.”

Certain stakeholders also suggested that the Canadian Intellectual Property Office should adopt a disclosure requirement for AI-generated elements of works when managing the registration system, similar to the one adopted by the U.S. Copyright Office.

In addition, stakeholders addressed whether existing legal tests for infringement and existing remedies were sufficient to determine whether a piece of AI-generated content infringed copyright. They proposed different parties that should potentially be held liable if AI-generated content infringes copyright, including AI owners, developers, deployers and users. Some suggested joint liability for different actors along the AI value chain.

Barriers to determining infringement were also discussed, such as a lack of transparency about whether an AI system “accessed or copied a specific copyright-protected work prior to generating an infringing output.”

The report noted that no Canadian court decision has examined the prospect of infringing AI-generated content. As a result, existing legal tests and remedies for copyright infringement have not been applied by the courts in the AI context. Some stakeholders said courts were in the best position to determine liability, while others urged the government to provide more clarity.

Approximately half of the stakeholders who commented on this matter argued that current legal tests for copyright infringement were sufficient, with no unified calls for specific changes or submissions on specific gaps in the current legislation. Many stakeholders expressed interest in a legal requirement for transparency regarding AI-generated content, advocating for the labelling of mostly or fully AI-generated content to protect rights holders and consumers.

“Supporters of this idea submitted that labelling AI-generated content could allow consumers to know the source of the content they are viewing or reading, thus allowing them to choose whether to engage with AI-generated content.”

In addition, numerous participants in the consultation were concerned about the negative impacts of AI in relation to the labour market, with concerns about job security and unfair competition being prominent.

“Their responses revealed considerable concern over the potential economic effects of AI triggering job losses due to automation. They advocated prioritizing urgent policy initiatives to facilitate workforce protection and modernization.”

Participants were also concerned about potential misuses of AI that fell outside the copyright framework, including privacy violations, misinformation, propaganda, terrorism and a variety of criminal offences.

The consultation was conducted through an online questionnaire and virtual roundtable discussions. About 1,000 interested Canadians submitted responses to the questionnaire and 62 stakeholders participated in seven roundtables held during the consultation.

If you have information, story ideas or news tips for Law360 Canada on business-related law and litigation, including class actions, please contact Anosha Khan at anosha.khan@lexisnexis.ca or 905-415-5838.