The widespread availability of free and low-cost AI chatbots capable of generating sophisticated, human-like responses drawn from vast troves of open-web content has stirred debate and deliberation within French-language law faculties over a host of issues, ranging from academic integrity and cheating to cognitive bias, privacy and security concerns, intellectual property rights, and the benefits and risks of implementing AI tools in teaching and learning.
Some law faculties, while not closing the door on internal policies of their own, have opted to wait for their institution to forge an establishment-wide AI policy. Others, such as the Université du Québec à Montréal, see no need for a university-wide policy or formal departmental guidelines governing the use of AI, beyond the stipulation that students respect academic integrity regulations.
David Robitaille, University of Ottawa
For now, all French-language law faculties have decided, for reasons of academic freedom, to leave it to professors whether students may use the formidable new technology. Some academics have wholeheartedly embraced AI and incorporated it into their classes, believing that leveraging it in the classroom will arm students with necessary skills for an increasingly technology-driven legal profession. Others, uneasy about duplicity and ethical concerns, and about its ineffectiveness and inaccuracy as a legal research tool, have flatly refused to allow its use.
“It’s a matter of academic freedom so if professors believe that it’s useful to use ChatGPT in their teaching, then they’re free to do so,” said Finn Makela, a labour law professor at the Université de Sherbrooke. “That being said, I have other colleagues who are like, ‘Nope, never.’ And the university has come to the conclusion that the normal academic integrity guidelines against plagiarism and so forth apply to the use of ChatGPT.”
A similar approach has been adopted by the Department of Legal Sciences at the Université du Québec à Montréal (UQAM). Aside from a university-wide policy aimed at preserving academic integrity, UQAM has no cross-disciplinary blueprint targeting the use of AI, said law professor Alexandre Lillo, who early this year co-hosted a workshop for the department with an educational specialist, aimed at explaining generative AI and outlining its potential and its flaws.
Alexandre Lillo, Université du Québec à Montréal
At the University of Ottawa’s Faculty of Law – Civil Law Section, professors also have the choice, for now, of whether to use generative AI tools. The university established a nine-member working group, composed of administrators and professors, that is researching the use of the technology at other institutions and assessing the potential ethical challenges posed by AI.
The group is also looking at how academic integrity should be addressed in this brave new world, along with privacy and transparency in teaching and research. A report has been drafted, and a final version is expected to be published shortly, said David Robitaille, the faculty’s vice-dean. At the same time, the faculty itself set up an AI committee, composed of professors who are AI experts along with others “who are very concerned about university teaching,” said Robitaille.
“It’s a committee that will be meeting throughout the semester, because it’s a fast-developing area, so we’re going to have to keep an eye on what’s being done elsewhere too,” added Robitaille. A draft guideline aimed at students and professors is expected to be issued by the end of the year.
In the meantime, University of Ottawa students who are allowed by their professors to use AI tools must pay heed to the institution’s academic integrity rules. Students must indicate that they have used AI, cite the specific tool as a source and explain how it was used. “It’s no different than if students referred to a textbook or a book written by someone else; otherwise, it could be considered fraud,” explained Robitaille.
McGill University is also in the midst of drafting a university-wide strategy. Robert Leckey, dean of the Faculty of Law, declined to comment but pointed to work completed in mid-June by a working group that examined how to integrate generative AI into McGill’s academic mission.
A subcommittee on teaching and learning, established early this year and composed of professors, administrators and students, issued two recommendations to the Academic Policy Committee, a standing committee of the Senate charged with making recommendations to the Senate on academic policy. The Academic Policy Committee will discuss the recommendations at the end of September.
The working group, underscoring that anecdotal evidence suggests generative AI tools are “already in heavy use” by the university community, noted that some McGill instructors wanted to make the university a “ChatGPT-free zone.”
“Given the potentially negative implications of integrating generative AI into the classroom, and indeed into the University culture itself, this desire may be understandable,” said the working group report. “However, even if this were a shared desire, it would be an extremely difficult vision to implement.”
As at other institutions of higher learning, academic integrity was a key concern for the McGill working group. The use of generative AI will increasingly influence how academic integrity is defined, and as the technology evolves in the coming months and years, its use will become harder to discern “regardless of instructors’ familiarity with their students’ work,” said the report, which advises the institution to forgo the disciplinary route. Instead, it suggests following UQAM’s pathway: encouraging teachers to get to know their students better and to build “assessment practices” that take the use of generative AI into account.
“Our focus is on the integrity of the education itself and not on punitive approaches to the use of generative AI,” said the report, which outlined five principles meant to serve as an “operational framework” for integrating the new technology at McGill. The working group recommended, among other things, that educational programming on the ethical implications of generative AI tools be developed and delivered to staff, instructors and students. Instruction should also be provided to help instructors and students identify biases inherent in generative AI tools and to respect intellectual property, academic integrity and privacy considerations when using the technology.
The subcommittee also advises that it should be up to professors whether to use an “approved” generative AI tool in their teaching and assessments, and that students remain responsible for ensuring the accuracy of information generated by tools such as ChatGPT and for acknowledging its use.
Hugo Tremblay, Université de Montréal
The Université de Montréal, for its part, amended its disciplinary rules on plagiarism and fraud for undergraduate students in mid-February, essentially stipulating that the use of GenAI must be “explicitly authorized.” Meanwhile, it remains up to professors and lecturers to decide whether GenAI may be used, and how, for teaching and assessment purposes.
“The university is adapting to social change, and the faculty is taking account of the internalization of AI use by the new students who enrol with us,” said Hugo Tremblay, vice-dean of the Université de Montréal’s faculty of law.
“We are also keeping in mind the professional aspect of the training, which leads students to various fields of practice where AI is already well established or has the potential to become so.”
The Université de Moncton, which runs the only common law program in Canada offered entirely in French, provides succinct guidelines to professors and students. The institution plainly states that AI tools can be “very useful,” and that university is an “ideal time” to learn how to use them critically and responsibly. Instructors are given three options: a formal prohibition of GenAI tools, under penalty of plagiarism; limited use of the technology, permitted during certain assessments and according to instructions given in class; and use in assessments according to guidelines provided in class.
The guidelines, like those at other universities, also discourage instructors from using detection software that purports to determine whether schoolwork is AI-generated. According to tests performed by the university, the detection tools were not effective in identifying the origin of the French texts submitted to them. Moreover, the university is concerned that submitting a student’s work to AI detectors may violate their privacy. There are also concerns that, since AI technology is evolving very rapidly, detection tools are in a losing race to catch up with the latest developments.
The Université Laval in Quebec City, which has invited the university community “to start exploring” the possibilities and limitations of ChatGPT in its teaching, learning and assessment activities, is equally laconic in its GenAI guidelines. “While applications such as ChatGPT pose a number of challenges, they also offer the potential to positively transform the learning experience of students in a variety of ways,” the guidelines note. Instructors are prompted to discuss the technology “openly” with their students, and the university suggests assignments that require critical analysis. In advice that runs against a growing trend, it also recommends that instructors experiment with OpenAI’s new application for detecting ChatGPT-produced text.
Anne-Marie Laflamme, dean at the faculty of law at the Université Laval, declined to comment.
Nicolas Vermeys, Université de Montréal
Biases seep into algorithms because people write the algorithms, choose the data they use and decide how to apply their results, explained Nicolas Vermeys, a law professor at the Université de Montréal.
Numerous studies have identified potentially harmful biases. Facial recognition AI, according to a study published by the U.S. Department of Commerce, misidentifies people of colour more often than white people. Other studies have revealed that natural language processing, which underpins GenAI, demonstrates disability, gender and racial bias.
In an article Vermeys co-wrote, titled The Trials and Tribulations of AI, he recounts how an algorithm used by Google to answer user questions erroneously declared that former president Barack Obama, a Christian, was a Muslim. The algorithm was not at fault, Vermeys points out. It simply gathered data from the Internet and was unable to distinguish good data from bad. In other words, the accuracy of an algorithm depends on both its programming and its data. But, writes Vermeys, given the volume of data on the Internet, it may be impossible to “adequately determine and inspect” the data an algorithm uses.
“Everyone has to take this into account,” said Vermeys. “Anyone basing any decision on the use of these algorithms needs to understand that, like any AI algorithm, it has been programmed by people and uses data generated by people. And these people will necessarily have certain biases and these biases will be reproduced in the algorithm. And that’s something you can’t rule out. So we absolutely have to take this into account and say that perhaps in some cases the tool can be very useful, and I’m not questioning the usefulness of these tools, but we also have to understand their limits.”
French-language users face other conundrums. The organizations behind GenAI scrape far less data from French-language websites than from English-language ones. Put otherwise, GenAI tools are trained mainly on American and English-language sources, which makes them less effective in French and creates a “much greater risk of error” in the French-language results the nascent technology generates.
The latest version of ChatGPT, for instance, outperformed most law school graduates on an American bar exam in an experiment conducted by two law professors. In stark contrast, ChatGPT performed dismally on a Quebec Bar School exam, obtaining a final score of 12 per cent. “Tools need to be developed and trained using local data,” said Vermeys, who is writing a book on the intersection between civil liability and new technologies. “That’s where Quebec faces a serious problem. The quantity of information to train our algorithms, to provide us with quality information, simply isn’t there.”
Ironically, French-language users may be spared an issue that English Canadian users could face – the Americanization of Canadian law. The prevalence of American firms in the AI sector, which dominate legal IT systems and databases, coupled with the similarities between English Canada’s common law system and American law, creates a risk of Americanization, said Vermeys. “There are obviously many more similarities between English Canadian law and American law than between Quebec law and American law, so there is a risk of Americanization if we rely on (GenAI tools) blindly,” he remarked.
The impact that generative AI tools may have on Quebec’s civil law is something that the Université de Montréal’s faculty of law is now examining, said vice-dean Tremblay. The faculty’s research centres and specialists in civil law and the law of new technologies are at the “forefront of deliberation” provoked by generative AI in civil law. “These specific issues are still in the process of crystallizing,” said Tremblay.
While French-language law faculties and universities are trying to figure out the best way to deal with GenAI, other legal actors also want to have a say in the development and implementation of AI policies in the milieu.
Finn Makela, Université de Sherbrooke
The Association of McGill Professors of Law (AMPL), which was certified a year ago, also intends to eventually look into the matter, said Evan Fox-Decent, law professor and head of the AMPL. “We’re in the midst of putting together our monetary proposals for collective bargaining and contending with three separate bits of litigation,” said Fox-Decent. “We should be looking at this, but bandwidth is limited.”
Sèdjro Hountohotegbè, president of the Association of Quebec Law Professors (APDQ) and a law professor at the Université de Sherbrooke, said that “the issues of accessibility, use and impact of generative artificial intelligence (GenAI) tools on the academic world are very topical.” He added that “it is an issue of concern to its members,” and that the APDQ may set up a working group “depending on developments on this issue in the future or depending on the topicality of the issue for APDQ members.”
In the interim, despite the well-documented failings of GenAI, such as “hallucinating,” or yielding results that are falsehoods, many law professors are enthusiastic about working with the tool. Even though ChatGPT and its ilk are notoriously poor legal research tools, some instructors believe their very shortcomings can make them rich pedagogical material for teaching.
“I personally use it, for example, to create certain practical cases or scenarios,” said Lillo, an environmental law professor. “I also used it for assessments. I asked ChatGPT to generate a description of a particular legal system and I asked the students to correct the mistakes, and I got feedback from people who said they really enjoyed this type of exercise.”
Vice-dean Robitaille at the University of Ottawa will not be among those changing their teaching approach, at least in the short term, even though there are “undoubtedly teachers who will start to use it, either to show its limits or to explain the ethical aspects.”
He is, however, considering using ChatGPT as a pedagogical tool to demonstrate to students that it cannot “totally” replace the work of lawyers. He believes it is very important for law students to think for themselves, learn how to analyze complex rulings, and draft legal documents, “which is not easy at first when you're starting out.”
But in spite of his concerns over the rapid development of GenAI and its potential impact on teaching and learning in higher education, Robitaille believes it would be naïve to ban the use of the burgeoning tool in university circles. “It certainly raises certain concerns, but we have to guide our students in the ethical use of the tool,” said Robitaille. “We have to be cautious about using it. It’s a fast-moving tool, and we need to teach our students to stay on top of things, to always have the reflex to double-check what ChatGPT says.”
General positions and resources issued by universities for instructors and students
General position of the Université de Sherbrooke on the contribution of artificial intelligence to education in all disciplines.
Université du Québec à Montréal (UQAM) – Resources for professors
Université Laval – Resource page for instructors
Université de Moncton – Resource for students and instructors
McGill University – Recommendations
University of Ottawa – Online resources on AI use