Efficacy versus efficiency: Grappling with AI use in a teaching clinic | Lilian Bahgat

By Lilian Bahgat

Law360 Canada (October 7, 2024, 2:16 PM EDT) --
Teaching clinics have long prided themselves on ensuring law students learn practical skills while serving the marginalized members of their community. So how do you guide law students in their use of an evolving technology such as artificial intelligence (AI)? And how do you set up guardrails to ensure proper supervision of these tools? This is a story of how a small legal clinic suddenly found itself thrust into the fascinating world of AI and how we used this experience to teach our law students.

Over 50 years ago, law students at the University of Windsor came together to start Community Legal Aid (CLA), interested in honing their legal skills while giving back to their community. As the poverty law clinic for Windsor Law, CLA sees hundreds of volunteer law students help the most vulnerable and marginalized in our community each year. As technology changed, the clinic adapted. Orientation included warnings on why “Googling” a legal issue is not proper research, how to note up cases and why social media posts make for great evidence. However, as advanced in our thinking as we imagined ourselves to be, the pandemic changed everything.

The pandemic forced the legal profession to rely on technology. Courts introduced new filing systems, and speeches at the opening of the courts made it clear that technology was here to stay. Tech supporters encouraged the use of AI, pointing to efficiencies. The skeptics urged caution, pointing to the possibility of errors. AI became a hot topic of discussion, with many law schools and regulators grappling with the responsible use of this tool.

In 2021, the Ontario government sought feedback on its proposed framework for the use of AI. The Law Commission of Ontario shared the consultation with the clinic system, inviting clinic participation. In researching our submissions, we discovered the dangers AI use can pose to marginalized and vulnerable communities, and those concerns were expressed in our submissions. This research was conducted as the Ontario government was moving forward with its digital-first platform. Our submissions resulted in an invitation to roundtable discussions with the Information and Privacy Commissioner of Ontario (IPCO).

During those discussions, a clear dichotomy formed between supporters and skeptics. The tension was between efficacy and efficiency, and the deeper we ventured into that tension, the more it became our theme. The IPCO invited us back to consult on police use of facial recognition technology and mugshot databases. That consultation process culminated in the publication of the IPCO’s Facial Recognition and Mugshot Databases: Guidance for Police in Ontario.

For three years we were exposed to the potential harms that AI use inflicts on marginalized communities, the risks it poses to the protection of privacy and the improvements it could bring to routine tasks. Again, the theme of efficacy versus efficiency took centre stage as we contemplated the effect of this technology on our clients and students. It informed our growing concern about how to address AI use at the clinic. The accelerated use of technology required us to rethink how we train law students.

AI’s introduction into the mainstream ushered in a new wave of risks for the clinic. We noted students’ reliance on it for research and drafting correspondence. This led to our first rule: “Do not use what you cannot correct.” We emphasized the need to know before you go and highlighted cases emerging first from the United States, and later here in Canada, where lawyers were reprimanded for relying on incorrect and fabricated research.

We demonstrated that AI was prone to errors, referred to as hallucinations, which some practitioners found out the hard way. We discussed how the technology should be used, delving into the reasoning others had proposed. A seasoned lawyer can spot basic errors in AI’s results; a law student may not. As lawyers, we are trained to question everything; our students will get there eventually.

A second danger AI use presented was the risk of breaching confidentiality. The arrival of publicly accessible AI systems required a heightened focus on reteaching confidentiality and on learning the law before turning to this technology. We repeatedly found ourselves explaining how the technology works.

Most law students were unaware of machine learning and algorithmic bias, and did not realize that the prompts they enter may be retained by the provider and used to train future models. We amended our confidentiality agreements and commitment agreements to address the use of AI in the clinic and on client files.

In the classroom, we witnessed AI use for research and presentations. Students relied on it to draft answers, formulate slide decks and create social media posts. Early on, staunch advocates encouraged the integration of AI into law school curricula. The teaching profession began contemplating how to assess students given their access to AI. Internally, our university facilitated these discussions, with some educators advocating for a change in pedagogy that embraces technology as a learning tool.

The clinic took a break and reconvened. This led to our second rule: “A tool is only as useful as the hand that wields it.” We encouraged using the technology to assist in developing the final product, while maintaining control and oversight over that product. Much like a licensee is responsible for the oversight of non-licensees, students must take ownership of their use of technology and responsibility for its final product. We discouraged AI use as a starting point and encouraged going to the source documents instead, whether legislation or case law. If AI had to be used, all work had to be cross-referenced with the pertinent legislation, and case citations had to be noted up. Our conversations naturally turned into teaching due diligence.

The overarching theme in grappling with technology in a teaching clinic is ensuring efficacy is not lost to efficiency. While the use of technology can mean faster completion of tasks, reduced wait times and higher production rates, it requires fail-safes and regular audits. As supervising lawyers, we need to adapt to the evolving technology while teaching accountability for its effect on our clients. This was our message to our law students.

Lilian Bahgat is review counsel at Community Legal Aid (CLA), where she oversees the civil practice and established key areas of focus including employment law, consumer protection, and elder law. In addition to her legal practice, Bahgat is a committed educator; she has taught at the Elder College, St. Clair College and Windsor Law and was recognized as an Open Learning Digital Fluency Fellow.

The opinions expressed are those of the author and do not reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
