The Judicial Conference's Advisory Committee on Evidence Rules said in a report Friday that federal courts have dealt adequately with forged evidence for years under the existing rules of evidence, a track record that "cautions against a special rule on deepfakes."
Courts already require a litigant who alleges that digital or social media evidence has been hacked to offer some proof of that claim, the committee pointed out.
"And courts have imposed that initial requirement on the opponent without relying on a specific rule," the committee said, though it added a caveat that the current means of authenticating evidence, such as familiarity with a voice, might "need to be tweaked."
"The courts are basically saying, 'We don't really think we have to do all that much. All we need to do is trim around or tweak the margins,'" Rebecca Delfino, a professor at Loyola Law School, Los Angeles, told Law360 Monday. "I don't think that's going to do it."
The committee's insistence that the problem of deepfakes is just one of forgery is "an oversimplification," according to Delfino.
It won't be long before any case involving image or voice evidence is subject to potential allegations that the evidence is fake, Delfino explained.
In fact, such allegations have already been raised in at least one high-profile case involving Tesla CEO Elon Musk.
"And just saying that it's forgery and we've dealt with it and the rules are mostly sufficient doesn't recognize its full impact," she said. "They're being a little too cautious."
The judiciary is right to move carefully before instituting any rule changes, but some change is necessary, according to retired federal judge and Duke University School of Law professor Paul Grimm.
AI-generated deepfakes are becoming cheaper and easier to produce, and the technology is getting so good that even experts have a hard time differentiating between real and fake, Grimm told Law360.
"We've always had fake evidence, but never fake evidence that so many people could produce so convincingly," he said.
The committee's report comes after an April 19 meeting at which it heard from panelists about several proposed changes to the Federal Rules of Evidence to account for AI-generated evidence.
One of the suggested changes was the addition of a new rule — Rule 901(c) — which was proposed by Grimm and professor Maura Grossman, who teaches at the University of Waterloo and Osgoode Hall Law School in Canada.
"If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence," that rule would have read.
The committee rejected that proposal Friday, saying that requiring a litigant to show that a challenged piece of evidence is more likely than not fake placed "too high" an initial burden on the challenging party.
Committee members did say that those challenging AI-generated evidence as fake should have to show some proof, and "should not have the right to an inquiry into whether an item is a deepfake merely by claiming that it is a deepfake."
The committee expressed skepticism, however, that a separate rule is necessary to establish that requirement.
Grossman told Law360 Monday that she was "disappointed" that the committee chose not to adopt her and Grimm's proposal and instead to take a "wait and see approach."
"We believe that deepfakes pose a formidable challenge for the justice system, especially for jurors who may be unduly impacted or prejudiced by such evidence, particularly computer-generated images, audio, and video, which can have profound impacts on human perception and memory," Grossman said.
"We also believe that judges need tools that will allow them to prevent the exposure of jurors to these kinds of computer-generated evidence under circumstances where there is credible evidence that the media is a deepfake," she added.
The committee's report means those judges are not going to get those tools from the judiciary for at least three to five years, according to Delfino.
Delfino proposed changing Rule 901 to mandate that evidence's authenticity be decided by judges rather than juries, since studies are beginning to show, she says, that jurors can't reliably detect deepfakes.
"Seeing is no longer believing, expertise is no longer credited, and so the idea that this is just really a problem of forgery, that doesn't cut it," Delfino said. "It is so much more sophisticated information, broad, complex for all the players involved."
The committee dismissed Delfino's proposal "out of hand" after discussing it at a hearing in October, according to her.
"I think they are not right about that," Delfino said. "They have seen what they consider to be little waves that they can ride. And I'm looking farther out on the ocean, and I see much larger waves of complicated AI configurations that they haven't anticipated."
Not everyone disputes the committee's position, however; some experts agree that no rule changes are currently needed.
"So far, the courts seem to be handling deepfake issues ably enough, so no rule change seems urgently required," according to Riana Pfefferkorn of the Stanford Internet Observatory, who is a former associate in the internet strategy and litigation group at Wilson Sonsini Goodrich & Rosati PC.
It is worth revisiting the topic in the future, Pfefferkorn acknowledged.
The evidence rules committee may do exactly that. It didn't outright reject all proposed rule changes, but said it intends to continue evaluating the issue, according to Grimm.
Scholars like Grimm, Grossman and Delfino say they plan to continue refining their proposed changes.
The committee is scheduled to meet again on Nov. 8, according to the report.
"The committee remains aware of the challenge of drafting rules that take three years to enact, to cover a rapidly developing area in which three years is like a lifetime," the committee's report said. "The need to avoid obsolescence by the time of enactment requires rules to be general — perhaps too general to be helpful."
--Editing by Alex Hubbard.