LEGAL ALERT: The Growing Threat of Deepfake Evidence in Litigation
An ever-evolving technology, artificial intelligence continuously presents new opportunities and novel concerns. One growing concern is the use of “deepfakes” in litigation. Deepfakes are AI-generated or manipulated videos, images, or audio files that convincingly depict events or statements that never occurred. The risk is twofold: litigants may offer falsified evidence, or they may make baseless claims that their opponent has offered falsified evidence. Both scenarios can undermine jurors’ confidence in genuine evidence and increase the time and cost of litigation.
In response, the Advisory Committee on the Federal Rules of Evidence is considering amending Rule 901. Currently, Rule 901 provides that an item of evidence is authenticated if the offering party produces proof sufficient to support a finding that the item is what the party claims it is. Under the proposed amendment, (1) a party challenging evidence as fabricated by AI would have to present enough proof for the court to find that the evidence could be a deepfake, and (2) if that showing is made, the evidence would be admissible only if the offering party demonstrates that it is more likely than not authentic. Although the amendment would add important safeguards and raise the evidentiary standard, it would not take effect right away. In the meantime, judges have been applying their own measures and handling the issue in different ways.
In some cases, judges have expressed displeasure with baseless deepfake claims. For example, in Huang v. Tesla, the plaintiff asked Tesla to admit that a video of Elon Musk was authentic. Tesla refused, claiming that because Elon Musk is well known, admitting the video’s authenticity would increase the likelihood of his being targeted with deepfakes. The court rejected Tesla’s argument as setting a dangerous precedent, one that would allow public figures to evade responsibility by claiming that admitting to authentic evidence makes future deepfakes more likely.
Additionally, courts have applied varying standards when evaluating the admissibility of alleged deepfake evidence. For example, in USA v. Khalilian, the defense moved to exclude a voice recording of the defendant, claiming it could be a deepfake. After the prosecution argued that non-expert witnesses could testify that the voice sounded like the defendant’s, the court remarked that the non-expert testimony was “probably enough to get it in.” In contrast, in Wisconsin v. Rittenhouse, the prosecution wanted to introduce video zoomed in on an iPad, but the defense objected, arguing that Apple’s pinch-to-zoom feature uses AI that could manipulate the footage. The judge required an expert to testify that the zoom function did not alter the video, demonstrating a more cautious approach.
Deepfake evidence poses a threat to the integrity of the judicial system. Until legislation and the rules of evidence catch up, the responsibility for handling deepfake allegations falls on judges, jurors, and attorneys. As a result, the outcomes of disputes involving alleged deepfake evidence remain unpredictable. If you or your business is affected by, or concerned about, the possibility of deepfaked evidence, please contact Kessler Collins for assistance in navigating these challenges and protecting your interests.