The liar’s dividend: how are deepfakes impacting the justice system?

The recent webinar hosted by the American Bar Association, “The Impact of Deepfakes on the Justice System”, explores the world of deepfakes and their implications for law and ethics.

The speakers were chosen for their unique expertise and perspectives on AI and the law, as well as their deep knowledge of the broad and complex issue of deepfakes from both technical and legal standpoints. They included:

  • Maura Grossman - Research Professor at the University of Waterloo School of Computer Science

  • The Rt. Hon. Paul Grimm - Retired Judge of the US District Court for the District of Maryland and Director of the Bolch Judicial Institute at Duke Law School

  • Professor Hany Farid - Digital Forensics and Misinformation Expert at the University of California, Berkeley School of Information

Professor Maura Grossman introduced the “liar’s dividend”: the phenomenon whereby the existence of deepfakes makes it easier to cast doubt on genuine events. The result is a landscape where truth becomes plausibly deniable - and deepfakes undermine public trust.

In the context of misinformation and manipulation, deepfakes could be used to fabricate false narratives, potentially influencing public opinion and even election outcomes. The ethical responsibility of technology developers and users was a key point in this discussion, with a call for the development of ethical guidelines and standards to govern the use and distribution of deepfake technology.


The technological arms race in deepfake detection

There is a crucial need to maintain the integrity of digital content.

Professor Hany Farid emphasised how rapid advances in AI and machine learning have led to increasingly realistic deepfakes. This progress presents a challenge for detection techniques, which must constantly evolve to keep pace - a game of deepfake whack-a-mole.

Getting to grips with the technological arms race between deepfake creation and detection means weighing reactive techniques against proactive ones - an area in which Farid called for ongoing research and development.
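To make the detection side of that arms race concrete, below is a minimal illustrative sketch - not a method attributed to Farid - of one classic signal-level check: measuring how much of an image’s energy sits in high spatial frequencies, where some generative pipelines leave periodic upsampling artifacts. The file name is a placeholder, and any real workflow would treat this as one weak signal among many.

```python
# Illustrative sketch: inspect an image's frequency spectrum for the
# periodic upsampling artifacts some generative models leave behind.
# This is a toy heuristic, not a production deepfake detector.
import numpy as np
from PIL import Image

def high_frequency_energy_ratio(path: str) -> float:
    """Fraction of spectral energy outside the central low-frequency band."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    # Mask out a central low-frequency square (half the image in each axis).
    cy, cx = h // 2, w // 2
    low = spectrum[cy - h // 4: cy + h // 4, cx - w // 4: cx + w // 4].sum()
    total = spectrum.sum()
    return float((total - low) / total)

# Placeholder path; a real workflow would compare this ratio against
# a distribution measured on known-genuine footage from the same device.
ratio = high_frequency_energy_ratio("frame_0001.png")
print(f"High-frequency energy ratio: {ratio:.3f}")
```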



The “deepfake defence” and its legal challenges

Judge Grimm brought up the emerging “deepfake defence” in legal cases, such as a recent Tesla case and trials of individuals involved in the January 6th riots. In these instances, defendants attempted to refute evidence by claiming it was a deepfake – essentially arguing that the evidence presented against them was not real.

Cross-examination also gets caught in the crosshairs: merely suggesting to a witness that evidence could be a deepfake can plant seeds of doubt, complicating the judicial process even when there is no factual basis for the claim.

This new form of defence presents a unique challenge for courts, which must now grapple with digital evidence being challenged on its authenticity. What potential solutions exist that stay within legal and practical boundaries?


Future of watermarking and authentication technologies

What about adding a watermark to deepfake content? One approach is the implementation of standards from the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA) in cameras.

Professor Farid discussed the current state of watermarking in generative AI - a move to address ethical, copyright and authenticity concerns by embedding detectable marks in AI-generated content. He noted Adobe's adoption of this system and the ongoing efforts to bring other big AI players on board. If body cams, dash cams, CCTV and similar devices were C2PA compliant, it would help win the battle, which he believes lies more in implementation than in the technology itself.

However, integrating C2PA into devices like smartphones is still years away, despite the specifications being out and the technology being developed.
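At its core, C2PA works by attaching a cryptographically signed provenance manifest to content at the point of capture. The sketch below is a loose, simplified illustration of that signing idea only - it does not produce real C2PA manifests - using the third-party Python `cryptography` package. Any edit to the file bytes breaks verification.

```python
# Loose illustration of the idea behind signed provenance manifests
# (NOT the actual C2PA format): sign a hash of the content at capture,
# verify it later. Any change to the file bytes breaks verification.
# Requires the third-party "cryptography" package.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_capture(data: bytes, key: Ed25519PrivateKey) -> bytes:
    """Device side: sign the content hash at the moment of capture."""
    return key.sign(hashlib.sha256(data).digest())

def verify_capture(data: bytes, signature: bytes, pub: Ed25519PublicKey) -> bool:
    """Verifier side: check the content bytes against the signature."""
    try:
        pub.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()   # in practice, provisioned in the device
footage = b"...raw video bytes..."
sig = sign_capture(footage, key)

print(verify_capture(footage, sig, key.public_key()))             # True
print(verify_capture(footage + b"edit", sig, key.public_key()))   # False
```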


Maintaining integrity in the age of digital deception

As the discussion made clear, there is a pressing need for advancements in detection and authentication technologies, legal reforms and education.

CCTV, and now body-worn camera footage, is the de facto evidence standard and is held up as objective truth in court. This assertion is now under attack. Justifiable doubt from witnesses, victims and even law enforcement practitioners is being cast on the authenticity of image, video and audio files.

The key to preserving the admissibility of digital evidence lies in meticulously documenting its source, how it was captured, and the subsequent access and editing history. Tools for manipulating digital evidence are now in the open, are relatively simple to use, and are constantly improving. Digital Evidence Management Systems (DEMs), Multimedia Redaction Platforms (MRPs), and forensic analysis tools must adapt to this changing landscape.
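One common pattern for making that access and editing history tamper-evident is a hash-chained audit log, where each entry commits to the hash of the entry before it, so a retroactive change breaks the chain. The sketch below illustrates the pattern in Python; all names are illustrative, not the API of any particular DEMS product.

```python
# Minimal sketch of a tamper-evident, hash-chained audit log of the kind
# a DEMS could keep per evidence file. Every entry commits to the
# previous entry's hash, so retroactive edits are detectable.
import hashlib
import json
import time

class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, actor: str, action: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "action": action,
                "time": time.time(), "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash; False means the history was altered."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: entry[k] for k in ("actor", "action", "time", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or entry["hash"] != recomputed:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append("bodycam-0412", "captured clip.mp4")
log.append("officer.smith", "uploaded to evidence store")
log.append("analyst.jones", "redacted faces, saved clip_redacted.mp4")
print(log.verify())  # True; altering any past entry makes this False
```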


Now that digital manipulation is not just possible but increasingly accessible, the judicial system must evolve to recognise and address these challenges. The insights shared in the webinar underscore the urgent need for a comprehensive strategy that encompasses technological innovation, legal reform and education to navigate the complexities posed by deepfakes and other forms of digital manipulation.

A huge thank you to the knowledgeable speakers Maura Grossman, Professor Hany Farid and the Rt. Hon. Paul Grimm for highlighting the challenges that deepfakes are already posing to the judicial system.


Take the lead: protect the sensitive data you handle with Secure Redact today.
