
Detecting the deepfakes


A new open standard holds out the hope of being able to verify what is real, even when it’s virtual.

Despite all the skill, both artistic and technical, that goes into creating them, it's not hard to see that deepfakes present a problem. We have long given up on trusting the veracity of the still image, and now the moving image is on the same journey towards suspicion. And for all the entertaining uses that go viral on social media, there are plenty of darker ones: everything from fraud and revenge porn to the potential destabilisation of entire governments.

Political misinformation, hate speech, harassment… it's a problem, and a growing one. Talk to any news broadcaster and the issue is right at the top of their agenda. How, when you're in constant competition with other broadcasters to be first on air with your news, do you ensure you're not being hoodwinked? Is that video of that politician confessing his admiration for Problematic Historical Figure A real? Did they really say that? And is that celebrity really doing that with a member of the Royal Family?

Newsrooms have to make decisions about such things fast, and the volume of deepfakes is increasing all the time, with what are referred to as ‘non-consensual and harmful deepfake videos crafted by expert creators’ doubling roughly every six months.

There is, of course, advice on how to detect them. MIT offers an eight-step program, pointing out that there is usually an incongruence in some dimension of a deepfake. This is true, and we can usually pick out something that is wrong with them even if we're not always consciously aware of it. But deepfake technology is improving all the time, and there is inevitably a certain amount of subjectivity in steps such as "Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes?"

Of course, you can use the same technology that created the deepfake to detect it: usually a Generative Adversarial Network, in which two neural networks are pitted against each other, one generating fakes and the other learning to spot them. But this approach is a) slow and b) tends simply to make the original fakery better, since the generator learns from whatever the detector catches. What is needed is a guarantee of authenticity that can be quickly checked, and that's where the C2PA (Coalition for Content Provenance and Authenticity) reckons it comes in.

Digital provenance

The C2PA was launched in February 2021 and counts Adobe, Arm, the BBC, Intel, Microsoft and Truepic among its founding members; Twitter joined a few months later. While there are various theoretical approaches to detecting deepfakes (blockchain features prominently in many proposals), the C2PA's recently published specification is the first to set out a working technical framework for digital provenance.

Essentially, it provides a series of statements, called assertions, that cover a range of areas surrounding a digital asset: capture device details, author, edit actions, and more. These are wrapped up into a digitally signed entity called a claim, alongside W3C Verifiable Credentials for the individual people and companies involved, and then everything is bundled together into something termed a manifest by a hardware or software component called a claim generator. All of this is cryptographically protected, and the basis for any decision about the veracity of the resulting digital file is the identity of the entities associated with the cryptographic signing key used to sign the claim in the active manifest.
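To make the shape of that concrete, here is a minimal toy sketch in Python. It is emphatically not the real C2PA data model (the specification defines its own serialisation, certificate handling, and embedding format); the assertion labels, field names, and the use of a throwaway Ed25519 key from the cryptography library are all illustrative assumptions.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative assertions about an asset (labels and fields are invented
# for this sketch; they are not the real C2PA assertion schema).
assertions = [
    {"label": "capture.device", "data": {"make": "ExampleCam", "model": "X1"}},
    {"label": "author", "data": {"name": "Jane Reporter"}},
    {"label": "edit.actions", "data": {"actions": ["crop", "colour-grade"]}},
]

# Stand-in for the raw bytes of the captured video file.
asset_bytes = b"...raw video bytes..."

# The claim gathers the assertions plus a hash of the asset itself,
# tying the claim to this exact file.
claim = {
    "assertions": assertions,
    "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
}

# The claim generator signs the claim. In the real system the signing key
# is tied to credentials identifying the person or company; here we just
# generate a throwaway key for illustration.
signing_key = Ed25519PrivateKey.generate()
claim_bytes = json.dumps(claim, sort_keys=True).encode()
manifest = {
    "claim": claim,
    "signature": signing_key.sign(claim_bytes).hex(),
}
```

The point to notice is that trust flows from the signing key: a verifier decides whether to believe the file by deciding whether it trusts whoever controls that key.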

In other words, it provides an exact record of who created the asset and who has worked on it since. As a news organisation, or even as a consumer, you can then decide whether you trust it. 

As an example, say a news cameraperson uses a C2PA-enabled capture device while covering a newsworthy event. The assets are brought into a C2PA-enabled editing application and, after a first pass of editing, sent from the field to an editing suite back at base. An editor then makes additional edits, also using a C2PA-enabled application, before the finalised asset is moved into the news organisation's content management system, which is likewise C2PA-enabled, and broadcast and/or posted to social media.
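Continuing the toy model above, each stage of that workflow could append a new signed manifest that hashes both the current state of the asset and the previous manifest, preserving the chain of custody. Again, this is a hedged sketch of the general idea, not the actual C2PA ingredient mechanism, and every name below is illustrative.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def append_manifest(history, assertions, asset_bytes, signing_key):
    """Append a manifest that links back to the previous one (toy model)."""
    claim = {
        "assertions": assertions,
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
        # Hash of the prior manifest, so no earlier step can be quietly
        # dropped or replaced without breaking the chain.
        "previous_manifest_hash": hashlib.sha256(
            json.dumps(history[-1], sort_keys=True).encode()
        ).hexdigest() if history else None,
    }
    claim_bytes = json.dumps(claim, sort_keys=True).encode()
    history.append({"claim": claim,
                    "signature": signing_key.sign(claim_bytes).hex()})
    return history

# Capture in the field, then an edit back at base, each step signed by
# its own (throwaway, illustrative) key.
camera_key = Ed25519PrivateKey.generate()
editor_key = Ed25519PrivateKey.generate()
history = append_manifest([], [{"label": "capture.device",
                                "data": {"make": "ExampleCam"}}],
                          b"raw footage", camera_key)
history = append_manifest(history, [{"label": "edit.actions",
                                     "data": {"actions": ["trim"]}}],
                          b"trimmed footage", editor_key)
```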

At each stage of the journey the manifest can be checked to confirm that all the companies and people associated with it are trustworthy. And the end consumer can even do that themselves on social media with a C2PA-enabled application. The manifest is hard bound to the asset using cryptographic hashes of its content, and the W3C credentials are verified, making spoofing, impersonation, and other chicanery if not impossible then at least very, very difficult.
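In the toy model, a verifier's job is twofold: recompute the asset hash to check the hard binding, and check the claim's signature against a public key it trusts. Here is a minimal sketch continuing from above; the real specification additionally validates certificate chains and the W3C credentials, which this deliberately omits.

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature

def verify_manifest(manifest, asset_bytes, trusted_public_key):
    """Toy check of the hard binding and the claim signature."""
    claim = manifest["claim"]
    # 1. Hard binding: does the manifest describe *this* exact file?
    if hashlib.sha256(asset_bytes).hexdigest() != claim["asset_hash"]:
        return False
    # 2. Signature: was the claim really signed by a key we trust?
    claim_bytes = json.dumps(claim, sort_keys=True).encode()
    try:
        trusted_public_key.verify(bytes.fromhex(manifest["signature"]),
                                  claim_bytes)
    except InvalidSignature:
        return False
    return True

# e.g. verify_manifest(history[-1], b"trimmed footage", editor_key.public_key())
```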

Is it perfect? No. Bad actors can subvert the system, and when it comes to media organisations some are definitely more trustworthy than others; several have recently been caught passing off fake news stories that fit their own political agendas. It is also going to need buy-in from a lot of software and hardware vendors across the digital ecosystem. But is it better than what we currently have? Yes. Because, frankly, at the moment pretty much the only thing protecting us from deepfakes is our own instinct for what looks right and what doesn't. Throw enough computational power at the problem and even that instinct will be fooled more often than not.
