RedShark News - Video technology news and analysis

Video authenticity is a potentially huge problem to solve

Written by Simon Wyndham | Dec 19, 2018 9:00:00 AM

We have reached the tipping point where we might not be able to trust anything we see. This is potentially a huge problem, and one that needs to be solved if we are to continue being able to trust the images we see on our televisions.

The phrase of the last couple of years has been “fake news”, a term invented by a certain individual to describe, well, fake news stories, whether they are true or not. The intention is to plant a seed of doubt, because regardless of whether you believe him or not, some news organisations have been found to embellish or make up stories over the years. This will come as no surprise to readers in the UK, used to tabloid journalism and the various court cases we've seen. Untruths often get repeated so many times that, despite plenty of evidence to the contrary, a lot of people still believe them.

But while the written and spoken word can be debated and put right with evidence-based facts, it is much, much harder to disprove something that appears to be right in front of someone's eyes.

Recently we’ve seen videos in which AI learns how to replicate full body movement. This is whimsically referred to as “deepfake” technology, and we've seen it before in interview settings, where people can be made to say whatever the video creators wish, with uncanny accuracy.

CGI vs reality

We have already reached a point where CGI animation can be made pretty much indistinguishable from reality, and although these new AI-generated videos are not going to fool anyone yet, they will get better.

A predictable response to this would be to say that, of course it will get better, it's inevitable. But we need to be rather more on the ball than this. We know that fakery that is indistinguishable from reality will arrive, and when it does we are in very dangerous territory indeed.

When we see the way in which politics is often carried out today throughout the world, we need to be incredibly wary about how this technology could be used in the future. You might well say that I am simply pointing out the obvious, and you'd be right. But the fact that something is obvious doesn't always mean that the problem is at the forefront of people's minds. In other words, we could be lulled into a false sense of security by thinking that the potential is so obvious that surely somebody, somewhere is working on it.

But are they? Now is really the time at which some sort of solution needs to be found. But how could it be done?

(Warning: the video below contains profanity, but it is a good example of where things are right now.)

If you look closely you can see the fakery in this video, but as technology gets better, so will the results

Blockchain

There are technologies such as the blockchain-based PROVER. PROVER aims to authenticate that a video has been taken using a real video camera or mobile device by having the camera user create a Swype-code, aligning the video with dots on the screen at the beginning of the recording. The resulting hash code can then be verified later on to confirm authenticity.
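The core idea behind this kind of system can be illustrated with a simple sketch. This is not PROVER's actual protocol (which involves the on-camera Swype-code and blockchain storage); it is only a hypothetical illustration of the underlying hash-and-verify principle, where any change to the footage changes the fingerprint.

```python
# Illustrative sketch only: a real system like PROVER anchors the hash on a
# blockchain at capture time. This just shows the hash-and-verify idea.
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Compute a SHA-256 digest of the raw video data at capture time."""
    return hashlib.sha256(video_bytes).hexdigest()

def verify(video_bytes: bytes, recorded_hash: str) -> bool:
    """Later, recompute the digest and compare it with the published one.
    Even a single altered byte produces a completely different digest."""
    return fingerprint(video_bytes) == recorded_hash

original = b"...raw video frames..."   # stand-in for real footage
h = fingerprint(original)              # published at capture time
print(verify(original, h))             # True: footage unchanged
print(verify(original + b"x", h))      # False: footage tampered with
```

The point of publishing the hash somewhere tamper-proof, such as a blockchain, is that neither the camera owner nor anyone downstream can quietly swap in edited footage afterwards.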

There are, of course, other blockchain-based authentication systems for stills photography, such as Truepic. This is an important development, because being able to authenticate an image or video will be incredibly important for everything from insurance claims to journalism.

But it will need to be taken further. Currently, for example, Truepic produces a verification URL which allows users to check an image's authenticity. The service goes as far as to use wifi signals and even barometric pressure readings to help confirm where a picture was taken. This is all well and good, but what happens with regard to television news?

There are television channels that we already know operate as government propaganda arms. If, for example, we are shown a build-up of military forces on a border, or aggression from an army in some part of the world, how do we, as viewers, verify what we are seeing? Currently we rely on independent investigative journalists to offer evidence of what is really happening, and we place, or at least assume, trust in them. But in a world where there is no black and white, only shades of grey, it will become increasingly difficult to know who is really working for whom.

This is of course starting to border on the need to break out the tin foil hats, so let me bring this back down to earth. As television viewers we will need a real-time way to verify what we are seeing. This may need the verification metadata to be sent to the television set so that it can contact the relevant servers and get an authentication confirmation. But then how do we trust that the metadata is genuine to begin with?
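That last question, trusting the metadata itself, is usually answered with cryptographic signatures. The sketch below is purely hypothetical: a real broadcast system would use public-key signatures so that the television set holds no secret at all, but an HMAC example at least illustrates the check a set could perform on incoming verification metadata.

```python
# Hedged sketch of verifying broadcast metadata. Real systems would use
# public-key signatures (the TV holds only a public key); this HMAC version
# only illustrates the tamper check itself. All names here are invented.
import hashlib
import hmac
import json

BROADCASTER_KEY = b"demo-key-not-real"  # hypothetical shared secret

def sign_metadata(metadata: dict) -> str:
    """Broadcaster side: sign a canonical encoding of the metadata."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(BROADCASTER_KEY, payload, hashlib.sha256).hexdigest()

def tv_verifies(metadata: dict, signature: str) -> bool:
    """TV side: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_metadata(metadata), signature)

meta = {"source": "field-camera-7", "timestamp": "2018-12-19T09:00Z"}
sig = sign_metadata(meta)
print(tv_verifies(meta, sig))      # True: metadata is as signed
meta["source"] = "studio-edit"     # someone tampers with the metadata
print(tv_verifies(meta, sig))      # False: signature no longer matches
```

Even with such a check in place, the trust question only moves one level up: the viewer must still trust whoever holds the signing key, which is exactly the problem raised above.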

It is a frightening thought that we may end up in a position where such drastic measures are necessary. But we are already in an information war that has led to previously stable democracies being tested almost to breaking point. And as technology continues to develop at an incredible pace, as well as becoming much more accessible to all of us, this is simply going to become an increasingly important, and necessary, problem to solve.