Most previous attempts at re-applying one person’s facial expressions onto another have suffered from limitations such as extensive artefacts that give the game away, or they have only worked with a static face. Deep Video Portraits is different in that it is not only capable of replacing someone’s facial expressions and mouth movements, but it will transfer the imposter’s head movements as well.
The result is totally convincing. The video below demonstrates a number of examples, from Theresa May through to Vladimir Putin, with everything including head, eye, eyebrow and mouth movements transferred from the source actor. The system is so thorough that it even manipulates background shadows.
To further emphasise how good the system is, there is even an example where the researchers have transposed the expressions and movements from the original video back onto itself, the result being pretty much indistinguishable from the original footage.
And that, clearly, might be a problem as we move forward. The potential for fraudulent video clips is obvious, so the prevalence of these sorts of systems might necessitate some way of verifying in future whether footage has been modified. However, while such accurate manipulation will raise questions of authenticity, such systems do have benefits.
Whilst film aficionados like watching films in their original language, for many people dubbing is far more acceptable than subtitles. But redubbed films always suffer from mismatched lip synch, so a system such as this could make multi-language versions of films much more convincing.
The system works by training a neural network on footage of a subject's head and facial movements, and it can learn from as few as 2,000 frames. The final output is rendered using an Nvidia GeForce GTX Titan X GPU.
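To put that 2,000-frame figure into perspective, a little arithmetic shows it corresponds to only a minute or two of video at common frame rates (the frame rates below are illustrative, not from the paper):

```python
# How much footage supplies ~2,000 training frames at a given frame rate.
def footage_seconds(num_frames: int, fps: float) -> float:
    """Duration of video, in seconds, needed to supply num_frames at fps."""
    return num_frames / fps

# Typical delivery frame rates for film and video.
for fps in (24.0, 30.0, 60.0):
    minutes = footage_seconds(2000, fps) / 60
    print(f"{fps:g} fps -> {minutes:.1f} minutes of footage")
```

In other words, a single short interview clip of the target is plenty of training material.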
Acting in post
It also offers potential for the sort of nightmare actor who likes to tune their performance in post. By the same token, it gives directors the freedom to modify an actor's performance after the fact. This might not go down too well with unions, but it is on the cards nonetheless.
That said, the potential for abuse is still there. And in a world where the talk is of fake news, and where facts are disregarded on an almost industrial scale, technology innovators possibly do need to tread carefully. Because if we reach a point where we cannot trust that even a video recording of a politician saying something is real any more, then we really will be on a slippery slope.
Luckily, the creators of the Deep Video Portraits system are well aware of the potential for abuse. They say they have made the work public so that we are aware of just what is now possible, and they have a point: this sort of system will be developed whether we know about it or not, and if we aren't aware that such manipulation is even feasible, we can be duped all the more easily. They also call for additional safeguards, such as invisible watermarking, to ensure the authenticity of video.
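The researchers don't specify a watermarking scheme, but one classic illustration of the idea is least-significant-bit (LSB) embedding, where authentication bits are hidden in the lowest bit of each pixel so the change is invisible to the eye. The sketch below is purely illustrative (real forensic watermarks are far more robust to compression and re-encoding):

```python
import numpy as np

def embed_watermark(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a 0/1 bit array in the LSBs of the first bits.size pixels."""
    flat = frame.flatten()  # flatten() returns a copy, original is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_watermark(frame: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the LSBs."""
    return frame.flatten()[:n_bits] & 1

# Demo on a synthetic 64x64 grayscale "frame".
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)

stamped = embed_watermark(frame, mark)
recovered = extract_watermark(stamped, mark.size)
assert np.array_equal(recovered, mark)
```

Each pixel value changes by at most 1 out of 255, which is why the mark is imperceptible; a verifier that fails to recover the expected bits would flag the footage as altered.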
Such comprehensive fakery is a long way off, however. Deep Video Portraits cannot cope with the upper body or with highly complex backgrounds, particularly if they are moving. But at the rate these developments take place it surely can’t be long before such a system does cater for these aspects.