Christmas Replay: Stating that something in the future will be different to the present is a contender for The Most Obvious Thing to Say of the Year awards. But how it might be different will be beyond all current comprehension, although surprisingly the technology and the concepts that will fuel this change are already here.
I wrote recently about how resolution might not be as important now that machine learning and AI are advancing at such a rate that even old footage from SD cameras can be up-converted into convincing looking 4K. In the future, having this sort of processing in realtime inside the camera itself is a given. But there are other, much wackier sounding concepts that could well be on their way, and they're based on technology that is already here.
Remapping actors' faces
We know this as Deepfakery, a way of replacing an actor in an existing film with a different one. It can often be done extremely convincingly, too, and it is getting better. Now imagine that this could be done in-camera, live during filming, without the need for a huge bank of computers to do all the calculations.
Why would we want to do this? It's simple. Sometimes low budget actors are, well, low budget. Why would you want Keith Postlethaite, West Midlands amateur dramatics tour de force, as your lead star when you could have Robert DeNiro instead?
It's not just the face that realtime deepfakes could help with, either, it could be the voice as well. Now of course this wouldn't be legal under normal circumstances, but actors are already hiring out their likenesses. An actor like DeNiro might well still charge a pretty penny, but you'll save on the cost of a trailer and dedicated staff!
But this does have a serious point. Even on a low budget scale, if you were shooting a film in the UK but needed to have someone from the US, they could still appear in your movie. It also means that actors can effectively be double booked and 'do' two jobs at the same time.
Depth mapping and virtual sets, the end of green screen
Virtual sets are now a reality, but in the future it will be possible to create them in-camera, with the camera operator framing shots against an augmented reality view in the viewfinder.
Really, truly good high resolution depth mapping will make this possible since the camera will be able to effectively map out actors in 3D with no need for a green screen or a dedicated interior studio. These abilities will trickle down into prosumer equipment, yet will probably appear on phones first!
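To make the idea concrete, the 'virtual green screen' part of this amounts to little more than a per-pixel depth test. Here's a toy Python sketch; the tiny frames, the depth values, and the 2.5 metre foreground cutoff are all invented purely for illustration:

```python
# Toy sketch: using a per-pixel depth map as a matte instead of a green screen.
# All values here are invented for illustration.

NEAR_LIMIT = 2.5  # metres; anything nearer is treated as foreground (assumed)

def composite(foreground, depth, background):
    """Keep pixels whose depth is under NEAR_LIMIT, else show the virtual set."""
    out = []
    for fg_row, d_row, bg_row in zip(foreground, depth, background):
        out.append([fg if d < NEAR_LIMIT else bg
                    for fg, d, bg in zip(fg_row, d_row, bg_row)])
    return out

# Tiny example: 'A' = actor pixels, '.' = studio wall, '#' = virtual set
frame = [['A', 'A', '.'],
         ['A', '.', '.']]
depth = [[1.2, 1.4, 9.0],
         [1.3, 8.5, 9.0]]
virtual_set = [['#'] * 3, ['#'] * 3]

print(composite(frame, depth, virtual_set))
# → [['A', 'A', '#'], ['A', '#', '#']]
```

A real pipeline would of course work on full-resolution frames with soft matte edges and temporal filtering, but the core per-pixel decision really is this simple, which is why good depth data removes the need for a green backdrop.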
The ability to do this in any location without a truck full of computers will be a game-changer. Want to film in NYC during the winter when you are in France in the middle of summer? No problem! Nvidia has already shown demonstrations of converting video taken on a summer's day into either a rainy one or a winter one.
This type of advancement will be quite democratising as far as filmmaking goes, because it means that grand and exotic locations are no longer the preserve of having a big budget. This will take a while before it can be done in realtime and be convincing, but it will happen.
Lighting in post is something that used to be a joke, and, well, it still is. However truly high resolution depth mapping opens up a huge amount of potential, not only for re-lighting in post, but also for re-lighting in realtime, virtually. Depth mapping means that you have a 3D representation of the environment, and if it is performed to a high enough resolution changing the lighting in camera becomes a very real possibility.
Why would we want to do this? Well, again, not everyone has the budget for huge HMIs to throw shafts of light through windows. Waiting for that perfect sunset and golden hour costs time and money, particularly in the UK where what might have been a fabulous sunset can quickly become a collection of grey cloud.
Combine this with virtual or augmented sets and locations and you could be in full control of everything at all times. Fanciful thinking? I don't think so. As I write this there are greater minds than mine actually working on making this sort of stuff a reality.
Where we're going, we don't need cameras
I've touched on light field technology in articles before, but the biggest change of all will be when the idea of the camera as we know it falls into obscurity. Having some form of high resolution 360 volumetric capture could very well turn things completely on their heads.
Imagine that rather than turning up to a location, setting up your lighting, and then setting up your camera composition, you instead set up a few volumetric capture devices. Composition, camera movement, and lighting would then be completely taken care of in post.
We have to be careful of viewing developments like this through the eyes of our current world, because presently such capture not only uses up a huge amount of resources, but it's tricky to deal with all that data. But as with all things like this, all it takes is for one company to think of a fast and convenient way to do it and then everything changes.
Ah yes, the A word: authenticity. And it is most certainly a potential concern. What about the skill of the performance? What about the feel of a real location? These are all things that will get discussed as the technology becomes available. But there's one constant that determines whether this tech gets used or not, and that's money.
If something can be done convincingly enough, and it saves money as well as being convenient to use, it will get used. Top tier actors may still want to give real performances rather than rely on a virtual body double with their face pasted on the front, but for struggling actors it could be the difference between putting food on the table or not.
The other issue is that another major driving force behind developments like this is flexibility: the ability to do things much more easily, or to do things that simply are not possible right now. But with flexibility and the ability to manipulate things comes a real threat to authenticity. And it's not just about the idea of authenticity in a literal sense, either. It's also about trust in authenticity on the part of the audience.
We are already heading in the direction of distrust of what we see. If data like this becomes so easy to manipulate that it can be done on a phone, we might quickly reach a point where it is literally impossible to trust what we see.
An example might be a politician saying something. If volumetric data of them were leaked from an interview given in one place, they could easily be reframed in an entirely new location, saying something completely different. Metadata like GPS information becomes irrelevant in such a scenario, so perhaps we would need some sort of security tag system.
When giving an interview or appearing on camera, each person in that 'scene' would have to sign off on their own personalised security tag. Once that footage is taken it becomes locked, and cannot be manipulated unless each person appearing gives their digital permission. This metadata would be made publicly readable, so that if a video appeared on whatever future form of YouTube exists, viewers could check for an authenticated digital signature from those appearing. Of course such a thing would kill off the idea of fan made content, and it won't solve all the problems thrown up from the perspective of journalism, but the fact remains that we will need some way to authenticate things. Not just for the sake of those appearing, but also to keep the trust of those watching.
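In code, a tag system of that sort might look something like the following toy Python sketch. I'm using HMAC purely as a stand-in for the public-key signatures a real scheme would need (a real system would let viewers verify without anyone's secret key changing hands), and all the names and keys are invented:

```python
import hashlib
import hmac

def sign_clip(clip_bytes, participant_keys):
    """Each person appearing 'signs' the clip; the tags travel as public metadata.
    (HMAC stands in for the public-key signatures a real scheme would use.)"""
    digest = hashlib.sha256(clip_bytes).digest()
    return {name: hmac.new(key, digest, hashlib.sha256).hexdigest()
            for name, key in participant_keys.items()}

def verify_clip(clip_bytes, tags, participant_keys):
    """A viewer re-derives the tags; any mismatch means the clip was altered."""
    expected = sign_clip(clip_bytes, participant_keys)
    return all(hmac.compare_digest(tags[n], expected[n]) for n in expected)

keys = {"interviewee": b"secret-key-1", "interviewer": b"secret-key-2"}
clip = b"raw volumetric capture data..."
tags = sign_clip(clip, keys)

print(verify_clip(clip, tags, keys))                 # True: untouched footage
print(verify_clip(clip + b"tampered", tags, keys))   # False: clip was altered
```

The point of the sketch is the shape of the scheme, not the crypto: everyone appearing binds their identity to the exact bytes captured, and any later manipulation invalidates every tag at once.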
Blockchain is, of course, one of the primary avenues currently being investigated to restore trust in a world of deepfakery, although no single solution has yet been found.
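At its simplest, the blockchain idea boils down to a tamper-evident chain of hashes. A minimal Python sketch, with invented record strings, shows why altering one entry breaks every link after it:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record linked to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps({"record": record, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def chain_is_valid(chain):
    """Re-hash every block; a tampered record or broken link fails the check."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps({"record": block["record"], "prev": block["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if recomputed != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = []
add_block(ledger, "clip 001 registered by studio A")
add_block(ledger, "clip 001 licensed to broadcaster B")
print(chain_is_valid(ledger))        # True: ledger untouched
ledger[0]["record"] = "forged entry"
print(chain_is_valid(ledger))        # False: tampering is detectable
```

A real system adds distributed consensus on top so no single party controls the ledger, but the tamper-evidence that makes it attractive for provenance is just this hash-linking.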
Regardless of just how far things go, there's no use pretending that filming in the future won't be completely different to how it is now. Vast changes will happen that some will feel affronted by. But there's no getting away from the fact that in some form or another this type of technology will be transformative. And if you don't think it will happen, I'm afraid to inform you that it already is in its early stages.