You can get wonderful results with modern image processing. But never forget that every change you make to the original material actually damages it!
If you've ever been to a lecture on colour grading, you'll be aware that there's an awful lot you need to get right if you're going to not only achieve the best possible picture, but also preserve the information in the original material.
Of course it's tempting to think of processes like colour grading as improving the quality of a video. When it's done well, it certainly looks like that. Take some log (or "cine mode") footage and you can make it look vastly better by grading it, although strictly, when you grade log footage, your starting point is making it suitable for the type of display you're showing it on.
Grading moving images to look teal and orange, like an action movie, is relatively easy. Grading footage to look like untouched reality (which just happened to have been photographed in perfect light with perfect consistency), without making it look excessively affected, is very difficult.
But what you're absolutely not doing is adding information - except in the very abstract sense of perhaps achieving the director's artistic intent.
If this all sounds very abstract, let me be clear about what I'm saying here - and this may surprise you.
Sony Venice: a cutting-edge modern camera
It’s all destructive
When you process a signal, it's always destructive. Whatever you do, you're not adding information. Unless you have a perfectly efficient process - which you won't have - even applying a process that says "keep everything the same" will, except in very special cases, degrade the image. (Let's be careful here not to confuse information theory with the purpose of colour grading.)
What are these special cases? They're when you're in the digital domain and you transfer the exact numbers without any change. But while this is the normal state of affairs when you move digital media around, the moment you open up the media and process it in any way, you will be damaging the content, even if you think you're enhancing it.
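You don't have to take my word for it. Here's a quick, purely illustrative Python sketch (the 2.2 gamma value is my assumption, not any particular standard): push an 8-bit ramp through a gamma encode and its exact mathematical inverse - a round trip that "keeps everything the same" - and some code values still come back changed, because each step re-quantises to 8 bits.

```python
import numpy as np

# Illustrative only: the 2.2 gamma is an assumption, not a specific standard.
ramp = np.arange(256, dtype=np.uint8)

# A "do nothing" round trip: gamma encode, then the exact inverse,
# re-quantising to 8 bits at each step.
encoded = np.round(255 * (ramp / 255) ** (1 / 2.2)).astype(np.uint8)
decoded = np.round(255 * (encoded / 255) ** 2.2).astype(np.uint8)

changed = int(np.count_nonzero(decoded != ramp))
print(changed, "of 256 code values no longer match the original")
```

The endpoints survive, but plenty of values in between don't - and nothing in the pipeline can tell you which ones were "right".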
An artificial exception
Another special case is AI processing. AI can make assumptions based on learning, and it can, remarkably, seem to improve an image. But when that happens, something else has to give, and it's what I will call, for now, "authenticity". That's a big subject. We'll come back to it later, in another article.
Being told that virtually any process of "improving the look" of an image actually damages it can be hard to swallow. So let's look at a basic example.
Let's imagine that you've shot some video in a room that's too dark. What do you do? You apply gain. You boost the signal. Unfortunately, you don't get to boost only the elements you want to see while leaving the parts you don't want alone. You don't get that choice. You have to boost it all. And that means that any noise in the picture gets boosted as well. You might even find that noise you hadn't seen before is now visible. The end result is that the picture might be better exposed, but you still can't use it because of the amount of noise. Boosting gain doesn't add information to the image. What's missing is the information needed to tell you what's noise and what isn't. If you'd shone enough light on the scene in the first place, there would have been more information in the image and you wouldn't have needed to apply the processing at all.
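A toy numerical sketch (the numbers are invented purely for illustration) makes the point: apply a 4x gain to a noisy, underexposed patch and the signal-to-noise ratio doesn't move at all.

```python
import numpy as np

# Invented numbers, purely for illustration.
rng = np.random.default_rng(0)
signal = np.full(10_000, 20.0)              # a flat, underexposed patch
noise = rng.normal(0.0, 2.0, signal.size)   # sensor noise
dark = signal + noise

bright = dark * 4.0                         # "fixing" the exposure with gain

snr_before = dark.mean() / dark.std()
snr_after = bright.mean() / bright.std()
print(round(snr_before, 6), round(snr_after, 6))  # identical: gain added no information
```

The picture gets brighter, the noise gets brighter by exactly the same factor, and the ratio between them - the information you actually care about - is unchanged.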
So far, so obvious
What about focus? If you have a conventional image (ie a bitmap), there isn't much you can do about focus. You can apply sharpening, and that can be effective, but it won't rescue an image that's grossly blurry.
The thing about sharpening
But how does sharpening work? Probably not the way you'd hope it might.
Sharpening, at its most basic, works just like a tone control on a hi-fi amplifier: it boosts the high frequencies. If you're not used to thinking of video in terms of frequency, here's a good example to get you in the right frame of mind.
Most sharpening tools - Unsharp Mask is an example - are smarter than this. Very basic sharpening only survives in the standard tools you get with everything for historical compatibility reasons. Any serious toolset will have much better algorithms. But that doesn't change the basic premise.
Imagine a picture of some vertical black lines. As they move closer together, that's effectively a higher frequency. And indeed, if you were to move steadily over the lines with a single-point light sensor, and if you connected the light sensor's output to a loudspeaker (forgive me here for the gross oversimplification!), you'd hear a tone. Moving faster over the lines or moving the lines closer together will raise the frequency of the tone. Moving them apart or slowing down your scanning rate will lower the frequency.
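In case it helps, here's the arithmetic of that thought experiment as a trivial sketch (the function and numbers are mine, purely for illustration): the tone's frequency is just scan speed divided by line spacing.

```python
# Hypothetical numbers: frequency is scan speed divided by line pitch,
# i.e. one light/dark cycle per line spacing travelled.
def tone_hz(scan_speed_mm_s: float, line_pitch_mm: float) -> float:
    return scan_speed_mm_s / line_pitch_mm

print(tone_hz(100.0, 0.5))  # 200.0 - our baseline tone in Hz
print(tone_hz(200.0, 0.5))  # 400.0 - scan faster, the tone rises
print(tone_hz(100.0, 1.0))  # 100.0 - space the lines out, the tone falls
```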
Images have frequencies too
There are frequencies in ordinary objects and scenes, too. Generally, sharp boundaries represent high frequencies. Smooth transitions produce lower frequencies.
So if you boost the high frequencies, it can make an image look sharper. But what this is actually doing is accentuating the edges. It's not adding information. It's certainly not bringing out-of-focus objects back into focus. The best practical proof of this is to sharpen too much, and you'll quickly lose any sense of naturalness. Not only will the edges look unreal, but you'll be boosting elements in the image that aren't meant to be sharp at all.
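To make that concrete, here's a minimal, hedged sketch of the classic unsharp mask recipe - sharpened = original + amount * (original - blurred) - applied to a single row of pixels. The 3-tap box blur and the names are my assumptions, an illustrative stand-in for whatever blur a real tool uses. Notice the overshoot and undershoot either side of the edge: pixel values that were never in the scene.

```python
import numpy as np

# An illustrative unsharp mask on one row of pixels; the 3-tap box blur
# and the names are assumptions, not any particular tool's code.
def unsharp(row: np.ndarray, amount: float = 1.0) -> np.ndarray:
    blurred = np.convolve(row, np.ones(3) / 3, mode="same")
    return row + amount * (row - blurred)

edge = np.array([10.0, 10.0, 10.0, 200.0, 200.0, 200.0])
out = unsharp(edge)
print(out)  # overshoot above 200 and undershoot below 10 around the edge
```

The edge looks "crisper" only because the boundary has been exaggerated. No detail has been recovered, and push the amount higher and those halos become plainly visible.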
Wherever you look, you'll find image processing can make a picture look better, but can be pushed to make it look worse too. And each example is actually taking information away.
And this effect isn't reversible. By the time you've processed the "signal" with sharpening, you've already damaged it. You can try "unsharpening", by reducing or attenuating the high frequencies, but this won't bring it back to where it was before. In fact, what will happen is that you'll be taking the already boosted (ie sharpened) image, and then blurring those edges. So the very artefacts that you created by boosting the high frequencies will themselves be blurred and spread all over the place.
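A small illustrative sketch shows this too (a toy box blur and an unsharp-style sharpen of my own, not any real tool's code): sharpen an edge, then try to undo it with a blur, and the result lands nowhere near the original.

```python
import numpy as np

# A toy box blur and an unsharp-style sharpen, purely for illustration.
def blur(row: np.ndarray) -> np.ndarray:
    return np.convolve(row, np.ones(3) / 3, mode="same")

def sharpen(row: np.ndarray, amount: float = 1.0) -> np.ndarray:
    return row + amount * (row - blur(row))

edge = np.array([10.0, 10.0, 10.0, 200.0, 200.0, 200.0])
round_trip = blur(sharpen(edge))        # sharpen, then try to "unsharpen"
print(np.abs(round_trip - edge).max())  # a long way from zero
```

The blur doesn't remove the overshoots the sharpening created; it smears them into neighbouring pixels, so the round trip ends up further from the original than either step alone.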
And it's not just "effect" processing that can damage an image. It's almost any kind of change to the image signal. This even - or especially - happens when you move from one colour space to another, or from one image format into a second one. The only time you almost completely avoid this is when you move from one space into a bigger one. For example, when you record in 8 bit and move into a 10 bit space. Not only will you avoid damage to your original material, but any future changes will occur at the higher, kinder, resolution of 10 bits.
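The 8-bit-into-10-bit case is easy to sketch (the shift-by-two convention here is a common one, assumed for illustration - check what your own pipeline actually does): every original value survives the move exactly.

```python
# Moving 8-bit values into a 10-bit container; the shift-by-two convention
# is a common one, assumed here for illustration.
eight_bit = list(range(256))
ten_bit = [v << 2 for v in eight_bit]   # 0..1020, now in a 10-bit range
recovered = [v >> 2 for v in ten_bit]   # and back, losslessly
print(recovered == eight_bit)  # True: nothing was destroyed
```

This is why moving into a bigger space is the benign exception: the mapping is exact and reversible, and any subsequent rounding happens at the finer 10-bit scale.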
The raw effect
Raw video, properly managed, is an incredible help here, because it preserves significantly more of the information in the original image. With new and easy-to-use raw containers like Apple ProRes Raw and Blackmagic Raw, these are happy days indeed.
An intelligent future
As we move forward we'll find AI techniques swooping in and "correcting" all kinds of errors. This is perfectly OK. It's what we do with our own perception, except that I like to think that our intelligence isn't artificial.
But there may be downsides. AI artefacts are different. Forget about the blocky, multi-coloured chess boards you get with typical compressors. In the future, instead of someone's face, you might get a teapot.