
Resolution: We discuss Pixel Power with RED's Graeme Nattress

6 minute read

Continuing our series of interviews with RED's Graeme Nattress, we cover the topic of camera resolution. Prepare to go to camera university!

See part one (Introduction) here

While it’s obvious that cameras have improved because of increased resolution, what’s not so clear is what the maximum ideal resolution should be. In the early days of digital cameras, the pictures were terrible compared to film. As the resolution improved, pictures got better.

Today, as we approach the end of the second decade of the 21st Century, cameras (still and video) are so good that nobody seriously talks about "going back to film" as a means of improving pictures.

But this doesn’t mean that there’s no longer any point in increasing resolution.

I spent two days in Toronto with RED’s “Problem Solver” (that’s his job title!) Graeme Nattress. For ten years, RED has been driving resolution upwards, and now has a full range of cameras that shoot above 4K. As the person in charge of the mathematics that underpin all the signal processing in RED’s cameras, Graeme’s thoughts on the subject are uniquely insightful.

DS: Most people understand that with increased resolution, there are more pixels, so there’s more detail. Is this the whole picture, or is there more to it than that?

GN: There is certainly more to it than that. Yes, resolution is the headline figure, but it’s not the be-all and end-all. Let’s deal with a few of the “spin-off” advantages of increased resolution before diving into the more technical stuff.

It gives you more flexibility. As long as the majority of people are not watching in 8K (8K cameras have been available for a while, but we’re at the front end of the adoption curve), we can use the extra “headroom” that the higher resolution gives us for better stabilisation and the ability to recompose a shot in post production.
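As a rough back-of-envelope illustration of that headroom, here's a quick Python sketch; the frame widths are just illustrative examples, not a spec of any particular camera:

```python
# Rough reframing headroom: capture width vs delivery width, in pixels.
capture_w, delivery_w = 8192, 4096     # illustrative 8K acquisition, 4K delivery

punch_in = capture_w / delivery_w      # maximum punch-in before upscaling
spare = (capture_w - delivery_w) // 2  # slack per side at native framing

print(f"up to {punch_in:.1f}x punch-in, or +/-{spare} px of reframing/stabilisation travel")
```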

Now, as we move onto the more technical stuff, remember that we would never need to post-produce an image if it were perfect. But nothing is perfect. We know we will be working with a noisy signal no matter how good it seems. You can always trade resolution for lower noise, but you can't do it the other way.
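A minimal numpy sketch of that one-way trade: averaging blocks of noisy pixels while downsampling cuts the noise by the square root of the number of pixels averaged. The frame here is a simulated flat grey patch, not real camera data:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat grey patch with additive Gaussian "read noise" standing in for a frame.
signal, noise_std = 0.5, 0.05
hi_res = signal + rng.normal(0.0, noise_std, size=(2048, 2048))

def box_downsample(img, factor):
    """Average non-overlapping factor x factor blocks: a crude low-pass + decimate."""
    h, w = img.shape
    return (img[:h - h % factor, :w - w % factor]
            .reshape(h // factor, factor, -1, factor)
            .mean(axis=(1, 3)))

lo_res = box_downsample(hi_res, 4)  # e.g. 8K -> 2K averages 16 pixels per output pixel

print(f"full-res noise std:    {hi_res.std():.4f}")  # ~0.0500
print(f"downsampled noise std: {lo_res.std():.4f}")  # ~0.0125 = 0.05 / sqrt(16)
```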

Noise also perturbs the shape of edges - so it would affect any algorithms that deal with edges - even algorithms that don't have a spatial component. That means that "production headroom" (i.e. more resolution than strictly needed) is incredibly important. This is of course standard practice in the audio world, where CD delivery is 16-bit at 44.1 kHz, but production is at 48 kHz or 96 kHz or higher, and bit depths are 24-bit or higher.
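The audio analogy can be sketched numerically. Below, a quiet test tone is pushed through a few gain stages, re-quantising at each step; working at 24-bit and only delivering at 16-bit accumulates visibly less error than staying at 16-bit throughout. The gain values are invented purely for the demo:

```python
import numpy as np

def quantise(x, bits):
    """Round to a signed grid with the given bit depth."""
    scale = 2 ** (bits - 1) - 1
    return np.round(x * scale) / scale

# A quiet 440 Hz test tone, then a few arbitrary gain stages as stand-ins
# for processing steps in a mix (all values invented for the demo).
t = np.linspace(0, 1, 48000, endpoint=False)
tone = 0.01 * np.sin(2 * np.pi * 440 * t)
gains = [0.25, 7.0, 0.6]

def process(x, working_bits):
    for g in gains:
        x = quantise(x * g, working_bits)  # re-quantise after every stage
    return quantise(x, 16)                 # both chains deliver at 16-bit

ideal = tone * np.prod(gains)
err_16 = np.abs(process(tone, 16) - ideal).max()
err_24 = np.abs(process(tone, 24) - ideal).max()
print(f"16-bit working chain, peak error: {err_16:.2e}")  # noticeably larger
print(f"24-bit working chain, peak error: {err_24:.2e}")  # about one 16-bit step
```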

The RED EPIC-W

DS: Edges are important in resolution aren’t they?

GN: Yes. It’s worth at this point thinking about real resolution and apparent resolution. Here’s an irony for you. When HD was introduced, make-up artists complained that it was making their jobs harder. SD hid a lot of things. But even with HD, the problem was still the lack of resolution. There was more detail with HD but not that much more. The “broadcast aesthetic” meant that the broadcasters applied edge enhancement (i.e. sharpening) in the cameras. THAT was what was making the make-up look bad. The sharpening was needed because although HD increased resolution, it was still soft in its unsharpened form.

When 4K came along, the sensor MTF (essentially its ability to capture contrast in detail) meant that there was no need to add excessive edge enhancement. There was enhancement with VHS as well, but there wasn't enough resolution with that format for the sharpening to "attack" the details.

With 4K and above, there’s enough resolution to respond naturally to images without the brutality of the gross edge enhancement typically used in broadcast oriented gear.
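To make "edge enhancement" concrete, here is a sketch of the classic unsharp-mask operation that broadcast-style sharpening is built on. This is a generic illustration, not RED's or any broadcaster's actual processing; note how the result overshoots the original range, producing the halos that gave sharpened HD its harsh look:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=2.0, amount=1.5):
    """Classic edge enhancement: add back the difference between
    the image and a blurred copy, exaggerating edges."""
    blurred = gaussian_filter(img, sigma=radius)
    return img + amount * (img - blurred)

# A soft edge, like unsharpened HD: a gentle ramp from dark to light.
x = np.linspace(0, 1, 64)
edge = np.tile(1 / (1 + np.exp(-12 * (x - 0.5))), (64, 1))  # sigmoid edge

sharpened = unsharp_mask(edge)

# Overshoot/undershoot past the original range is the visible "halo".
print(f"original range:  [{edge.min():.2f}, {edge.max():.2f}]")
print(f"sharpened range: [{sharpened.min():.2f}, {sharpened.max():.2f}]")
```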

DS: What about noise vs resolution?

GN: Well, of course, noise can happen at all scales. For dither to work, it has to be at the right scale or it can look obnoxious. It works much better at higher resolution. At a high enough resolution it becomes invisible.

Dither is when a special kind of noise is added to a signal to break up any uniformity in the digital steps that would otherwise lead to us seeing bands or posterization in the image. It leads us to perceive the signal as a continuous tone rather than as discrete steps. A small amount of noise in the signal will naturally dither an image; without it, we'd see banding in skies on 8-bit video.

One of the paradoxes is that noise is perceived as the enemy in making a good-looking image, but noise (of the right kind and amount) can be very helpful. When resolution is low, the noise takes on a very distracting appearance. For noise to function well and help the image, it has to be fine enough in texture and without visible pattern.
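A small numpy sketch of exactly that effect: quantising a smooth ramp to a handful of levels produces hard bands, while adding half-a-step of random noise before quantising trades the bands for fine grain whose local average tracks the true gradient. The 3-bit quantiser is exaggerated to make the effect obvious:

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth horizontal gradient, like a sky, quantised to very few levels
# so the banding is easy to count.
gradient = np.tile(np.linspace(0.0, 1.0, 1024), (64, 1))
levels = 8  # deliberately coarse (3-bit) quantiser

def quantise(img):
    return np.round(img * (levels - 1)) / (levels - 1)

banded = quantise(gradient)  # hard steps: 8 flat, visible bands
dithered = quantise(gradient + rng.uniform(-0.5, 0.5, gradient.shape) / (levels - 1))

print("distinct values, no dither:", len(np.unique(banded)))  # 8
# Averaged down a column, the dithered image recovers the smooth ramp:
print("max error of dithered column means:",
      np.abs(dithered.mean(axis=0) - gradient[0]).max())  # small
```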

Big sensor advantage: the RED MONSTRO VV shows what it's capable of

DS: Can you relate the resolving power of a lens to the resolving power of a sensor?

GN: Yes. With lenses, MTF isn't just a measure of detail: it's a measure of contrast too - micro contrast, to be specific. So you have to make sure that the MTF of the sensor isn't a bottleneck. You have to let the micro contrast come through.

Here’s the thing: High resolution lets you see more of what a lens is delivering. All manufacturers are pushing beyond HD and 4K because there are so many advantages of high resolution.
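One way to see the "bottleneck" point is that the MTFs of the stages in an imaging chain multiply together, so the contrast reaching the recording can never exceed the weakest stage. A toy sketch with made-up Gaussian-shaped MTF curves (illustrative numbers, not measurements of any real lens or sensor):

```python
import numpy as np

# System MTF is the product of the stage MTFs: the weakest link caps
# the micro contrast that survives into the recorded image.
freq = np.linspace(0, 100, 11)  # spatial frequency, line pairs / mm

def gaussian_mtf(f, f50):
    """Toy MTF that falls to 50% contrast at f50 lp/mm (invented shape)."""
    return 0.5 ** ((f / f50) ** 2)

mtf_lens = gaussian_mtf(freq, 80)
mtf_sensor_hd = gaussian_mtf(freq, 40)   # coarse pixel pitch: the bottleneck
mtf_sensor_8k = gaussian_mtf(freq, 120)  # fine pixel pitch: lens detail survives

print("f(lp/mm)  lens   lens*HD  lens*8K")
for f, l, hd, k8 in zip(freq, mtf_lens,
                        mtf_lens * mtf_sensor_hd,
                        mtf_lens * mtf_sensor_8k):
    print(f"{f:7.0f}  {l:.3f}   {hd:.3f}    {k8:.3f}")
```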

DS: And what about aliasing?

GN: We can’t eliminate aliasing in the visual domain. We can’t make filters with negative lobes (we only have photons - not “darkons”!). So the only way we can push aliasing far enough out of the way is with high resolution.

Let me explain in a bit more detail.

To eliminate aliasing when we sample, we need to low pass filter the signal so that it does not contain frequencies above those which the sample rate can handle. In the audio world, we can make sharp low-pass filters which are very effective at removing those unwanted frequencies while protecting and passing through those we wish to hear. In the camera world, we’re not as lucky as there is no optical equivalent to those electronic low pass filters.

We can make optical low pass filters, but they’re very “slow”, and although they can remove the unwanted frequencies that would cause aliasing, they can only do so by removing some of the wanted frequencies and detail too. In practice, we achieve a compromise whereby we eliminate enough of the unwanted frequencies that in normal shooting we won’t see aliasing artifacts, but it’s still possible to provoke aliasing with a sufficiently detailed chart shot in precise focus with a sharp lens. To further reduce the negative aspects of that compromise, moving to higher and higher resolutions pushes any potential for aliasing artifacts further and further away from typical camera use.
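A one-dimensional sketch of why this matters: once detail above the Nyquist limit reaches the sampler, it is literally indistinguishable from a lower "alias" frequency, and no amount of processing afterwards can tell them apart. The rates here are arbitrary:

```python
import numpy as np

fs = 100.0         # sample rate, Hz (stands in for the sensor's pixel pitch)
nyquist = fs / 2   # 50 Hz: the highest frequency this rate can represent

t = np.arange(0, 1, 1 / fs)
f_in = 70.0        # detail finer than the sampler can handle

sampled = np.sin(2 * np.pi * f_in * t)
alias = np.sin(2 * np.pi * (fs - f_in) * t)  # a 30 Hz tone

# The samples of the 70 Hz tone match a phase-flipped 30 Hz tone exactly:
print(np.allclose(sampled, -alias))  # True
```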

DS: As an “aside”, is it better to have a one-to-one relationship between the pixels on the sensor and the output of the camera?

GN: If the resolutions are only slightly different, you’ve got to have a very good downsampling algorithm. You need very accurate filtering. I hesitate to put a figure on it because there are so many factors that can affect it, but as long as you accept that it’s an unscientific and unproven guess, I’d say that anything more than 15% greater than output resolution starts to provide benefits through downsampling, and a 100% increase provides very real and obvious improvements. When you get to 200% (shooting 8K for 2K delivery, for example), it’s a real eye-opener as to what oversampling can do!

But the important point is that in almost all cases it’s better to downsample.
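And a sketch of why the quality of the downsampling filter matters so much: naive decimation folds fine detail into bogus coarse detail, while a decent low-pass filter removes it before decimation. The windowed-sinc design below is a generic textbook FIR, not RED's resampler:

```python
import numpy as np

# Fine stripes at 0.45 cycles/pixel: representable at capture resolution, but
# beyond the Nyquist limit of a 4:1 decimated grid, so they must be filtered out.
n = 4096
stripes = np.sin(2 * np.pi * 0.45 * np.arange(n))

def lowpass_sinc(cutoff, taps=101):
    """Windowed-sinc FIR low-pass (cutoff in cycles/sample)."""
    m = np.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * m) * np.hamming(taps)
    return h / h.sum()

naive = stripes[::4]  # no filtering: the stripes alias to a lower frequency
filtered = np.convolve(stripes, lowpass_sinc(0.5 / 4), mode="same")[::4]

core = slice(30, -30)  # ignore edge transients from the convolution
print(f"alias amplitude, naive decimation:   {np.abs(naive[core]).max():.3f}")    # ~0.95
print(f"alias amplitude, sinc-filtered path: {np.abs(filtered[core]).max():.3f}") # ~0.00
```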

Bayer resolution loss is less important than optical losses due to the Optical Low Pass Filter (OLPF). You can test this for yourself - though I don’t recommend people actually remove non-user-serviceable parts from a camera! Shoot a test chart with and without the low pass element, demosaic both images, and you can see where the losses are - and it’s mainly in the OLPF. You can’t even call it a “loss”, because it’s necessary whether the camera is a single mono sensor, a 3-chip design, a Foveon or one of the many other CFA patterns.

The whole thing with debayering is that using higher resolutions pushes any potential problems out of the areas where they matter. Bayer patterns are not so good at low resolutions but become proportionately better as the resolution increases. This was first noticed with DSLRs, which had big sensors: images from them looked much better than those from HD camcorders. Ultimately, Bayer makes a lot of sense for the latest high-resolution sensors. It was the key thing with the RED ONE: that was the point where single sensors took over from 3-chip cameras. Bayer losses have faded away with 4K and 8K cameras. You only have to look at how the market has shifted from the SD days - when one chip was “consumer” and bad, and 3-chip was professional - to now, when all the top cameras making the best images are single chip. It’s the increase in resolution that has shifted the balance of compromises in favour of a single-chip solution.
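For the curious, a toy bilinear demosaic shows the basic idea: every photosite records one colour, and the two missing colours at each site are interpolated from neighbours. This simple version is nothing like the sophisticated, edge-aware algorithms a real camera uses:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic (textbook version)."""
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = ((y % 2 == 0) & (x % 2 == 0)).astype(float)
    b_mask = ((y % 2 == 1) & (x % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask

    # Classic bilinear interpolation kernels for sparse channels.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    def interp(mask, kernel):
        return convolve2d(mosaic * mask, kernel, mode="same")

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])

# A flat mid-grey scene: every photosite records 0.5 regardless of its colour,
# so the reconstruction should be exact away from the borders.
rgb = demosaic_bilinear(np.full((64, 64), 0.5))
print(np.allclose(rgb[2:-2, 2:-2], 0.5))  # True
```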

All these cameras were tied to a delivery resolution up to a point in time, partly because of rigid broadcast standards. Standards conversion was hard and expensive, so it was vital that a camera output the precise format needed. Divorcing acquisition size from delivery size was an important milestone… It took us back to film, which, being analogue, could be telecined into whatever resolution and format was desired. And of course, that is what allows us to re-enjoy classic television programmes shot on film, where the negs can be re-scanned and restored to look far better than they ever did originally.

We now have a multitude of display devices. People started watching on PCs, and that broke the link between broadcast standards and cameras. You can’t scale interlaced images very well (and flat panel displays are inherently progressive), so you have to output at the display’s format. That’s why the flexible digital cinema camera exists: a camera that simply captures the best images it can, divorced from any single delivery format. All of these things had to happen at the same time.
