
Going Digital: How to preserve the essence of analogue in a digital world.

6 minute read

In this exclusive series of articles, we interview Graeme Nattress from RED about the subtle yet fascinating science of digital image creation. Read on for a masterclass in how the magic is woven.

Read part 1 and part 2 of this series.

We live in an analogue world. Things that we find beautiful exist resolutely in the analogue domain. A rose; a sunset; a pleasing face: it's hard to understand intuitively how the essence of these images can be recorded and reproduced as a string of numbers with discrete, rigid values.

And yet we do record, process and reproduce the natural world digitally, at a very high level. Most of us know the basic theory: if you sample frequently enough and with sufficient depth per sample, there's nothing lost "in the gaps" because there's nothing there significant to lose. In other words: sample well and you'll capture everything you need for faithful and accurate reproduction.

Nobody seriously disputes this. This is not like the discussion about vinyl vs CDs. This is about how far you can go to record images at the very sharpest end of current technology. And the answer is very far indeed.

I wanted to ask Graeme Nattress, whose job title is “Problem Solver” at RED Digital Cinema, what he thought about the process of moving from analogue to digital. What are the issues, and what are the solutions?

(“Problem Solver”? That’s probably a very apt title for Graeme. But it doesn’t shed much light on the sort of problems that he solves. In any other company, his main job title would probably be “Chief Image Scientist”).

David Shapton, Editor-in-Chief, RedShark Media (“DS”): The idea of sampling - that is, taking a measurement at fixed intervals to capture a scene - seems to be unsympathetic to the notion of a smooth and continuous analogue world.

Graeme Nattress, RED (“GN”): We have always sampled in the temporal (time) domain because we have a shutter speed, and discrete time samples in the sense of one frame following another. We take a fixed number of pictures per second to make a video. Shutter speed is sampling: analogue sampling. We've always done it. It's intrinsic to making a film. So it's less of a conceptual leap to imagine that we also sample in the spatial domain. What I mean by this is that on the one hand you have frames - that's your temporal sampling - and then on the other, you have to sample what's in a frame, vertically and horizontally. Those spatial samples are pixels, and each has a number representing a colour. The more bits per pixel, the more colours we can record and represent. And of course, not just colours, but dynamic range too.

With enough frames per second, a large enough number of pixels, and enough depth per pixel, we can very accurately reproduce analogue images.
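
To put rough numbers on that, here's a small sketch (our illustration, not anything from RED's pipeline) of how those three choices - frame rate, pixel count and bits per pixel - combine into code values per pixel and an uncompressed data rate:

```python
def sampling_summary(width, height, fps, bits_per_pixel):
    """Code values per pixel and an uncompressed single-channel data rate,
    purely for illustration."""
    code_values = 2 ** bits_per_pixel        # discrete levels per sample
    bits_per_second = width * height * bits_per_pixel * fps
    return code_values, bits_per_second / 8 / 1e6   # megabytes per second

# Hypothetical 8K-wide raw stream at 24 fps with 16-bit samples:
levels, mb_per_s = sampling_summary(8192, 4320, 24, 16)
print(f"{levels} code values per pixel, roughly {mb_per_s:.0f} MB/s uncompressed")
```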

DS: And we do indeed get artifacts that are familiar to us in the digital world back in the analogue domain too, don't we? The reverse movement on wagon wheels, for example, is straightforward temporal aliasing? And moire: sometimes you see a fence through another fence. Presumably in digital one of those fences is replaced by the grid-like structure of the sensor?

GN: The reverse wagon wheel is the classic example of temporal aliasing, but it also manifests as “stutter” on pans. What the ASC data tables on panning speed are telling you is what speed to use to avoid temporal aliasing on movement.

When I was very young, I was fascinated by the aliasing patterns you’d get on motorway bridges, where the metal grille on one side of the bridge would interfere with the metal grille on the far side. As the car drove toward the bridge, the size and shape of the moire pattern would grow and change. Of course I didn’t know what the effect was, or what it was called, but it caught my interest.

Having two repeating patterns overlaid certainly makes aliasing easy to see as a moire pattern, but it’s not a necessary condition for aliasing to occur. Aliasing can occur in any sampled system (and although it’s most obvious with uniform sampling, randomizing the samples doesn’t eliminate aliasing, it only disguises it), and you can see that when you put your eye to a screen door. The entire scene will be aliased, but you’ll see it most clearly on sharp edges in the scene. If you have another screen door you’ll see the effect of aliasing as a moire pattern.

In digital video we have the grid-like sampling structure of the sensor, and that’s why we have to concern ourselves with aliasing.
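
The folding that produces a backwards wagon wheel or a moire pattern is easy to demonstrate with a one-dimensional sketch (ours, not camera code): sample a frequency above the Nyquist limit and it becomes indistinguishable from a lower one.

```python
import numpy as np

# Sample a 55 Hz sine at 60 samples per second. Nyquist is 30 Hz, so the
# 55 Hz signal aliases and folds down to look like a 5 Hz signal.
fs = 60.0                              # sample rate, e.g. frames per second
t = np.arange(0, 1, 1 / fs)            # one second of sample instants
high = np.sin(2 * np.pi * 55 * t)      # true signal, above Nyquist
low = np.sin(2 * np.pi * 5 * t)        # the alias it folds down to

# At these sample instants the two are numerically mirror images:
print(np.allclose(high, -low))         # True: 55 Hz is indistinguishable from -5 Hz
```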

[Image: RED WEAPON with MONSTRO 8K VV]

Producing a sensor is one thing. Getting it to produce an artistically pleasing image is quite another, something RED spends a lot of time achieving.

DS: I think it's important to remember that sensors are analogue devices. They're there to create a "signal" that represents the image, but the output from each photosite has to be fed into an analogue-to-digital converter before it can be worked on digitally. What aspects of a sensor are important in the analogue domain?

GN: Dynamic range is important in the analogue domain. When you look at the different sensors that RED has produced over the years, the dynamic range has generally increased: sometimes in the highlights, and sometimes in the shadows. In the shadows, it's a question of how dark the detail can be while you can still see it, and also of how the noise characteristic changes as you descend into the darkness. There's a textural quality to the noise. Look at the new MONSTRO and HELIUM sensors. MONSTRO is seeing deeper into the dark areas and the mid-tones are less noisy, so rather than just looking at noise in the dark regions, you need to consider noise across the whole luminance range.

Our perception of noise is like integration under a curve, if you understand mathematics. Noise exists at every brightness level, and when working with raw data and no pre-determined rendering intent we need to be concerned with noise at all of those levels, because we’ve not yet determined which levels in the resulting image will be important, or at what contrast they’ll be displayed.

This is where people have problems comparing sensors and cameras from different generations: they're not comparing like with like. And if the noise performance at the end points improves but the middle doesn't, then you might not actually notice the improved dynamic range.

This is often an issue with raw development (raw processing, in other words): if you don't present the dynamic range to the user in a way that they can actually see it (even though it might have increased), it’s not visible to them. You have to make it visible. HDR is going to help with this because a lot gets hidden in the rolloff in SDR (Standard Dynamic Range), but without the need for as much rolloff in HDR, you have much more artistic freedom (and you're less likely to be bothered by noise). Noise and HDR have a complex relationship. With SDR, any highlight noise is lost as the highlight rolloff pulls all the contrast out of the highlights while they get squished down to fit into the limited SDR dynamic range. Therefore, the lack of contrast in the highlights will make any noise there practically invisible.

With HDR, the whole point is that we can have contrast in the highlights by avoiding the need to roll them off excessively. Highlights will still need rolling off, as it’s normal for cameras to produce highlight information beyond what a typical HDR display can show. However, our visual perception is such that it’s still hard for us to see noise in the highlights on HDR, even though it doesn’t get the extra help it would from the dynamic range compression used for SDR.
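
A toy example (ours, using an invented soft-knee curve rather than any real camera's tone mapping) makes the point about SDR: the slope of the rolloff is close to 1.0 in the mid-tones but collapses in the highlights, so the same noise amplitude is rendered with almost no contrast up there.

```python
import numpy as np

def sdr_rolloff(x, knee=0.7):
    """A made-up soft-knee curve: linear below the knee, compressed above."""
    compressed = knee + (1 - knee) * (1 - np.exp(-(x - knee) / (1 - knee)))
    return np.where(x <= knee, x, compressed)

scene = np.array([0.2, 0.8, 2.0, 4.0])   # scene-linear values; > 1.0 are highlights
noise = 0.05                             # the same noise amplitude everywhere

# Local slope (contrast) the rendering gives each level: near 1.0 in the
# mid-tones, tiny in the highlights, so highlight noise is barely rendered.
slope = (sdr_rolloff(scene + noise) - sdr_rolloff(scene - noise)) / (2 * noise)
print(slope)
```

With an HDR rendering the slope stays higher much further up the range, which is why noise that SDR hides can in principle become visible again.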

Ultimately, the area of the sensor is important. Noise performance is integrated across area.

It's traditionally said that more pixels means worse pixels, but with a big area the noise performance actually improves. Pixels are also improving all the time. You can’t just say that a smaller pixel is worse than a larger pixel. Each and every pixel design (and hence sensor design) has its own particular properties.
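
A back-of-the-envelope sketch (ours, ignoring per-pixel read noise, which is one place real designs differ) shows why sensor area matters more than pixel count alone: averaging a 2x2 block of small photosites recovers essentially the same signal-to-noise ratio as one large photosite covering the same area.

```python
import numpy as np

rng = np.random.default_rng(0)
signal_per_area = 1000.0   # photo-electrons falling on a given patch of sensor

# One large photosite collecting all the light: shot noise ~ sqrt(signal)
large = rng.poisson(signal_per_area, size=100_000)

# Four small photosites sharing the same light, summed back to one output value
small = rng.poisson(signal_per_area / 4, size=(100_000, 4)).sum(axis=1)

print("large pixel SNR:", large.mean() / large.std())
print("2x2 binned SNR :", small.mean() / small.std())   # essentially the same
```

Fill factor, read noise and the rest are where individual pixel designs pull ahead or fall behind, which is the point about each design having its own properties.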

[Image: RED WEAPON with MONSTRO 8K VV]

There's much more to dealing with noise than meets the eye - RED Weapon MONSTRO VV seeing into the dark

DS: What happens in low light: what is the cause of noise? (I.e. is it the scarcity of photons arriving on the image sensor? Are we actually seeing each photon as it arrives?)

GN: Several causes: photons, read-noise, fixed pattern noise, etc. Noise is a complex, multifaceted issue. It's not just a one-dimensional quantity. My feeling is that with noise it's like peeling an onion: strip away the biggest noise source and there will be another that has been hidden by it - but of course it will be lower in intensity. You do whatever you can to minimise noise and because there are different types you should always try to ensure that whatever noise remains is random. Just to clarify: it’s not just the amount of noise that matters, but the character of the noise that can make it visible. A greater amount of random noise is vastly preferable to a smaller amount of noise that has a character or pattern to it that can make it more distracting to the eye.
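
As a rough illustration of that onion, here's a simulation (our own, with made-up numbers rather than measurements from any RED sensor) of the two biggest layers, photon shot noise and read noise, showing how the dominant source changes with light level.

```python
import numpy as np

rng = np.random.default_rng(1)
read_noise_e = 3.0    # assumed read noise in electrons RMS

for photons in (5, 50, 5000):                    # dark, dim and bright pixels
    shot = rng.poisson(photons, 100_000)         # photon arrival is Poisson
    read = rng.normal(0, read_noise_e, 100_000)  # electronics noise, ~Gaussian
    total = shot + read
    print(f"{photons:>5} e-: total noise {total.std():6.1f}  "
          f"(shot {shot.std():5.1f}, read {read.std():4.1f})")
```

In the dark frames the read noise dominates; in the bright ones the shot noise does, so peeling away one source simply exposes the next.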

DS: Sensors are monochrome natively. But do they have different sensitivities to different wavelengths of light?

GN: They have a spectral sensitivity. Silicon sensors are generally blue-deficient and more and more efficient towards the red. And they're even more efficient into the near infrared, before dropping off.

I remember when I was testing Neutral Density filters (NDs). Typical NDs have no IR cut and thus can be thought of as IR pass filters. (Although today, filter manufacturers understand how IR can negatively affect the digital image and produce NDs with in-built IR cut.) I’d taken the OLPF off a RED ONE and thus removed all IR cut from the optical system. Holding the ND filter up to my eye, it was dark glass, just as we normally think of an ND. Then I moved the ND in front of the camera, and because the camera in this state had no IR cut, the ND looked to the camera like a piece of transparent glass.

Low level IR contamination is insidious. It bleaches the colour out of green. It's like a haze in the image. It's very important to block spurious IR, as you can imagine.
