
How can the ALEXA 35 achieve 17 stops of dynamic range?

Image: ARRI.

Arri's recent announcement of the ALEXA 35 raises a few questions, such as how 17 stops of dynamic range is possible. Phil Rhodes is your guide to the magic, and the maths!

Today’s article is brought to you by the number 17, a figure that’s become famous among camera people since Arri’s recent announcement of the ALEXA 35 and its reported 17-stop dynamic range.

Let’s be clear just how big a claim that is. To recap the basics, each stop represents a doubling of light intensity. So if we have a camera that can detect a given number of photons per photosite before clipping to white, and we decide that’s ten stops of dynamic range, adding one more stop doesn’t mean ten per cent more photons; it means one hundred per cent more photons. Add another stop, and that’s another hundred per cent on top of what we just had. Adding stops is hard.
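
To put numbers on that doubling, here’s a minimal sketch (illustrative figures only, nothing measured from any real camera) showing how quickly the ratio between the brightest and darkest detectable signals grows as stops are added:

```python
# Illustrative only: each added stop doubles the number of photons the
# brightest photosite must hold relative to the dimmest detectable signal.
for stops in range(10, 18):
    ratio = 2 ** stops
    print(f"{stops:2d} stops -> brightest:darkest ratio of {ratio:,}:1")
# 10 stops is 1,024:1; by 17 stops it's 131,072:1.
```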

The other problem is that no sensor can reliably detect one photon, because of random noise. The charge released by a photosite amounts to, at most, a few thousand electrons. By comparison, one amp of current implies 6.2×10¹⁸ electrons per second, and the tiny pulse of electricity coming out of a photosite is discharged in far, far less than a second. We might demand 4K at 60 frames per second, for a total of something like 720 million pixels per second. They’re spread over more than one set of output electronics, but they’re tiny signals that fly past incredibly quickly.
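
For a sense of the timescales involved, the arithmetic behind that pixel rate looks roughly like this (assuming, hypothetically, a readout of around 12 megapixels per frame, which is what gets you to the 720 million figure):

```python
# Rough arithmetic behind "something like 720 million pixels per second",
# assuming (hypothetically) a ~12-megapixel readout at 60 frames per second.
pixels_per_frame = 12_000_000
frames_per_second = 60

pixels_per_second = pixels_per_frame * frames_per_second
print(f"Pixels per second: {pixels_per_second:,}")          # 720,000,000

# If a single output had to carry all of that, each pixel would get:
seconds_per_pixel = 1 / pixels_per_second
print(f"Time per pixel: {seconds_per_pixel * 1e9:.2f} ns")  # ~1.39 ns

# In practice the readout is split across many parallel outputs, but the
# individual signals are still tiny and fly past incredibly quickly.
```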

ARRI ALEXA 35 still grab.
Whatever the maths, the ALEXA 35 produces lovely image quality.

So, let’s say a sensor can accommodate 10,000 electrons per photosite (most can handle more). It doesn’t have a dynamic range of 10,000 to one, because it can’t reliably detect a single electron. It can probably see a handful of electrons reliably (say, four), so its dynamic range might be 2,500 to one. Reducing these noise levels is a valid way of increasing both dynamic range and sensitivity, which is why a lot of modern cameras require high sensitivity in their highest dynamic range modes.
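
Working that example through in code makes the shortfall obvious. These are just the illustrative numbers above, not anything specific to the ALEXA 35:

```python
import math

# Full well of 10,000 electrons, noise floor of roughly four electrons.
full_well_electrons = 10_000
noise_floor_electrons = 4          # "a handful of electrons"

ratio = full_well_electrons / noise_floor_electrons
stops = math.log2(ratio)

print(f"Dynamic range: {ratio:,.0f}:1, or about {stops:.1f} stops")
# -> 2,500:1, about 11.3 stops: a long way short of 17.
```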

Camera sensors and decibels

An electrical engineer would look at this in decibels. Decibels are, like f-stops, a logarithmic system, such that 6.02dB equals a doubling of signal level. Thus, a 17-stop camera has a dynamic range of 17×6.02 = 102.34dB. That’s intimidatingly huge, considering that the noise generated by a simple resistor (from heat, stray radio signals and cosmic rays) is not that much lower in the sort of circumstances we’re talking about.
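
The conversion is simple enough to do in a couple of lines: one stop doubles the signal amplitude, and a doubling of amplitude is about 6.02dB.

```python
import math

# One stop doubles the signal amplitude; a doubling of amplitude
# is 20*log10(2) ≈ 6.02 dB.
DB_PER_STOP = 20 * math.log10(2)

for stops in (10, 14, 17):
    print(f"{stops} stops ≈ {stops * DB_PER_STOP:.2f} dB")
# 17 stops ≈ 102.35 dB (the 102.34 above comes from rounding to 6.02 dB per stop).
```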

(Strictly, a resistor’s mean-square noise voltage is 4kTRB, where k is Boltzmann’s constant, T is temperature, R is resistance and B is bandwidth; the temperature term is why really critical electronics are sometimes doused in liquid helium. Anyway, adding stops is really hard.)
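
Plugging in some entirely hypothetical but plausible numbers (room temperature, a 1kΩ resistance, a few hundred megahertz of bandwidth for a fast readout) gives a sense of the noise floor those tiny photosite signals are competing with:

```python
import math

# Thermal (Johnson) noise of a resistor: mean-square voltage = 4*k*T*R*B.
# The values below are illustrative assumptions, not ALEXA 35 specifics.
k = 1.380649e-23       # Boltzmann's constant, J/K
T = 300.0              # temperature, kelvin (roughly room temperature)
R = 1_000.0            # resistance, ohms
B = 250e6              # bandwidth, hertz (fast readout electronics)

v_rms = math.sqrt(4 * k * T * R * B)
print(f"Thermal noise: {v_rms * 1e6:.0f} microvolts RMS")   # ~64 µV

# Cooling helps because the noise scales with sqrt(T); hence the (only
# half-joking) reference to liquid helium for really critical electronics.
```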

Dual gain

So, the electronics which take those tiny photosite signals and turn them into digital numbers must work with minimal noise, very quickly. Building those electronics to accommodate 102dB of dynamic range without requiring that tank of liquid helium is bordering on magical thinking. Building two sets, each capable of 60-70dB with different amounts of amplification, is much more plausible; we call this dual gain. Think of it as shooting a two-exposure HDR in one hit, which the camera’s electronics can reassemble into a single image with lots of dynamic range.
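
As an illustration of the principle only (not Arri’s actual pipeline), combining the two readouts might look something like this: the high-gain path is trusted in the shadows, and the low-gain path takes over as the high-gain path approaches clipping. Every name and number here is hypothetical.

```python
import numpy as np

def combine_dual_gain(high_gain, low_gain, gain_ratio=16.0, clip_level=0.9):
    """Merge two readouts of the same exposure into one linear image.

    high_gain: amplified readout; clean shadows but clips in highlights.
    low_gain:  unamplified readout; noisy shadows but holds highlights.
    gain_ratio: how much more the high-gain path was amplified.
    All values are hypothetical, for illustration only.
    """
    # Bring the high-gain readout back to the low-gain scale.
    high_scaled = high_gain / gain_ratio

    # Blend smoothly near the high-gain clip point to avoid a visible seam.
    t = np.clip((high_gain - clip_level) / (1.0 - clip_level), 0.0, 1.0)
    return (1.0 - t) * high_scaled + t * low_gain

# Example: a gradient that clips in the high-gain path but not the low-gain one.
scene = np.linspace(0.0, 1.0, 8)
low = scene                                  # holds the whole range
high = np.clip(scene * 16.0, 0.0, 1.0)       # clips above 1/16 of full scale
print(combine_dual_gain(high, low))          # reconstructs the original gradient
```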

ARRI ALEXA 35.
Image: ARRI.

Noise reduction

The next tool in the toolbox is discussed only in hushed tones among camera specialists. Modern cameras almost invariably use some degree of digital noise reduction, although alternative terminology may be employed to soften the blow. Done properly – that is, with a very light touch – there is no reason to object, and it’s almost certainly the source of the “textures” feature the company has already described. Noise reduction involves engineering compromises between the visibility of artefacts and the amount of noise removed, and different textures will naturally result from different but equally reasonable choices.
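
To make that compromise concrete, here’s a deliberately crude sketch (nothing to do with Arri’s actual processing): a temporal blend in which a single strength parameter trades remaining noise against smearing of moving detail.

```python
import numpy as np

def temporal_denoise(frames, strength=0.5):
    """Deliberately simple temporal noise reduction, for illustration only.

    strength near 0: almost no smoothing, full noise, no artefacts.
    strength near 1: heavy smoothing, much less noise, but moving detail
    starts to smear - the engineering compromise in miniature.
    """
    out = []
    accumulator = frames[0].astype(float)
    for frame in frames:
        # Exponential moving average across frames.
        accumulator = strength * accumulator + (1.0 - strength) * frame
        out.append(accumulator.copy())
    return out

# Example: a static grey frame with fresh random noise added every frame.
rng = np.random.default_rng(0)
noisy = [0.5 + 0.05 * rng.standard_normal((4, 4)) for _ in range(24)]
cleaned = temporal_denoise(noisy, strength=0.8)
print(f"Noise before: {np.std(noisy[-1]):.4f}")
print(f"Noise after:  {np.std(cleaned[-1]):.4f}")
```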

All of this has been used on a lot of cameras for many years. The unknowable factor is that sensor design is in a constant state of fundamental advancement. For instance, it’s long been a goal of sensor designers to completely separate the photosites from the processing electronics and connect them with a through-silicon via for each photosite. In mid-2022 that is not common, if it’s being done at all, but designs which are stacked to some extent are now commonplace and tend to free up more of the sensor area to actually be light-sensitive.

Certainly, Arri has an admirable reputation for conservatism in its specs, and it’s likely to have spent as much money as it reasonably needed to get access to every trick in the book. How much difference this makes to actual cinematography remains to be seen, and there are questions to ask about colorimetry and the overall look of the thing, too. In any case, it’s likely to be a bleeding-edge design, as it needs to be to see the company through another ten years of apparently effortless market leadership.

Tags: Technology Cameras Sensors
