RedShark Replay: It’s a great pity that in order to enjoy the benefits of digital imaging, we must use pixels that may only be one of a comparatively small selection of colours, as opposed to the effectively infinite subtlety of nature. Phil Rhodes spreads light and understanding about quantization and noise.
This is a form of quantization, the shortcut intrinsic to digital imaging whereby we take something that’s infinitely variable (such as a colour) and sort it into one of several pre-defined categories.
Do this too harshly, and a sunset sky becomes a series of very roughly horizontal stripes, with wide ranges of oranges and reds reproduced as just one block of colour. The worst common case is the cheap computer monitor, which in some cases uses only one of 64 levels (a 6-bit signal) to represent each of the red, green and blue components, resulting in a total palette of only 262,144 different colours. That may be OK for Word and Excel, but it’s clear that more is needed for moviemaking, where post production essentials like grading may exacerbate the problem.
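To see just how few levels that is, here’s a minimal Python sketch – the 1920-pixel ramp and the 6-bit target are illustrative assumptions rather than measurements of any particular monitor – which requantizes a smooth 8-bit gradient down to 64 levels, the operation that turns a sunset into stripes.

```python
import numpy as np

# A smooth horizontal gradient, stored as ordinary 8-bit values (0-255).
gradient = np.linspace(0, 255, 1920).astype(np.uint8)

# Requantize to 6 bits: only 64 distinct levels survive.
levels = 64
step = 256 / levels
banded = (np.floor(gradient / step) * step).astype(np.uint8)

print("distinct values before:", len(np.unique(gradient)))  # all 256 levels
print("distinct values after: ", len(np.unique(banded)))    # at most 64
print("total 6-bit RGB palette:", levels ** 3)               # 262,144
```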
Inside the camera
Almost all cameras use comparatively high bit depth internally, so they can cleanly perform technically necessary normalisation – taming the “raw” output from the sensor – as well as give the user creative image controls. Even so, the output of the camera – be it over a cable or onto a card – is, with few exceptions, either eight or ten bit. An 8-bit camera outputs pictures where the RGB values are quantized to one of 256 levels, whereas a 10-bit camera quantizes to one of 1024 levels. Considering that there are three colour channels, this means that an 8-bit camera records 24 bits per pixel and can represent any of 16,777,216 discrete colours. A ten-bit camera can represent any of 1,073,741,824 colours – that’s over a billion.
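Those figures are easy to verify for yourself; here’s a quick sketch of the arithmetic (purely illustrative, nothing camera-specific about it):

```python
# Levels per channel and total palette size for common bit depths.
for bits in (6, 8, 10, 12):
    levels = 2 ** bits        # levels per channel
    palette = levels ** 3     # three channels: red, green and blue
    print(f"{bits:>2}-bit: {levels:>5} levels/channel, "
          f"{bits * 3:>2} bits/pixel, {palette:,} colours")
```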
So, that’s straightforward: more is better? Well, not always. There are two confounding issues here, the first of which – noise – is widely overlooked.
Noise is all-important
All cameras produce noise: a variation in the image which has nothing to do with the amount of light coming through the lens. Before “film” people get smug, grain is also a variation in the image which has nothing to do with the amount of light coming through the lens, and is therefore also noise, no matter how popular it is. But in the case of a digital camera, if that random variation is more than 1/1024th of the maximum value of the signal, then there isn’t much point in recording it as ten bit – the extra precision is wasted in recording a largely random fluctuation. And this is the case in effectively all modern video cameras: 1/1024th of the signal level is equivalent, assuming a linear environment, to a noise floor of a little more than –60dB, which a lot of cameras fail to achieve. Canon claim the C300 sensor has a maximum potential dynamic range of 72dB in its green channel, although this drops to 54dB in most practical use cases.
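For anyone who wants to see where the –60dB figure comes from, here’s a rough sketch assuming a simple linear signal, as above; the decibel-to-bits conversion is just the standard 20·log₁₀ relationship, with the Canon numbers quoted above plugged in for illustration.

```python
import math

def quantization_step_db(bits):
    """Size of one quantization step relative to full scale, in dB,
    for a linear signal quantized to 2**bits levels."""
    return 20 * math.log10(1 / 2 ** bits)

def useful_bits(dynamic_range_db):
    """Roughly how many bits of linear precision a given dynamic range can fill."""
    return dynamic_range_db / 20 * math.log2(10)

print(f"10-bit step: {quantization_step_db(10):.1f} dB")   # about -60.2 dB
print(f"72 dB range: {useful_bits(72):.1f} useful bits")   # about 12
print(f"54 dB range: {useful_bits(54):.1f} useful bits")   # about 9
```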
This isn’t to say noise is bad. Quantization errors that would otherwise be visible can sometimes be mitigated – at the cost of visible graininess – by imposing a carefully-measured amount of randomly generated mathematical noise on the problem area. This technique is called dithering (error diffusion is one well-known refinement of it), and in many cases, recording beautiful uncompressed 10-bit pictures is just a very laborious way of applying dithering so that grading doesn’t appear to cause as much damage. Audiences tolerate noise more readily than quantization artefacts.
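As a rough illustration of the principle – a simple random-noise dither applied to a synthetic gradient, not a model of any particular camera’s processing; the gradient values, noise amplitude and 32-pixel blocks are all my own assumptions – the sketch below swaps banding for grain:

```python
import numpy as np

rng = np.random.default_rng(1)

# A very gentle gradient: only four 8-bit levels spread across 1920 pixels,
# the sort of sky that shows obvious banding when quantized naively.
true_values = np.linspace(100.0, 104.0, 1920)

# Naive quantization: round to the nearest level -- produces wide stripes.
banded = np.round(true_values)

# Dithered quantization: add up to half a level of random noise first,
# turning the stripes into fine grain instead.
dithered = np.round(true_values + rng.uniform(-0.5, 0.5, true_values.shape))

def block_error(quantized, block=32):
    """Average error after smoothing over small neighbourhoods, much as the
    eye averages fine grain across a smooth sky."""
    q = quantized.reshape(-1, block).mean(axis=1)
    t = true_values.reshape(-1, block).mean(axis=1)
    return np.mean(np.abs(q - t))

# The dithered version tracks the original gradient far more closely once
# locally averaged, at the cost of pixel-to-pixel graininess.
print("banded block error:  ", round(block_error(banded), 3))
print("dithered block error:", round(block_error(dithered), 3))
```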