
RedShark Summer Replay: 8 bit or 10 bit? The truth may surprise you

Written by Madeleine | Aug 23, 2013 2:00:00 PM
[Header image: quantized rose]

RedShark's only 10 months old, and our readership is growing all the time. So if you're a new arrival here, you'll have missed some great articles from earlier in the year.

These RedShark articles are too good to waste! So we're re-publishing them one per day for the next two weeks, under the banner "RedShark Summer Replay".

Here's today's Replay:


8 bit or 10 bit? The truth may surprise you

It’s a great pity that in order to enjoy the benefits of digital imaging, we must use pixels that may only be one of a comparatively small selection of colours, as opposed to the effectively infinite subtlety of nature. Phil Rhodes spreads light and understanding about quantization and noise.

This is quantization, the shortcut that's intrinsic to digital imaging: we take something that's infinitely variable (such as a colour) and sort it into one of several pre-defined categories.

Do this too harshly, and a sunset sky becomes a series of very roughly horizontal stripes, with wide ranges of oranges and reds reproduced as just one block of colour. The worst common case is that of a cheap computer monitor, which may use only one of 64 levels (a 6-bit signal) to represent each of the red, green and blue components, resulting in a total palette of only 262,144 different colours. That may be OK for Word and Excel, but it's clear that more is needed for moviemaking, where post production essentials like grading can exacerbate the problem.
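As a rough illustration – this little Python sketch is ours, not part of the original article – quantizing a narrow ramp of sunset-like values at different bit depths shows how quickly a smooth gradient collapses into bands:

```python
import numpy as np

# A smooth ramp standing in for the oranges of a sunset sky,
# confined to a fairly narrow slice of the full 0-1 signal range.
sky = np.linspace(0.55, 0.65, 5000)

for bits in (6, 8, 10):
    levels = 2 ** bits                        # 64, 256 or 1024 categories per channel
    codes = np.round(sky * (levels - 1))      # sort each value into a category
    bands = len(np.unique(codes))             # distinct blocks left in the sky
    print(f"{bits}-bit: the ramp survives as {bands} bands")
```

At 6 bits that whole slice of sky collapses into just seven bands; at 10 bits it keeps over a hundred, which is why the stripes soften as bit depth rises.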

Inside the camera

Almost all cameras use a comparatively high bit depth internally, so they can cleanly perform technically necessary normalisation – taming the "raw" output from the sensor – as well as give the user creative image controls. Even so, with few exceptions the output of the camera – be it a cable or a card – is either eight or ten bit. An 8-bit camera outputs pictures where the RGB values are quantized to one of 256 levels, whereas a 10-bit camera quantizes to one of 1024 levels. Considering that there are three colour channels, this means that an 8-bit camera outputs 24 bits per pixel and can represent any of 16,777,216 discrete colours. A ten-bit camera can represent any of 1,073,741,824 colours – that's over a billion.
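Those figures are easy to check for yourself; here's a quick sketch (again ours, not from the original piece) of the arithmetic:

```python
# Levels per channel, bits per RGB pixel and total palette at each bit depth.
for bits in (8, 10):
    levels = 2 ** bits            # 256 for 8-bit, 1024 for 10-bit
    bits_per_pixel = bits * 3     # three colour channels: R, G and B
    colours = levels ** 3         # every combination of the three channels
    print(f"{bits}-bit: {levels} levels per channel, "
          f"{bits_per_pixel} bits per pixel, {colours:,} colours")
```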

So, that’s straightforward: more is better? Well, not always. There are two confounding issues here, and the first of them – noise – is widely overlooked.

Noise is all important

All cameras produce noise: a variation in the image which has nothing to do with the amount of light coming through the lens. Before “film” people get smug, grain is also a variation in the image which has nothing to do with the amount of light coming through the lens, and is therefore also noise, no matter how popular it is. But in the case of a digital camera, if that random variation is more than one one-thousand-and-twenty-fourth of the maximum value of the signal, then there isn’t much point in recording it as ten bit – the extra precision is wasted in recording a largely random fluctuation. And this is the case in effectively all modern video cameras: 1/1024th of the signal level is equivalent, assuming a linear environment, to a noise floor of a bit more than –60dB, which a lot of cameras fail to achieve. Canon claim the C300 sensor has a maximum potential dynamic range of 72dB in its green channel, although this drops to 54dB in most practical use cases.
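It helps to put the quantization step and the noise floor into the same units. Here's a rough sketch, assuming a purely linear signal; the -54dB noise floor is just an illustrative figure based on the numbers above:

```python
import math

def step_in_db(bits):
    """Size of one quantization step relative to full scale, in dB."""
    return 20 * math.log10(1 / (2 ** bits - 1))

for bits in (8, 10):
    print(f"{bits}-bit step: {step_in_db(bits):.1f} dB below full scale")

# The extra two bits only buy real precision if the camera's noise floor
# sits below the 10-bit step; otherwise they mostly record random fluctuation.
camera_noise_floor_db = -54    # illustrative, roughly the practical C300 figure above
print("10-bit precision useful:", camera_noise_floor_db < step_in_db(10))
```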

This isn’t to say noise is bad. Quantization errors that would otherwise be visible can sometimes be mitigated – at the cost of visible graininess – by imposing a carefully-measured amount of randomly generated mathematical noise on the problem area. This technique is called dithering (error diffusion is one well-known form of it), and in many cases, recording beautiful uncompressed 10-bit pictures is just a very laborious way of applying dither so that grading doesn’t appear to cause as much damage. Audiences tolerate noise more readily than quantization artefacts.
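Here's a minimal sketch of the principle – plain random dithering rather than a full error-diffusion pass – showing that adding a little noise before a harsh quantizer trades banding for grain, and that the dithered result averages out far closer to the true value:

```python
import numpy as np

rng = np.random.default_rng(0)
levels = 64                                  # a deliberately harsh, 6-bit quantizer
true_value = 0.5037                          # a shade that falls between two levels

def quantize(x):
    return np.round(x * (levels - 1)) / (levels - 1)

plain = quantize(np.full(100_000, true_value))
noise = rng.uniform(-0.5, 0.5, 100_000) / (levels - 1)   # up to half a step of dither
dithered = quantize(true_value + noise)

print(f"target value:       {true_value:.4f}")
print(f"plain quantization: {plain.mean():.4f}  (stuck on one level)")
print(f"dithered average:   {dithered.mean():.4f}  (noisy pixels, truer average)")
```

The dithered pixels are individually noisier, but their average lands within a fraction of a step of the original shade – which is exactly the trade being described above.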

And then there's compression

The second problem is compression – or, rather, the result of it: yet another type of variation in the picture which was not motivated by light from the scene. It’s another kind of noise, and the finer the quantization, the more likely it is that this noise will be significant enough to alter values. When a high bit depth image is subjected to moderate or heavy compression, there’s a degree of doubt as to whether the high-precision values in the output video have much to do with the light in the original scene.

There is one more significant confounding factor with bit depth, and that’s gamma encoding. There’s increased awareness of this with the popularity of logarithmic-style curves, which modify the luminance values of the signal while it’s still at a high bit depth in the camera’s electronics (often 12, 14 or 16 bits), before the bit depth is reduced for recording. This is a legitimate technique that makes the best use of that limited number of colour categories we discussed earlier, encoding the image in a way that minimises the limitations of quantization. Even so, talking of “log” as if gamma encoding were a new idea distracts from the fact that there has always been a significant amount of processing between sensors and displays, because neither has ever been anything like linear. Double the light does not equate to double the signal level at any point in the chain. It is difficult to evaluate this precisely, as most cameras produce output that has been hand-tweaked by their manufacturers for the best results, but it certainly has an enormously significant effect on noise, precision and quantization.
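To see why the curve matters, here's a rough sketch using a simple power-law gamma as a stand-in for a manufacturer's log curve. Encoding before the bit-depth reduction spends more code values on the shadows, where linear quantization is coarsest relative to the signal:

```python
import numpy as np

bits = 10
levels = 2 ** bits
gamma = 1 / 2.4        # a simple power law, standing in for a real log/gamma curve

def quantize_linear(x):
    return np.round(x * (levels - 1)) / (levels - 1)

def quantize_gamma(x):
    encoded = np.round((x ** gamma) * (levels - 1)) / (levels - 1)
    return encoded ** (1 / gamma)             # decode back to linear light

shadows = np.linspace(0.001, 0.01, 10000)     # deep shadows, in linear light
for name, fn in (("linear", quantize_linear), ("gamma-encoded", quantize_gamma)):
    err = np.abs(fn(shadows) - shadows) / shadows
    print(f"{name:13s} 10-bit: worst relative error in the shadows {err.max():.1%}")
```

With the curve in place, the worst-case relative error in the deep shadows falls from tens of percent to a couple of percent – using exactly the same 1024 levels.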

In a practical sense, if you’ve bought a mainstream 8-bit professional camera, you should be happy. You probably wouldn’t notice any difference if it were 10 bit.