
Why are some cameras better at colour reproduction than others?



If you have ever wondered why all cameras render colour differently, read on.

It's frustrating when experienced camera people with significant expertise fall back on vague and unsubstantiated rhetoric to describe why a particular camera was chosen for a particular job. That's especially true given how good a lot of modern cameras are. Per Bohler gave a fascinating talk called “Single Sensor Video Cameras and the TLCI-2012” at this year's IBC. He happily confirmed that all the cameras tested for the paper produced perfectly acceptable images. Some were more capable of distinguishing similar shades of difficult colours (teal, navy blue and purple are often tricky) and were therefore objectively better, but all did an acceptable job. What we're here to discuss is why this happens and what the implications are.

So, why can't some cameras tell what colour things are? When we think about it, there's one overwhelming factor that determines a camera's fundamental ability to see colour. We can mess about with mathematics in the electronics, or in post production if we're shooting raw, but the colour response of a modern, single-chip cinematography camera is almost entirely determined by the colour filters on its sensor. These are real-world filter dyes that have to be manufactured and applied to the sensor, which places practical limits on what can be achieved, and in practice they tend to be surprisingly pale colours, lacking saturation.

This is in some contrast to the more traditional 3-chip cameras, which use filters that reflect the unwanted colours rather than absorbing them, but which still suffer practical limitations on what can be achieved. The filters' responses overlap, so that (for instance) yellow light is seen by both the red and green channels. This is also the case in single-sensor cameras, although the overlap tends to be far bigger. There are two reasons for this. First, with a single sensor designed to see red, green and blue, there are alternating colour filters on adjacent sensor photosites, so there are, of course, gaps to fill in the image for each colour channel. It's easier to fill in those gaps accurately when there's at least some sort of image in, say, the red and blue channels for an object that's really primarily green. The other reason is that deeper-coloured filters absorb more light and compromise the sensitivity of the camera.
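To make that gap-filling idea concrete, here's a deliberately simplified sketch in Python. It fills in the missing green samples of a tiny Bayer mosaic by plain averaging of neighbouring green photosites; real cameras use far cleverer demosaicing algorithms, and the RGGB layout and numbers here are just assumptions for illustration.

```python
import numpy as np

def fill_green(mosaic):
    """Fill the green plane of a raw RGGB mosaic by averaging neighbouring green sites.

    A toy bilinear fill, not any real camera's demosaicing algorithm.
    """
    h, w = mosaic.shape
    green = np.zeros_like(mosaic, dtype=float)
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 1:            # green photosites in an RGGB pattern
                green[y, x] = mosaic[y, x]
            else:                           # red or blue site: average the green neighbours
                neighbours = [mosaic[ny, nx]
                              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w]
                green[y, x] = sum(neighbours) / len(neighbours)
    return green

# Tiny made-up raw frame: the larger values sit on the green sites.
raw = np.array([[ 10, 200,  12, 210],
                [190,  30, 205,  28],
                [ 11, 198,  13, 206],
                [195,  29, 202,  31]])
print(fill_green(raw))
```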

The downside

What's the downside? Well, simply the fact that if we are looking at, say, a yellow object, it will activate both the red and green channels — and that’s fine. The problem is that if it's a slightly reddish-yellow object it's hard to tell from a slightly greenish-yellow object, because they both activate the red and green channels to a very similar amount. There will be a difference, an imbalance between red and green, which might hint at the real colour of the object — but if the colour filters are sufficiently pale, that difference might be very subtle.
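To put rough numbers on that, here's a toy calculation in Python. The filter curves and object spectra are invented purely for illustration (no real camera's data), but they show how pale, overlapping filters squeeze two different yellows towards almost the same red/green balance, while deeper filters pull them apart at the cost of absorbing more light.

```python
import numpy as np

wl = np.arange(400, 701)                       # wavelengths in nm

def gaussian(centre, width):
    """A simple bell curve standing in for a filter sensitivity or object spectrum."""
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

reddish_yellow  = gaussian(585, 10)            # assumed object spectra, 10 nm apart
greenish_yellow = gaussian(575, 10)

def red_green_ratio(spectrum, filter_width):
    """A channel's response is the overlap integral of filter curve times spectrum."""
    red   = gaussian(600, filter_width)
    green = gaussian(540, filter_width)
    return np.trapz(red * spectrum, wl) / np.trapz(green * spectrum, wl)

for width, label in [(80, "pale, overlapping filters"), (30, "deep, saturated filters")]:
    r1 = red_green_ratio(reddish_yellow, width)
    r2 = red_green_ratio(greenish_yellow, width)
    print(f"{label}: red/green ratio {r1:.2f} vs {r2:.2f}")
# With the pale filters the two yellows produce nearly the same red/green balance,
# so the difference is easily swamped by noise; the deeper filters separate them,
# but only by throwing away more of the light.
```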

So, when we try to recover a normal picture from a single-chip camera, we fundamentally need to turn up the colour saturation, almost literally, as it would be done with the hue/saturation adjustment in Photoshop. If the difference between the red and green channels for our yellowish objects is small enough, the distinction between reddish-yellow and greenish-yellow can become so small that it's lost in the fuzz and grain of the camera's noise. Turn up the saturation control high enough to see that subtle change in colour and the image becomes unacceptably grainy. Beyond that, objects which reflect a really complicated combination of colours can look different to the eye (or to one camera) yet be indistinguishable to another camera. Depending on the combination of colours involved, these problems can be genuinely impossible to solve, because the camera simply can't see the difference.
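Here's the saturation problem in the same toy terms. The saturation boost below is a generic illustration, not any particular camera's or grading tool's maths; the point is simply that whatever multiplies the small colour difference between two near-identical yellows also multiplies the noise riding on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def saturate(rgb, s):
    """Scale each channel's distance from the pixel's own luma by a factor s."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])       # Rec.709 luma weights
    return luma[..., None] + s * (rgb - luma[..., None])

# Two made-up "yellows" whose red/green responses differ by only a couple of
# percent, plus a dose of sensor noise.
clean = np.array([[1.00, 0.95, 0.10],
                  [0.98, 0.97, 0.10]])
noisy = clean + rng.normal(0.0, 0.02, clean.shape)

for s in (1.0, 4.0):
    boosted = saturate(noisy, s)
    print(f"saturation x{s}: difference between the two pixels =",
          np.round(boosted[0] - boosted[1], 3))
# The gap between the two colours grows with s, but the noise grows with it,
# so the signal-to-noise ratio of that colour difference never improves.
```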

Still, that's broadly why some cameras see colours better than others (but may have reduced sensitivity), while some cameras have high sensitivity and dynamic range (but may have less precise colour rendering). Interestingly, the degree of overlap is actually specified in the relevant standards for 3-chip cameras, although many modern designs use any convenient filter and then approximate the same results electronically. There is no commonly-agreed standard for single-chip cameras. It's also worth bearing in mind that the human eye has its own sensitivity to red, green and blue, though the three responses overlap so much that it's a stretch to refer to them by specific colours. They're more accurately described as being sensitive, respectively, to reddish-yellow and lemon colours; to lemon through orange all the way to teal; and to anywhere from mid-green to blue.

How do humans see colour so keenly, then?

Well, the same proviso applies here as it does to many other things about human vision: what we think we're seeing is the result of a lot of processing in our brains, just as movie footage is the result of a lot of processing. What cameras can't currently do is recognise that the object in view is an orange and should therefore be appropriately orange coloured; that's a job for AI. A difficult job, but not one that's completely impossible given sufficiently sci-fi assumptions.
