
Why do images from the very best cameras look different?


Image: Digital Bolex / RedShark Publications

How do we measure the largely intangible quality of a given device that somehow makes the pictures from Camera A look better than the ones from Camera B?

"Feeling", "emotion", "beauty", "sensuality": these are not words that you naturally associate with "Computing" or "Digital". And yet, somehow, these concepts that sit so uncomfortably with the seemingly clinical, logical world of IT, most definitely can be expressed by it.

There's an obvious sense in which this is true. If you take a digital picture of a flower, then it's a flower. In so far as the mere mention (or the fuzziest image) of a flower triggers emotions, then, yes, it does make sense to associate those four words with computers.

Strictly, the computer is reproducing the flower as a symbol, and as long as you recognise the symbol, then whatever that symbol means for you will be triggered. The number 7 is a number 7 as soon as you recognise it. It doesn't matter whether it's handwritten, made up of a pattern of pixels, or hand-chiselled into the face of a mountain - it's still a number seven.

It's when you go beyond that that things get interesting. Beyond mere symbolism to the tangible quality of digital images.

At the other end of the scale from digital reproduction that is so bad that all we can say is "I think that's a flower" is the ability we now have to capture moving and still images better than film ever could. To do this we need to have a lot of storage, and a lot of pixels. Essentially, the more pixels the better (although that assumes that back in the analogue domain, you have lenses that are good too).

There's good news at this end of the scale. We are absolutely at the point where we can capture images that are so good that it's hard to think how we can improve them (at a sensible cost). And shadowing this discussion is another one: whether it's necessary to improve them at all.

Now, here's where it starts to get very interesting and more than a little controversial. Because if our images are now so good that we don't know how to improve them, then, logically, they must all look the same.

But they don't.

So, either our images aren't that good after all, or there is something about them that differs from camera to camera that is not just about resolution or accuracy. It's just possible that certain inaccuracies from certain cameras actually add to our perception of the quality of the image.

I know I keep saying this, but no-one ever complained about the resolution of an oil painting. What's the point I'm making here? It's that there is more to a picture than the rectangular grid of pixels it's made up of. And we're not really sure exactly what it is.

Certainly there's the "look" of a camera. This is a mixture of at least two factors: the way the camera "is" (the way it is designed, built and calibrated), and the way it is set up to look. This is how you can get two cameras with absolutely the same sensor looking different. A camera sensor is a very raw device - it needs a lot of help to produce a good picture, and in that near-sensor processing, there's a lot that can be tweaked.

But let's say that two cameras, from two different manufacturers, are set up to look the same. Maybe they're both aiming for a neutral, clear, transparent image. In other words, they're set up for "accurate" reproduction.

And yet, they will still look different.

Maybe not so much on first sight. And maybe not at all scientifically. This is actually quite weird, because two cameras that test the same can still look different.

There are reasons for this. Let's take an example from the world of audio, which is analogous.

Way back when I used to sell Digital Audio Workstations to musicians and audio dubbing editors, I was once asked if we could arrange a comparison between two well known products from different manufacturers.

So we set them up, carefully, so that if there was a difference, it would be due to the DAWs and not the microphones, loudspeakers, A/D and D/A converters, etc.

Interestingly, the customer wanted to hear the difference with everything switched out. No EQ, no compression, no gain: just, essentially, end-to-end digital.

At that point, I said there wouldn't be a difference, because we were using the same converters, and all the DAWs were going to do was pass through an unchanged, uncompressed audio signal. There would be, I conjectured, no difference if we took both DAWs out of the chain and just connected the converters back to back.

But there WAS a difference. We all heard it. It was subtle but unmistakable. 

This was very odd. The customer chose one system over another because he preferred the sound. I wouldn't say that one sounded better than the other but it was clear enough that they weren't the same. I suspect if I'd played a wider range of music, I might have had a preference too.

But what was going on? How could they possibly be different? Surely this is not what's supposed to happen with digital processing? Surely if I read you the number seven, you'll write "7" and not "6.999473".

I can only imagine that inside the digital audio workstations the audio was being converted into a wider space - say, 32-bit floating point from our 16-bit input. When you convert back from this, there's every chance that the numbers won't be exact. Likewise with gain. We had gain set to zero, but that doesn't mean that the gain stages were bypassed, just that they were set not to increase or decrease the amplitude of the signal. And if they weren't bypassed, then any errors inherent in the gain algorithms would be imposed on the zero-gain signal just as much as on a positively or negatively amplified one.
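To make that concrete, here's a minimal sketch in Python with NumPy - my own illustration, not the internals of any actual DAW, and the ±3 dB figures are purely for demonstration. A nominally zero-gain path boosts by +3 dB and then cuts by -3 dB in 32-bit float. On paper the two stages cancel exactly, but the samples that come out are no longer bit-identical to the ones that went in.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.integers(-2**15, 2**15, size=1000, dtype=np.int16)  # 16-bit audio

# int16 -> float32 is exact: every 16-bit value fits in float32's 24-bit mantissa
x = samples.astype(np.float32)

# A "zero gain" path that still passes through gain stages:
# boost by +3 dB, then cut by -3 dB. Mathematically a no-op.
boost = np.float32(10.0 ** (3.0 / 20.0))
cut = np.float32(10.0 ** (-3.0 / 20.0))
y = (x * boost) * cut

# Each multiply rounds to the nearest float32, so the "cancelled"
# gains leave tiny residual errors behind
changed = np.count_nonzero(y != x)
print(f"{changed} of {len(x)} samples differ after a 'no-op' gain stage")
```

Whether those residuals survive re-quantisation back to 16 bits depends on the rest of the chain (dither, summing, filters), but the point stands: "unity gain" does not mean "untouched".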

This is just a tiny example. There could be hundreds of points of inaccuracy within a camera, all of which have the opportunity to change the signal. No wonder cameras look different.

There's a concept in audiophile circles called "musicality". No-one's quite sure what it means, but it does make sense. At the point where your fifteen-thousand-dollar hi-fi amp sounds as good as another fifteen-thousand-dollar hi-fi amp, and yet it sounds different, which one is better for music? The one that is agreed to be better is said to have more musicality than the other. There may be good scientific reasons for this. Tube (valve) guitar amps, for example, are meant to distort the music. That's how you get an electric guitar sound. But it's the way they distort that matters. Tube amplifiers tend to emphasise even-numbered harmonics when they distort, and this sounds good. It's musical.
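As a toy illustration of that last point - my own sketch, not a model of any real amplifier - run a sine through a symmetric clipper and through an asymmetric, tanh-shaped curve, then compare the energy at the second harmonic. The asymmetric "tube-like" curve puts real energy at the even harmonic; the symmetric one doesn't.

```python
import numpy as np

fs = 48_000                      # 1 second of audio -> FFT bins are 1 Hz apart
f0 = 1_000                      # fundamental at 1 kHz
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * f0 * t)

# Symmetric hard clip: both half-cycles distorted identically -> odd harmonics only
symmetric = np.clip(x, -0.5, 0.5)

# Asymmetric tanh curve (a crude stand-in for a tube stage): the two
# half-cycles are compressed by different amounts -> even harmonics appear
asymmetric = np.tanh(x + 0.3) - np.tanh(0.3)

window = np.hanning(fs)
spec_sym = np.abs(np.fft.rfft(symmetric * window))
spec_asym = np.abs(np.fft.rfft(asymmetric * window))

# Magnitude in the 2nd-harmonic bin (2 kHz)
print(f"2nd harmonic, symmetric clip:   {spec_sym[2 * f0]:.6f}")
print(f"2nd harmonic, asymmetric curve: {spec_asym[2 * f0]:.6f}")
```

The asymmetric curve's second-harmonic bin dwarfs the symmetric clipper's, which sits at essentially numerical noise - the same mechanism that gives a tube stage its character.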

I'm wondering if we should have an equivalent term to "musicality" to apply to camera images. Something that would allow us to express the largely intangible quality of a given device that somehow makes its pictures look better than those from another camera that doesn't have that ability.

We do talk about the "look" of a camera, but this isn't (I think!) what I'm getting at here. I'm wondering how we can describe what it is that makes one camera better than another at making images of a flower, a face, a landscape or an interior. Maybe this "quality" comes from the way the camera handles highlights, or it's a specific aspect of how it saturates the colours. There are so many ways that a camera can affect the picture, and it may take only a single aspect of one of them to give the camera a unique quality.

Strictly, this could all be nonsense. If it is, I'm happy to accept that and I will go and crawl under the nearest rock. But - and please don't accuse me of magical thinking here, because it's not - I can't help feeling that there is something in this that we should be talking about.

