
We may have solved the mystery of why film just looks better

The art of noise

RedShark Replay: It just won't go away, will it? However much you can prove with specifications that digital video is indisputably better than film, there's a stubborn feeling that there's more to it than the simple-to-prove facts. We think we've identified one subtle process that helps film to store more visible information than digital.

Recently we asked for readers' opinions on this, and we had a good response, although much of it was rather predictable. Some said that we shouldn't be comparing the two at all. Some said that, whatever anyone wants to believe, film will always be better - even going on to say that something is "lost" when we digitise things.

All of which may be true. But I think we’ve at last stumbled on something that might be tangible. It’s to do with the fundamental difference between film and digital.

It’s fairly easy to explain, but not that easy. And remember - this is just our theory: we're not going to be dogmatic about this and if anyone can prove us wrong, that's fine with us.

Here goes.

Film doesn't have pixels

Both film and digital have a limit to their resolution. With digital, the fundamental unit of resolution is the pixel. You can count pixels easily because they're arranged in a grid. There's a slight (well, actually rather severe) complication here, which is that to get colour out of a modern single-sensor camera you need a Bayer-pattern filter, and that reduces the effective resolution by the time the sensor's output has been run through debayering software, which essentially guesses what colour each pixel should be based on the ones around it. This makes it difficult to state an exact resolution, but as debayering algorithms get better and resolutions get higher, it doesn't change the fundamental dynamics of the film vs digital debate.
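(For the technically curious, here's a minimal sketch, in Python with NumPy and purely illustrative function names, of the sort of neighbour-averaging that the very simplest debayering performs: every missing colour value is "guessed" from whatever samples of that colour happen to sit nearby. Real cameras use far cleverer algorithms, but the principle - and the loss of effective resolution - is the same.)

```python
import numpy as np

def make_bayer_mosaic(rgb):
    """Simulate an RGGB Bayer sensor: each photosite keeps only one colour."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red on even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green on even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green on odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue on odd rows, odd columns
    return mosaic

def naive_demosaic(mosaic):
    """Crude bilinear debayer: every missing colour value is the average of the
    samples of that colour found in the surrounding 3x3 neighbourhood."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    out = np.zeros((h, w, 3))
    for channel, mask in enumerate([r_mask, g_mask, b_mask]):
        known = np.where(mask, mosaic, 0.0)
        total = np.zeros((h, w)); count = np.zeros((h, w))
        for dy in (-1, 0, 1):              # sum the known samples around each pixel
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
                count += np.roll(np.roll(mask.astype(float), dy, axis=0), dx, axis=1)
        out[..., channel] = total / np.maximum(count, 1)   # the "guess"
    return out

# Example: a smooth synthetic frame (simple colour ramps), mosaicked and rebuilt
y, x = np.mgrid[0:64, 0:64] / 64.0
rgb = np.dstack([x, y, (x + y) / 2])
estimate = naive_demosaic(make_bayer_mosaic(rgb))
print("mean reconstruction error on smooth content:", np.abs(estimate - rgb).mean())
```

(The np.roll trick simply wraps around at the frame edges, which is fine for a sketch but not something a real debayer would do.)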

Film doesn't have a grid of pixels. Far from it. What it has instead is, essentially, randomly shaped crystals of chemicals. And of course these vary completely from frame to frame, and between different parts of the same frame.

So whereas a digital system's grid never moves, film has no grid at all: if you try to look for corresponding areas of light on successive frames, you won't find them at a microscopic level.

So, you’d be perfectly entitled to ask at this point whether, how, or why this matters, when the film grain is too small to see.

Well, you can see film grain. That's one of the things about film. It needn't be obtrusive, and it won't be foremost in your mind, but it will undoubtedly have an effect on the viewer's perception of the moving images.

But there’s more to it than grain vs no grain, not least because you can always add “grain” to digital images. No, the effect is much more subtle, and yet more profound, than that.

This is where we’re going to talk about a concept that’s called “Dither”. This rather indecisive-sounding term is a powerful concept in the mechanics of digital media.

The question of "dither"

Strictly, dither is noise which is added to an analogue signal, usually just before it is digitised. You might wonder what the point of that is, when the whole idea of digital media is to have a clean signal without noise.

Let’s explain this with a simple example.

You've probably seen those contour lines in a blue sky on television or in videos. They illustrate that you need an awful lot of colours to describe the very shallow gradients you get in a blue sky. In reality, the sky is not simply "blue" at all, but every shade of blue you can imagine (and more), filling the continuum between the lightest part of the sky and the darkest.

You would need an almost infinite number of colours to produce a perfect rendition of this type of sky, but, luckily, you don’t need that many, because we only need a finite, albeit quite large, number for our eyes to see a continuous gradient.

But we probably need more colours than the 256 levels per colour channel that 8-bit video can give us. That's why you often see distinct bands of colour in blue skies, separated by a sudden jump to the next available colour.
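If you'd rather see the banding in numbers than in a sky, here's a tiny sketch in Python and NumPy. The 4-bit depth is deliberately exaggerated so the effect is obvious, but the mechanism is exactly the same at 8 bits.

```python
import numpy as np

# A smooth "sky": 1920 samples running evenly from dark (0.0) to light (1.0)
gradient = np.linspace(0.0, 1.0, 1920)

# Quantize it to 4 bits (16 levels) to exaggerate what happens in 8-bit video
levels = 2 ** 4 - 1
banded = np.round(gradient * levels) / levels

print("distinct shades available:", len(np.unique(banded)))                       # 16
print("fraction of flat sample-to-sample transitions:", np.mean(np.diff(banded) == 0))
print("size of each visible jump:", np.abs(np.diff(banded)).max())                # one full step
```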

This banding is a form of quantization error, and it's one of the most unpleasant side effects of digitisation. It's so undesirable that even quite drastic measures are sometimes preferable to the effect itself. One such measure is adding noise. Of course, this sounds completely bizarre in a context where we're trying to make a picture better, but it's not as bad as it seems, because when you add noise - or "dither" - it's at a very low level. In fact, it's at such a low level that it should only make a difference of less than one bit.

But surely, if it’s that quiet - and less than one bit - it can’t make any difference at all?

This is where it starts to get a bit strange

Well, this is where it starts to get a bit strange.

In the scenario we've talked about above, where we're trying to lessen the effect of having too few bits, the media has already been digitised, so we can't actually add information - noise, by definition, isn't information. We would probably have to add somewhat more than one bit's worth of noise to make a difference here. But once we've done that, the effect is pretty remarkable, because the noise randomises the position of the contour lines. Effectively, they just disappear.
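Here's the same toy gradient again, this time with noise added after the event. To be clear, this is only a sketch of the idea, and the noise level is an arbitrary choice: nothing is recovered in an information sense, but the hard, regular edges that the eye locks onto are no longer there.

```python
import numpy as np

rng = np.random.default_rng(1)

# The banded "sky" from before: a smooth gradient already quantized to 16 levels
gradient = np.linspace(0.0, 1.0, 1920)
levels = 2 ** 4 - 1
banded = np.round(gradient * levels) / levels

# Add a little more than one quantization step's worth of noise, after the fact
masked = banded + rng.normal(0.0, 1.2 / levels, banded.shape)

# Before: long perfectly flat runs separated by sudden one-step jumps - exactly
# the structure that reads as a contour line. After: no flat runs at all, so
# there is no clean edge left to see; the error is still there, just unstructured.
print("flat transitions, banded:", np.mean(np.diff(banded) == 0.0))
print("flat transitions, masked:", np.mean(np.diff(masked) == 0.0))
```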

If we’re prepared to accept the overall degradation caused by adding noise, then it’s often an acceptable compromise: a little gentle noise is far easier on the eye than unnatural-looking contour lines.

So that's one way in which noise can improve the look of a moving image. Ironically, I believe that many 8-bit cameras look better for having a slightly noisier image, because of exactly this effect!

Now, here’s where it gets almost magical - but, trust me, it isn’t: it’s soundly scientific.

There's a difference between adding noise to a signal that's already been digitised and adding it to one that's still analogue. Here's an example showing that the phenomenon has been known for at least seventy years!


Lessons from the aircraft industry

Back in the Second World War, when aircraft were being punched out of factories at an unbelievable rate, the quality of manufacture was perhaps not as important as sheer production volume. That's a polite way of saying that precision possibly took a back seat to proliferation.

So although the aircraft largely did their job, they were as basic as it was possible to make them, and some aspects of them didn’t work as well as you might have hoped.

Apparently one example of this was the dials and gauges on the aircraft's instrument panels. Whereas today we'd expect the needles on the dials to move smoothly, the displays in the cockpit were sometimes barely functional. The worst aspect was that they tended to move jerkily, constantly sticking, so that they moved in jumps rather than gradually.

Until, that is, the engines were started and revved up. There was nothing very subtle about the engines either, and they made the aircraft vibrate alarmingly. This was bad for the aircrew, but actually good for the meters, because the vibration completely overrode the "stickiness" and - subject only to the inaccuracy caused by the vibration itself - the dials would at last react smoothly to changes.

How does this apply to film?

So, turning to an analogue phenomenon that we’re far more familiar with, how does this apply to film?

Ironically, it’s easier to explain how analogue randomness improves the resolution of film by going back into the digital domain.

One of the complaints about digital media - let’s stick to audio for now - is that it’s very unforgiving. Its very precision is almost an affront to the unquantizable nature of the analogue world. You know the sort of thing: it’s like trying to describe the beauty of a rose using only the numbers one to two hundred and fifty six.

The problem’s particularly bad with low level audio - like the last few seconds of a piano note as it fades away into silence.

At normal listening volumes it's not too much of a problem: in a 16-bit audio system, if 16 bits is loud, then 1 or 2 bits is very, very quiet. Nevertheless, if you look at what the audio from a piano looks like when it's so quiet that it's represented by only 2, 3 or 4 bits, you'll find it resembles a staircase more than a nice smooth waveform.
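Here's a rough sketch of that staircase effect in Python and NumPy, with a decay rate chosen purely for illustration: the same 16-bit quantizer applied to the loud opening of a note and to its near-silent tail.

```python
import numpy as np

sample_rate = 48_000
t = np.arange(sample_rate) / sample_rate             # one second of samples
# A 440 Hz note decaying away towards silence (decay rate is arbitrary)
note = np.exp(-10 * t) * np.sin(2 * np.pi * 440 * t)

def quantize(x, bits):
    """Uniform quantizer: round to the nearest of 2**bits levels across -1..+1."""
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

start = quantize(note[:2000], 16)    # the loud opening of the note
tail  = quantize(note[-2000:], 16)   # the last moments of the fade-out

# The loud start sweeps through a huge number of distinct levels; the tail only
# crosses a handful, so the smooth sine has become a coarse staircase.
print("distinct levels used at the start:", len(np.unique(start)))
print("distinct levels used in the tail :", len(np.unique(tail)))
```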

And this is where random noise can help.

In between 1 and 0

Let’s imagine that you’re left with only one bit to represent your piano waveform: good luck with that because it will sound terrible! But this is an extreme example, just to make the point.

You will probably find that when the waveform is below 50% of its total amplitude, the digital version will be represented by a 0, and when it's above 50%, the number representing it will be 1.

But what about when it’s 55%? Well, it will obviously be a 1. And 45%? That’s a 0.

Now, imagine adding noise to the signal that occasionally pushes the 55% level below 50% - which would mean that sometimes a 55% signal is represented by a 0. At any given instant, that is clearly an error. But over time, these "errors" start to be significant, because they mean that the system is - somehow - registering the fact that some of these intermediate levels exist. It's a very slight effect, but a real one.

Just to recap: adding noise causes "errors", but cumulatively these errors are themselves significant, because they indicate levels between 1 and 0.
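Here's that recap as a quick Python sketch, using the 55% figure from the example above and noise spanning roughly one quantization step: without dither, the one-bit quantizer reports the level as permanently "on"; with dither, the long-run average of its output settles at 0.55.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

signal = np.full(n, 0.55)                  # a steady level at 55% of full scale

# One-bit quantizer: at or above 50% becomes 1, below 50% becomes 0
undithered = (signal >= 0.5).astype(float)
print("average without dither:", undithered.mean())    # 1.0 - the 55% is simply lost

# Add noise spanning roughly one quantization step before the one-bit decision
dither = rng.uniform(-0.5, 0.5, n)
dithered = ((signal + dither) >= 0.5).astype(float)
print("average with dither   :", dithered.mean())       # ~0.55, recovered over time
```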

Of course, we're not really talking about a system that only uses 1s and 0s, but one with levels from 0 to 65,535 (in a 16-bit system). And we're possibly talking about implicit levels between those - although the effect really is more significant at the lower levels (i.e. the smaller numbers of bits).

(If you're wondering how this could possibly be true, a related technique is used in so-called DSD audio, where 1-bit quantization is used with a very high sample rate - 2.8 MHz, 64 times that of CD. Essentially, with DSD, it is the number (or "density") of 1s and 0s that determines the height of the waveform at any instant.)
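For what it's worth, the pulse-density idea can be sketched with a first-order sigma-delta-style loop like the one below. This is a toy illustration in Python, not how real DSD converters are engineered, but it shows the principle: the output is nothing but 1s and 0s, yet averaging it over a short window gets a close approximation of the waveform back.

```python
import numpy as np

def pulse_density_modulate(signal):
    """First-order sigma-delta style loop: emits a stream of 0s and 1s whose
    local density of 1s tracks the input level (input assumed to lie in 0..1)."""
    accumulator = 0.0
    out = np.empty(len(signal))
    for i, x in enumerate(signal):
        accumulator += x           # keep a running total of the input
        if accumulator >= 1.0:     # whenever a whole "unit" has built up...
            out[i] = 1.0           # ...emit a 1 and subtract it back out
            accumulator -= 1.0
        else:
            out[i] = 0.0
    return out

# A slow sine wave scaled into the 0..1 range
t = np.linspace(0.0, 1.0, 64_000)
wave = 0.5 + 0.4 * np.sin(2 * np.pi * 5 * t)

bitstream = pulse_density_modulate(wave)

# Averaging the one-bit stream over a short window reconstructs the waveform
window = 512
recovered = np.convolve(bitstream, np.ones(window) / window, mode="same")
error = np.abs(recovered - wave)[window:-window]
print("worst-case error after averaging the 1-bit stream:", error.max())
```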

OK - back to the analogue domain again.


Analogue "sampling"

You could argue that even analogue film is “sampled”. The limit to the resolution of any film is the film grain. These particles of silver or dye are randomly shaped and are essentially the “pixels” of film.

But, unlike pixels, not only are they randomly sized and shaped within a single frame, but they will be completely different on successive frames. Essentially, this variability is “noise” in the image.

As such, as we have just seen in this article, noise can be a good thing, because it can make a sampling system capable of expressing more detail than it could without it.

Now, we’re getting into difficult territory here, because this is where we go from something that might be OK mathematically and scientifically, to something that may be little better than guesswork.

But here it is.

Why film is better

We think the reason film is "better" than digital is that, even though it is "sampled" at the resolution of the film grain, very subtle areas of colour and luminance can influence successive grain particles, revealing more detail over time than could be shown in a single still image.

You can see how this might work if you think about VHS tape recordings. VHS was one of the worst recording mediums ever - it was incredibly noisy. If you freeze-frame a VHS tape, the picture always looks terrible. But if you play the tape, it still isn't great, but it is better. That's because we're able to take the average over time and "see through" the noise. Of course, VHS tape is analogue.
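That "seeing through the noise" effect is easy to simulate. The sketch below, in Python and NumPy with an arbitrary noise level and frame rate, stands in for the eye and brain by explicitly averaging a second's worth of noisy frames: the noise drops by roughly the square root of the number of frames averaged.

```python
import numpy as np

rng = np.random.default_rng(7)

clean = rng.random((480, 640))            # any fixed picture will do for the sketch
noise_level = 0.2                         # arbitrary, but suitably "VHS-grade"

def playback_frame():
    """One frame off the tape: the clean picture plus a fresh helping of noise."""
    return clean + rng.normal(0.0, noise_level, clean.shape)

freeze_frame = playback_frame()                                         # what you see on pause
one_second   = np.mean([playback_frame() for _ in range(25)], axis=0)   # ~25 frames played

print("noise in a freeze-frame      :", (freeze_frame - clean).std())   # ~0.20
print("noise averaged over a second :", (one_second - clean).std())     # ~0.04, about 5x lower
```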

It's similar with film. Remember that the grain particles are random; essentially, they're all different. There's no rigid grid through which we have to paint the picture using numbers.

So while the size of the grain particles does limit the resolution of any single frame, with a moving picture we're able to see through the grain and cognitively build up a more detailed picture than the one we're presented with - one with potentially no fixed limit to resolution at all, other than the quality of the optics in the camera and the projector.

Perhaps in future, if we want to maximise the resolution and "naturalness" of our digital filmmaking, we should lower the lighting and rely on sensor noise to provide the randomness that allows this effect to take place. I suspect this is one reason why relatively cheap 8-bit video cameras are able to take such surprisingly good pictures: the sensor noise helps to smooth over the lack of colours in an 8-bit system.

Now, it’s perfectly fair to say that digital systems can and will always get better. At some point, the sheer precision and ultimate resolution of digital systems will make all of the above discussion irrelevant. In fact some might say (including me!) that we’ve already reached that point: that digital is simply “better”.

But there’s at least part of me that wants to say that, just as vinyl records seem to have that indefinable depth of nuance that is lacking in CDs (which, by the way, are not by any means the highest quality recording medium available), film, with all its imperfections - and perhaps because of its imperfections - will always be a “layer” closer to reality - and hence perfection.
