RedShark is only 10 months old, and our readership is growing all the time. So if you're a new arrival here, you'll have missed some great articles from earlier in the year.
These RedShark articles are too good to waste! So we're re-publishing them one per day for the next two weeks, under the banner "RedShark Summer Replay".
Here's today's Replay:
What do we lose when we go digital?
My article last week about whether you could build a camera in a shed sparked off discussions in all directions, but a persistent one was whether you lose something when you go from analogue to digital, and in particular, whether vinyl records will always sound better than digital reproduction. This is a fascinating subject that, seemingly, won't go away. But it should have by now, surely? It also has a lot in common with film vs digital video recording.
We can now sample audio and video at such high resolutions that there should be no question that digital will look and sound better. Is there anyone who would prefer a scratchy, dusty spinning record to a pristine 192 kHz, 24-bit recording?
Well, yes, there is. And it's just possible that I might be one of them.
Perhaps I'd better qualify that.
I’ve spent the last 28 years working with digital audio, but in fact, "Going Digital" has been going on for a lot longer than that. Incredibly, some of the digital processing techniques that are in use in today's studios and edit suites were invented in the 1920s and 30s. They were only theoretical then, and it was only in the 60s and 70s in forward-looking research establishments like IRCAM (Institut de Recherche et Coordination Acoustique/Musique) that computer music composition programs started to produce real results - even if it took a week of number-crunching on those early computers to generate a few seconds of synthesised sound.
So I know what digital audio sounds like; good and bad. The sad fact is that most of it - compared to how good it could be - is actually pretty bad. Psychoacoustic compression like MP3 hasn't helped to maintain quality (unless you're talking about restricted bandwidth, in which case it has vastly improved the quality of audio that you can push, say, down a copper telephone line).
The basics of digital audio sampling are pretty simple to understand. It's the unwanted consequences that can be harder to explain and to eliminate. I'm not going to go through this stuff in detail here because it would make for an extremely long article, but what I will say is that some of the most basic assumptions about sampling at the very least fail to tell the whole story. Suffice to say that the generally held notion that if you sample at twice the highest wanted frequency then you'll get all the frequencies you need reproduced is only true if you look at it edgeways. There's all manner of stuff that happens as a consequence of sampling and it's not impossible - indeed it's quite likely - that two digital audio devices playing the same material will sound discernibly different.
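One of those consequences is aliasing, and it shows why the "twice the highest wanted frequency" rule has teeth. Here's a small Python sketch (my illustration, not from the original discussion) showing that a 30 kHz tone sampled at 48 kHz produces exactly the same samples as an 18 kHz tone of opposite phase - any frequency above the Nyquist limit of 24 kHz simply "folds back" into the band below it:

```python
import math

fs = 48_000          # sample rate (Hz); Nyquist limit is fs / 2 = 24 kHz
f_in = 30_000        # input tone, above the Nyquist limit
f_alias = fs - f_in  # 18_000 Hz: where the tone folds back to

# Sample one millisecond of each tone
n_samples = 48
tone = [math.sin(2 * math.pi * f_in * n / fs) for n in range(n_samples)]
folded = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(n_samples)]

# The two sets of samples are numerically indistinguishable
max_diff = max(abs(a - b) for a, b in zip(tone, folded))
print(f"max difference between sample sets: {max_diff:.2e}")
```

Once the samples are identical, no amount of downstream processing can tell the two tones apart - which is why the low-pass "anti-aliasing" filter has to do its job before the converter, not after.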
But sampling is only one part of the digital audio equation. The other part is the number of bits used to describe the audio.
CDs use 16 bits and, used properly, can sound pretty good in terms of the dynamic range they can reproduce. It's quite rare - and potentially harmful to your loudspeakers - to need more than 16 bits to describe any recording, but if you focus on the quiet sections, they don't sound as good as the loud ones. This shouldn't matter, because they'll be so quiet that you won't notice, but in practice, most recordings don't use anything like all 16 bits - if they did, it would have been a miracle that they didn't over-modulate during recording and cause some nasty digital distortion whenever there were unexpected peaks.
Think of it like this. If you use all 16 bits, you have 65,536 levels to describe the amplitude of a sound. But quieter sounds may use only 8 of those bits, which means there are only 256 levels. You can hear the difference. The worst case is something like a sustained piano chord, which might take almost 30 seconds to die away to nothing. As you get towards the tail of the sound, what you'll hear is mostly distortion, as the digital system struggles to reproduce the waveform with as few as two or three bits of resolution. A silky-smooth sine wave can easily be reduced to nothing more than an unpleasant-sounding staircase.
This type of artifact, called quantization noise, is particularly unpleasant, but it is intrinsic to any system that quantizes amplitude. The best you can do is minimise it by using more bits, and 24 bits does a very good job while remaining a practical proposition for modern equipment and recording media - except that CDs remain stubbornly 16-bit at a 44.1 kHz sample rate. (Higher-quality formats such as Super Audio CD and DVD-Audio never really found a mass market, and seem incredibly niche in a world where the average consumer thinks that 128 kbit/s MP3s sound good.)
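The level-counting argument is easy to check numerically. This Python sketch (a toy of my own, with an arbitrarily chosen -80 dB quiet level) quantizes a full-scale sine wave and a very quiet one to 16 bits, and counts how many distinct levels each actually uses:

```python
import math

def quantize(x, bits):
    """Round x (in the range -1.0 .. 1.0) to the nearest of 2**bits levels."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

bits = 16
loud = [math.sin(2 * math.pi * n / 100) for n in range(100)]  # full scale
quiet = [0.0001 * s for s in loud]                            # roughly -80 dB

loud_levels = {quantize(s, bits) for s in loud}
quiet_levels = {quantize(s, bits) for s in quiet}
print(f"distinct levels used: loud={len(loud_levels)}, quiet={len(quiet_levels)}")
```

The full-scale wave lands on dozens of distinct levels per cycle, while the quiet one is squeezed into a handful - and a sine wave drawn with half a dozen levels is exactly the staircase described above.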
There’s another issue as well when going from analogue to digital: the quality of the conversion.
It’s fair to say that modern analogue to digital converters are remarkably good and they find their way into everything these days. It’s still the case, though, that the more you pay, the better they tend to be; it’s just that the baseline is now pretty high and for most everyday purposes, you don’t need to give A/D converters a second thought.
But in critical listening situations, there most certainly are differences.
One element of A/D conversion is very tricky: it’s where you’re digitising a low level signal. We’ve already seen that there are issues because of the limited number of bits available to deal with a quiet signal, but there’s another thing - one from the analogue world - that can make a big difference to the quality of a converter as well, and it’s quite easy to understand.
When you digitise a loud signal, it’s easy enough to measure - because it’s loud and there’s a lot of it. If you’re a few percentage points off in your estimation, then it doesn’t matter because there’s a lot of leeway at this high level, and the analogue components that are used to handle the signals at these strengths don’t have to be very accurate. But at very low levels, the component values have to be very precise in order to present a signal that will trigger the decisions as to whether one level or the next is used to represent the sound. If these components aren’t precise, then the signal will be distorted.
There are ways to get round this, including an entirely different way of sampling the audio using only 1 bit of quantization but a very high sample rate. This is the method used with Super Audio CDs. It’s beyond the scope of this article (although if anyone’s interested, we can do a separate piece on it) but, broadly, you avoid the distortion at low levels because the single bit used to encode the audio is either on or off and it’s the statistical frequency with which the bit is high or low that determines the amplitude of the decoded signal. There is no need for precise quantization around the zero level because the accuracy is in the statistics rather than an absolute binary number representing amplitude. The drawback with this system is that it’s incompatible with conventional digital signal processing and needs an entirely dedicated equipment chain to make it work.
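For the curious, the statistical idea can be shown with a toy first-order delta-sigma modulator in Python - a deliberately simplified sketch of the principle, not of any real SACD hardware. The density of "+1" bits in the 1-bit output stream tracks the amplitude of the input:

```python
def delta_sigma(signal):
    """First-order delta-sigma modulator: one output bit per input sample.
    An accumulator integrates the error between input and output, so the
    long-run density of +1 bits tracks the input amplitude."""
    acc = 0.0
    bits = []
    for x in signal:
        out = 1.0 if acc >= 0 else -1.0  # 1-bit quantizer
        acc += x - out                   # feed the error back
        bits.append(out)
    return bits

# A steady input of 0.25 should give a +1 density of (0.25 + 1) / 2 = 0.625
bits = delta_sigma([0.25] * 10_000)
density = bits.count(1.0) / len(bits)
print(f"density of +1 bits: {density:.3f}")
```

Every output value is just +1 or -1, so there is no ladder of finely spaced analogue levels to get right - which is exactly the property that sidesteps the low-level precision problem described above.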
So, to sum up so far, digital audio does damage sound, but very high sample rates and a decent bit-depth can minimise the degradation.
The big question
But the question remains: can digital ever sound as good as analogue?
Well, part of me wants to say that, obviously, it can. And that's simply because a high bit-depth, high sample rate recording made in a studio can be reproduced as an exact digital copy, with no added tape noise and no added distortion. Of course it has to sound better than an analogue recording, where every single stage in the recorded signal's journey to our ears is effectively a lossy one. At every point, even down a simple analogue cable, the signal sounds worse than at the previous point. You can never make an analogue copy that is as good as the original - but here's the thing: after several generations of copying, a digital copy should sound considerably better than an analogue one. And yet, even though we're almost never in a position to listen to the master recording - unless it's a digital one - analogue enthusiasts still insist that scraping a needle through a jagged groove can sound better than any digital recording.
To get to the bottom of this, we have to look at it from another direction.
What is the most significant difference between analogue and digital recordings? Perhaps it is that with a digital recording, there is a limit to the resolution, which is determined - as we've seen above already - by the sample rate and by the quantization. So, any sound (or object within an image) that is smaller than a single quantization step simply won't be recorded. There could be a whole world of activity going on in between these levels, but it won't exist in the digital version. Even though - somewhat by definition - anything smaller than a single quantization step shouldn't be noticeable anyway, you have to wonder what might be the effect of the total absence of this entire microcosm of activity that is simply discarded with a digital recording. And if this is the case with amplitude, what about with frequencies? Anything above half the sample rate will either be lost or, worse (if there aren't adequate and suitable low-pass filters in place), will be reflected back into the audible spectrum as non-harmonically related (i.e. unpleasant) sounds.
There are undoubtedly sounds that happen in the natural world that are higher than the frequencies that we can hear, but which do have an effect on the overall quality of the sound. One simple mechanism that might explain this is that when two frequencies collide, they interact (through addition and subtraction) to make a third frequency - a so-called “beat” frequency - some of which will be back in the audio spectrum. This is going on around us all the time with almost infinite complexity and subtlety, and to me it seems likely that if we simply blank out these higher-frequency sounds, then we will be affecting the audible part of the spectrum - and our perception of it - in some way.
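The addition-and-subtraction interaction is easy to demonstrate numerically. This Python sketch (my illustration, with arbitrarily chosen frequencies) sums two ultrasonic tones at 25 kHz and 26 kHz and confirms, via the standard trigonometric identity, that the mixture is equivalent to a 25.5 kHz carrier whose envelope swells and fades at the 1 kHz difference frequency - though note that an actual 1 kHz spectral component only appears once something nonlinear (an ear, a loudspeaker, the air itself) processes the mixture:

```python
import math

fs = 200_000             # sample rate high enough to represent ultrasonic tones
f1, f2 = 25_000, 26_000  # two tones above the ~20 kHz limit of hearing

t = [n / fs for n in range(2_000)]  # 10 ms of signal
mixed = [math.sin(2 * math.pi * f1 * x) + math.sin(2 * math.pi * f2 * x)
         for x in t]

# Identity: sin(A) + sin(B) = 2 sin((A+B)/2) cos((A-B)/2)
# i.e. a carrier at (f1+f2)/2 with an envelope at the (f2-f1) beat rate
beat = [2 * math.sin(math.pi * (f1 + f2) * x) * math.cos(math.pi * (f2 - f1) * x)
        for x in t]

max_diff = max(abs(a - b) for a, b in zip(mixed, beat))
print(f"max difference between the two forms: {max_diff:.2e}")
```

Sampling at a rate that discards both original tones discards the envelope along with them, which is the nub of the argument above.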
Of course the frequency range of an analogue recording is not infinite, but at least the upper frequency limit is a gradual one and not a brick wall.
With analogue recording, even very quiet sounds can be recorded. Again, of course, there is a limit to this, but it won’t be a strict one, as it will be determined by noise and the capabilities of the overall system rather than a fixed level determined by the number of bits available, below which there will just be silky silence.
It’s hard to know how far to push with the notion that with analogue, everything - everything - is recorded. But it is worth thinking about it even if only in a philosophical sense. And although it might be tempting to dismiss this idea by saying that these very low level sounds will be completely obscured by the rumbling and crackling from a record deck, or hiss from a tape recording, our ears do have a remarkable ability to hear through background noise. Perhaps we average it over time and somehow filter it out. But whatever the mechanism, there is at least an argument that says there is more low-level detail in an analogue recording than in a digital one.
We’ve focused on audio here, but next, we’re going to look at film vs digital recording. A lot of what you’ve read here applies in the visual domain as well, but there are also some very significant differences.