
How to understand Digital Signal Processing


Ever wondered how Digital Signal Processing (DSP) works?

Digital Signal Processing is everywhere. It's in your TV, your phone, your in-car entertainment system, in hospitals, radar systems and thousands of obscure places that you and I have never thought about. As media professionals, we get to see it in a much rawer state than most consumers. DAWs and NLEs are loaded with DSP. It's part of the fabric of our lives, and yet most of the time, we don't have an intuitive feeling about how it works.

And that's not surprising. Digital Signal Processing tends to be highly abstract, which is to say, "mathematical". The maths behind digital signal processing is eyebrow-raising, but for most people, there's no need to go there because the basic principles are very simple.

Just a quick note about the technical level of this article: it's not technical. Most of what I say here is a massive over-simplification. But for anyone who's never looked at DSP before, it's just a stepping stone to a deeper understanding.

The most astonishing thing I learned about digital audio was that you could turn a continuously varying pressure wave into a string of meaningful digits. That, in itself, is a remarkable transformation. It makes things possible that were previously impossible - like making "perfect" copies. It also means you can perform a mathematical operation on a "signal".

Before you do anything with digital audio, you must take your original signal and turn it into numbers. That's the job of an Analogue to Digital converter. You find these wherever there's a transducer, like a camera sensor or a microphone (the converter isn't normally in the microphone itself but comes after the preamplifier). The result isn't an unending jumble of numbers but a well-organised parade of Samples - regular and rapid measurements of the incoming analogue waveform. Each sample is a number representing the amplitude of the signal's waveform at a point in time. Playing back digital audio is essentially "joining up the dots" to remake the waveform, and that's what a Digital to Analogue converter is for. So the process is symmetrical, bookended by an Analogue to Digital converter at one end and a Digital to Analogue converter at the other. In between, while your media is all-digital, is where digital signal processing takes place.
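
To make that concrete, here's a minimal sketch in Python of what sampling produces: a sine wave (standing in for a microphone signal) reduced to a list of numbers. The sample rate and frequency are chosen purely for illustration.

```python
import math

SAMPLE_RATE = 48_000   # samples per second - a common professional rate
FREQUENCY = 440.0      # a 440 Hz tone, standing in for a microphone signal

# The first ten samples of a sine wave: each is just a number recording
# the waveform's amplitude at one instant in time.
samples = [math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
           for n in range(10)]
print(samples)   # a well-organised parade of numbers, ready for processing
```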


Keyboards such as this are reliant on DSP. Image: Shutterstock.

Real-time processing

It's hard, at first, to see how you can write a software program that processes a real-time "signal". But there's a single principle right at the heart of digital signal processing: you have to be able to process each successive sample before the next one arrives. "Process" here means "run a piece of software" in the gap between the arrival of one sample and the next. So DSP software has to run fast and efficiently: it has to be pared back to the bone and optimised as far as possible.
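
As a rough illustration of that time budget: at 48 kHz, each sample allows about 21 microseconds of processing. Here's a toy Python sketch of the per-sample loop; process_sample is a hypothetical stand-in, and a real implementation would be optimised C or assembly driven by hardware.

```python
SAMPLE_RATE = 48_000
BUDGET = 1 / SAMPLE_RATE   # ~20.8 microseconds of processing time per sample

def process_sample(x):
    # A hypothetical stand-in for whatever the effect does - here, a simple
    # gain change. Whatever it is, it must finish within BUDGET seconds.
    return x * 0.5

# Conceptually, real-time DSP is just this loop, repeated forever, with the
# hard constraint that each pass completes before the next sample arrives.
incoming = [0.0, 0.25, 0.5, 0.25, 0.0]   # pretend these arrive one at a time
outgoing = [process_sample(x) for x in incoming]
print(outgoing)
```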

Chips called Digital Signal Processors differ from CPUs - general-purpose processors - in that they're finely tuned for signal processing. That's a complex set of characteristics, but one of the most important is that they're very fast at the most basic mathematical operation in DSP: "multiply-add". Reassuringly for those averse to mathematics, it means exactly what it says: take a number, multiply it by another, and add the result to a running total (an "accumulator"). It's the basis of many essential DSP operations, and DSP chips are designed to carry out multiply-adds with extreme efficiency.
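
Here's what multiply-add looks like in code - a toy Python sketch with invented numbers, but the accumulate pattern is the real thing:

```python
# Multiply-add: multiply a sample by a coefficient, add the result to a
# running total (the accumulator). The numbers here are invented.
samples      = [0.1, 0.4, -0.2, 0.3]
coefficients = [0.5, 0.25, 0.25, 0.5]

accumulator = 0.0
for s, c in zip(samples, coefficients):
    accumulator += s * c   # one multiply-add per step
print(accumulator)
```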

Let's have a look at some common DSP functions.

Volume

This is probably the most straightforward DSP function. Essentially, you take the current sample, multiply it by a number generated by the position of the volume control knob (or a mixing console's fader), and that's it! Remember that if your sample rate is 48 kHz, you'll have to do this every 48,000th of a second without missing a single calculation.
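
A sketch of the idea in Python, with invented sample values:

```python
def apply_volume(sample, gain):
    # One multiply per sample: the gain comes from the fader position,
    # e.g. 0.0 for silence, 1.0 for unity gain.
    return sample * gain

# At 48 kHz, this multiply has to happen 48,000 times every second.
samples = [0.2, -0.5, 0.8, -0.1]
print([apply_volume(s, 0.5) for s in samples])   # everything at half level
```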

Delay

Delay is fundamental to a lot of audio effects, and it's pretty straightforward once you get the hang of the idea that you have to store the current sample for the length of the delay and then play it out again. If the delay is there to synchronise incoming audio with a video signal, the process is simply to store as many samples as make up the desired delay and then play them out again, delayed! For echo-type effects, just add the delayed signal's samples to whatever the "current" samples are. For multiple echoes, do this several times at different intervals, reducing the volume of the delayed samples each time they're fed back into the signal chain.
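
Here's a toy Python sketch of a feedback echo along those lines. The delay length and feedback amount are invented, and absurdly short, just so the printout is readable:

```python
DELAY_SAMPLES = 3   # absurdly short, purely so the printout is readable;
                    # a 250 ms echo at 48 kHz would store 12,000 samples
FEEDBACK = 0.5      # each repeat comes back at half the level

def echo(dry):
    buffer = [0.0] * DELAY_SAMPLES   # holds the stored (delayed) samples
    out = []
    for i, x in enumerate(dry):
        delayed = buffer[i % DELAY_SAMPLES]       # stored 3 samples ago
        y = x + delayed                           # add the echo to the current sample
        buffer[i % DELAY_SAMPLES] = y * FEEDBACK  # store it for the next repeat
        out.append(y)
    return out

# A single impulse goes in; echoes come back every 3 samples, halving each time.
print(echo([1.0] + [0.0] * 9))
# [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```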

Reverb

Reverb is closely related to delay and indeed depends on it. You could define reverberation as what happens when there are multiple or complex delays that add together to give what sounds like a continuous "tail" of sound from the original event. The most obvious example is a cathedral, where there can be very long reverberation times. To achieve this in DSP, it's primarily a process of "designing" the Reverb with multiple delay "taps" and probably inserting some selective EQ to make the "tail" sound more natural.
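
A crude sketch of the multi-tap idea in Python - the tap times and gains are invented, and a real reverb would add filtering and feedback on top:

```python
# Each tap is (delay in samples, gain) - values invented for illustration.
TAPS = [(149, 0.6), (211, 0.4), (293, 0.25), (401, 0.15)]

def reverb(dry):
    out = list(dry) + [0.0] * max(delay for delay, _ in TAPS)
    for i, x in enumerate(dry):
        for delay, gain in TAPS:
            out[i + delay] += x * gain   # a scaled, delayed copy per tap
    return out

tail = reverb([1.0] + [0.0] * 10)         # one impulse in...
print([round(v, 2) for v in tail if v])   # ...a decaying "tail" of echoes out
```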

In practice, Reverb is as much an art as a science, and there are no simple formulas. Companies that produce high-quality reverb units probably have all kinds of secrets they don't want to divulge - because that's the essence of their products. To sum up: Reverb is simple to understand and challenging to do well. But when it is done well, the results can be spectacularly good.

EQ

EQ in DSP involves some tricky concepts. In essence, it too relies on delays - very short ones, short enough to affect specific frequencies - combined with addition, subtraction and multiplication. There are two fundamental types of DSP filter: Finite Impulse Response (FIR) and Infinite Impulse Response (IIR). FIR filters have a linear phase response (i.e. a fixed delay for all frequencies) but are more demanding to calculate and tend to have higher latency. IIR filters are cheaper to compute but have a non-linear phase response - although, curiously, that can make it easier to mimic analogue filters, which share that characteristic.
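
Here are toy Python versions of both, fed a single impulse so you can see the "finite" and "infinite" responses side by side. The coefficients are invented for illustration:

```python
# Two toy low-pass filters. The FIR averages recent samples with fixed
# coefficients; the IIR feeds a fraction of its own previous output back
# in, so a single impulse decays forever (hence "infinite").
def fir(signal, coeffs=(0.25, 0.5, 0.25)):
    padded = [0.0] * (len(coeffs) - 1) + list(signal)
    return [sum(c * padded[i + j] for j, c in enumerate(reversed(coeffs)))
            for i in range(len(signal))]

def iir(signal, alpha=0.5):                  # a one-pole smoother
    y, out = 0.0, []
    for x in signal:
        y = alpha * x + (1 - alpha) * y      # output depends on previous output
        out.append(y)
    return out

impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
print(fir(impulse))   # [0.25, 0.5, 0.25, 0.0, 0.0] - the response ends (finite)
print(iir(impulse))   # [0.5, 0.25, 0.125, ...]     - the response decays forever
```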

Mixing

What happens when you mix signals together? You add them to each other. The digital version of this is, literally, adding the values of each incoming channel's samples together. Before you do this, you can change the level of a channel, EQ it and possibly add other processes like compression.

One thing to watch for is that adding samples together produces bigger numbers. So if you're mixing 16-bit samples, your mix bus needs to be wider than 16 bits, or you'll get clipping. Most digital mixing consoles use 32-bit floating-point processing internally, which gives you almost as much dynamic range as the Big Bang.
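
A quick Python illustration of both points, with invented sample values:

```python
# Mixing really is addition - and sums can overflow the original range.
channel_1 = [0.6, -0.3, 0.8]
channel_2 = [0.7,  0.2, 0.5]

mix = [a + b for a, b in zip(channel_1, channel_2)]
print(mix)   # roughly [1.3, -0.1, 1.3]: two samples now exceed full scale (1.0)

# On a fixed-width integer bus, those samples would clip, like this;
# a wider floating-point bus simply carries the oversized values.
print([max(-1.0, min(1.0, s)) for s in mix])
```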

The state of DSP today

It will be no surprise that today's processors are thousands of times more potent than the earliest DSP chips. Instead of mixing eight or perhaps sixteen channels, today's DSPs can handle tens of thousands. Other, less specialised but still extremely fast processors are very capable of digital signal processing too. Perhaps the star performers amongst non-specialist chips are FPGAs: vast arrays of blank logic gates that can be configured at boot-up to behave like sophisticated custom-designed processors. It's often said that they can run software at hardware speeds. They're expensive, but they're unbeatable for low-volume, specialist purposes.

Meanwhile, CPUs, the general-purpose processors that power most desktop and laptop computers, have their own DSP talents. For the last two decades, CPUs have included DSP-friendly features - such as SIMD instruction sets - that give them surprisingly powerful signal-processing capabilities. Anyone with a mid-level laptop can now record, edit and mix music with all the tools of a high-end recording studio (except microphones!) available as software packages.

Beyond the demands of professional and consumer audio, DSP has reached extraordinary heights. Perhaps the most thought-provoking example is SDR: Software Defined Radio. It's now possible to perform DSP not just at audio frequencies but at radio frequencies. With simple hardware, you can feed a slice of the radio spectrum into a DSP program, and it will demodulate and decode the signals into audio. The software replaces almost all of the components inside a traditional radio receiver.
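
To give a flavour of the idea, here's a toy Python sketch of AM demodulation - envelope detection done entirely in software, on a synthesised carrier. Real SDR works on I/Q samples from radio hardware and is far more sophisticated; every number here is invented for illustration:

```python
import math

SAMPLE_RATE = 240_000   # fast enough to represent the (toy) carrier
CARRIER_HZ = 24_000     # an invented, unrealistically low carrier frequency
AUDIO_HZ = 1_000        # the "programme": a plain 1 kHz tone

n = range(2000)
audio = [math.sin(2 * math.pi * AUDIO_HZ * i / SAMPLE_RATE) for i in n]
# Amplitude-modulate the carrier with the audio, as a broadcaster would.
rf = [(1 + 0.5 * a) * math.sin(2 * math.pi * CARRIER_HZ * i / SAMPLE_RATE)
      for i, a in zip(n, audio)]

# Envelope detection in software: rectify, then low-pass with a crude
# moving average - the job a diode and capacitor once did in hardware.
rectified = [abs(x) for x in rf]
WINDOW = 10
recovered = [sum(rectified[i:i + WINDOW]) / WINDOW
             for i in range(len(rectified) - WINDOW)]
print(recovered[:5])   # the demodulated audio, ready for a DAC
```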

DSP will continue to evolve, boosted by increasingly powerful hardware and unbelievably clever software. No doubt AI will be part of the mix, as it already is for tasks like isolating vocal tracks - a process long thought impossible: un-mixing a mix.

Digital Signal Processing is, of course, massively important to video too. Many of the basics are the same, with some crucial differences. But if you want to understand DSP for video, audio is still an excellent place to start.

The best thing about DSP is, perhaps, that we never have to think about it. For most of us, it's invisible. But if we didn't have it, our professional and leisure lives would be radically different.
