
Higher bitrates don't always mean a better picture: here's why


Replay: Broadcast standards are defined to ensure a minimum picture quality through the broadcast chain. But do the minimum bitrate requirements actually stack up in the real world? Read on to find out.

Are broadcast standards all they are cracked up to be? Image: Shutterstock.

Cameras have far too many variables to be meaningfully described by a single number. Naturally, this doesn't stop people trying to do so. For a long time – since the days of entirely analogue cameras – performance was boiled down into resolution, which was far from the whole story even then. Now, that measure has lost more or less all its meaning. It's possible to make sensors of essentially any desired resolution, trading off size, sensitivity and sharpness into something that's almost a zero-sum game by the time that image hits the viewer's screen.

So, broadcasters are grabbing for other numbers by which to judge a camera, and one of the most easily grabbed is how much data the camera records. As evidence of this, notice that several cameras released in recent years have implemented 50 megabit-per-second recording options for HD pictures, probably because several broadcasters mandated that figure as a rather arbitrary minimum.

There are, however, more than a few problems with this.

Not all compressors are equal

First and most obviously, not all cameras use the same encoding mathematics, and not all of those different encoders achieve the same picture quality for a given bitrate. At least some of those 50-megabit cameras use MPEG-2 compression, which dates back to the mid-90s. There are so many variables that it's difficult to compare compression techniques, but the rule of thumb is that the more modern H.264, which is the basis for a lot of camera and distribution encoding, achieves the same picture quality as MPEG-2 at around half the bitrate. It's effectively twice as good, for the penalty of more complex electronics. The even more recent HEVC (or H.265) is about twice as good again.
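To put rough numbers on that rule of thumb, here is a minimal sketch in Python, purely illustrative, which takes the 50 megabit-per-second MPEG-2 figure as a baseline and applies the approximate halving per codec generation. The 2:1 and 4:1 factors are the article's rough figures, not measurements.

```python
# Rough rule-of-thumb efficiency relative to MPEG-2: each newer codec needs
# about half the bitrate of the one before it for comparable picture quality.
EFFICIENCY_VS_MPEG2 = {"MPEG-2": 1.0, "H.264": 2.0, "HEVC (H.265)": 4.0}

MPEG2_BITRATE_MBPS = 50.0  # the broadcast minimum discussed above

for codec, factor in EFFICIENCY_VS_MPEG2.items():
    equivalent = MPEG2_BITRATE_MBPS / factor
    print(f"{codec}: roughly {equivalent:.1f} Mbit/s for similar quality")
```

On those very rough terms, 50 Mbit/s of MPEG-2, 25 Mbit/s of H.264 and around 12.5 Mbit/s of HEVC land in the same quality ballpark, which is exactly why a bare bitrate figure tells you so little on its own.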

HEVC isn't yet widely used in cameras, although it is gaining traction, but the principle applies to every encoder out there: some are better than others. We've already established that there's at least a 4:1 range in the efficiency of the different options, so the idea of using bitrate as the ultimate arbiter of quality starts to look pretty silly. And it gets worse, because even if everything else is equal, not every example of a particular encoder does exactly the same job.

Some systems are technically described by the decoder, not the encoder. For instance, any H.264 encoder which produces output the decoder can handle is de facto valid. That's a perfectly reasonable way to express a specification, but H.264 includes a lot of different compression techniques, some of which are optional and not used by every encoder. A simpler encoder might consume less power, run on cheaper, smaller hardware or work at higher resolutions and higher frame rates. The difference is in what sort of image quality that simple encoder can achieve for a given bitrate.
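One way to see the effect of those optional tools is to encode the same clip twice at the same bitrate, once constrained to H.264's simpler Baseline profile (which disallows tools such as B-frames and CABAC entropy coding) and once with the fuller High profile, and then compare each result against the original. The sketch below is an illustration only: it assumes ffmpeg with libx264 is installed, and the filenames and bitrate are hypothetical.

```python
import subprocess

SOURCE = "source.mov"   # hypothetical input clip
BITRATE = "10M"         # identical bitrate for both encodes

def encode(profile, out_name):
    # Same source, same bitrate; only the H.264 profile changes.
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", "libx264", "-profile:v", profile,
        "-b:v", BITRATE, "-an", out_name,
    ], check=True)

def psnr_against_source(encoded_name):
    # ffmpeg's psnr filter compares the encode with the original and prints
    # a summary line containing the average PSNR in its log output.
    result = subprocess.run([
        "ffmpeg", "-i", encoded_name, "-i", SOURCE,
        "-lavfi", "psnr", "-f", "null", "-",
    ], capture_output=True, text=True)
    for line in result.stderr.splitlines():
        if "PSNR" in line:
            return line
    return "no PSNR summary found"

for profile, out_name in [("baseline", "simple_encoder.mp4"), ("high", "full_encoder.mp4")]:
    encode(profile, out_name)
    print(profile, "->", psnr_against_source(out_name))
```

Both files are perfectly valid H.264 and both play in the same decoders; the difference only shows up in how much picture quality each one squeezed out of an identical bitrate.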

How do you effectively measure 'picture quality'?

It is notoriously difficult to measure that difference, because there really isn't a completely reliable way to measure perceived image quality in the first place. The most obvious approach is to compare the image sent to the encoder with the image recovered once it's been decoded again, and mathematically count the changes in the pixels that make up the picture. That gives a simple numeric measure – signal-to-noise ratio, usually quoted as peak signal-to-noise ratio, or PSNR – and it is widely used. The problem is that we're reaching, again, for simple numbers to describe a very complex problem, and it doesn't work very well. Signal-to-noise ratio, as a measure, is notorious for awarding glowing results to compression techniques which don't actually look very good to the human eye.
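For the curious, that measure boils down to something like the sketch below: peak signal-to-noise ratio computed from the mean squared pixel error, assuming for illustration that the original and decoded frames arrive as numpy arrays of 8-bit pixel values.

```python
import numpy as np

def psnr(reference, decoded, peak=255.0):
    # Peak signal-to-noise ratio in dB, derived from the mean squared pixel error.
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")        # bit-identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative only: mild, evenly spread noise scores a high PSNR, even though
# structured artefacts with the same numeric error can look far more objectionable.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (1080, 1920)).astype(np.float64)
noisy = np.clip(frame + rng.normal(0.0, 2.0, frame.shape), 0, 255)
print(f"PSNR: {psnr(frame, noisy):.1f} dB")
```

The number is trivial to compute, which is a big part of why it is so widely quoted; whether a given score corresponds to a picture that actually looks good is another matter entirely.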

There are many other reasons why bitrate isn't a great way to judge picture quality. Some encoders take advantage of the similarity between sequential video frames to achieve better compression, which hugely improves the ratio of picture quality to file size. Increasing frame rates tends to make that sort of compression work even better, because the changes between sequential frames are smaller. Noisier images, shot in lower light, tend to be harder to compress. Higher resolutions are often easier to compress than lower ones. We could talk about this all day.
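A crude way to illustrate the inter-frame point is to build two consecutive 'frames' of a moving scene and see how little is left once one is subtracted from the other, then add per-frame noise and watch the residual grow. The sketch below is a toy model, using synthetic gradient frames and mean absolute difference as a stand-in for the data an encoder would still have to spend; it is not a real encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_frame(shift, noise_sigma=0.0, size=256):
    # A smooth horizontal gradient "scene", shifted by `shift` pixels to mimic
    # motion between frames, plus optional per-frame sensor noise.
    row = np.linspace(0.0, 255.0, size)
    frame = np.tile(np.roll(row, shift), (size, 1))
    return frame + rng.normal(0.0, noise_sigma, frame.shape)

def residual(frame_a, frame_b):
    # Mean absolute difference between frames: a rough stand-in for the data
    # an inter-frame encoder still has to spend after temporal prediction.
    return np.mean(np.abs(frame_b - frame_a))

f0 = synthetic_frame(0)
print("large motion (low frame rate): ", residual(f0, synthetic_frame(8)))
print("small motion (high frame rate):", residual(f0, synthetic_frame(1)))

# Noise is different on every frame, so temporal prediction can't remove it.
print("small motion plus noise:       ", residual(synthetic_frame(0, 10.0),
                                                  synthetic_frame(1, 10.0)))
```

Smaller motion between frames leaves less to encode, and per-frame noise inflates the residual no matter how clever the prediction, which is exactly why frame rate and shooting conditions affect how far a given bitrate stretches.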

No easy answer

In the end, there's no easy answer. It will remain the case that more bitrate is generally better than less, all else being equal. Still, one thing should be clear: demanding a certain minimum bitrate will not guarantee a certain minimum picture quality. Doing so may lead to work being wasted on unnecessarily high bitrates and, at the other extreme, to disappointing results from cameras which fulfil a specification but don't actually work very well. The problems commonly show up when productions need to do blue- or green-screen work or extensive grading, because the software which does the processing looks at pictures in a very different way to the human eye.

The only really reliable approach is to test things out. There is, sadly, still no reliable way to put a number on how good a camera is, but take heart. Cameras have never been better. Pocket-sized, pocket-money options now record more data than the hefty shoulder-mount devices of only ten years ago. The only reason any of this is such a big deal is that standards have become so very high.

Article image courtesy of Shutterstock.

