The limits of compression and what they mean for cameras

Written by David Shapton

Jockeys graphic by shutterstock.com

Compression and how it impacts camera development and pricing.

Someone recently commented on an industry expert's Facebook page that he had noticed a 'fault' with a certain new camera: compression artefacts were visible in certain material. The expert replied that this was simply because the material was highly compressed (by a factor of thirty compared to the original) and that it was an inevitable consequence of reducing the volume of data from a camera's sensor so that it could fit onto a sensibly sized flash storage device.

What is good enough?

This discussion highlights an almost philosophical question that crops up when you're designing equipment: "how good do we have to make it at this price?" It's a process well known in the FMCG (Fast Moving Consumer Goods) business, where it's called 'value engineering'. A small food supplier might offer a product to a supermarket, which will initially sell it in its original form. Then, if it does well, the supermarket will try to 'value engineer' it: essentially, to see whether it can be made more cheaply.

This matters in food retail, because the margins are so small. Large volumes mean that if you can produce a product for less, you can make a LOT more money from it.

It's not quite the same with higher value consumer goods, where often the problem is not the margin, but how to distinguish your own products from each other. And a prime way to do this is to limit the quality of the output.

Now, it's important to note that this doesn't mean you can typically take two cameras in the same range, separated by a few thousand pounds, and just turn off or limit certain features to create a new product, although I'm sure this does happen sometimes. It's more a case of "if we restrict the camera's ability to do 'X' or 'Y', will it still be fit for the purpose for which we're marketing it, and might it save us money in making it at the same time?"

This is why you sometimes see 8-bit cameras that have bigger relatives with 10-bit output. Accommodating a ten bit data path with wider buses and interconnects, as well as handling the additional data, can cause step changes in the cost of processing and storage technology - far more than the addition of a mere two extra bits might suggest.
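To see why two extra bits are a bigger deal than they sound, here's a rough back-of-envelope sketch. The resolution, frame rate and full-RGB sampling are illustrative assumptions for the example, not the spec of any particular camera:

```python
# Rough, illustrative data-rate arithmetic for an uncompressed video path.
# 4K/25p with three full-resolution colour samples per pixel is an
# assumption for the sketch, not a real camera's specification.

def raw_data_rate_gbps(width, height, fps, bits_per_sample, samples_per_pixel=3):
    """Uncompressed data rate in gigabits per second."""
    bits_per_frame = width * height * samples_per_pixel * bits_per_sample
    return bits_per_frame * fps / 1e9

rate_8bit = raw_data_rate_gbps(3840, 2160, 25, 8)    # roughly 5.0 Gbps
rate_10bit = raw_data_rate_gbps(3840, 2160, 25, 10)  # roughly 6.2 Gbps

print(f"8-bit:  {rate_8bit:.1f} Gbps")
print(f"10-bit: {rate_10bit:.1f} Gbps")
```

The raw arithmetic says only 25% more data, but buses, memories and storage are sized in byte-wide steps: 8-bit samples pack neatly into bytes, while 10-bit values often end up handled as 16-bit words, which is the kind of step change in cost described above.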

Acceptable compression...

Compression is, of course, one of the biggest factors in determining the architecture of a camera. All footage that comes from a sensor's electronics is uncompressed, and this is largely what's available from a camera's real-time uncompressed outputs, such as SDI and HDMI. But to record this high-volume data stream internally means compromises. For cheaper cameras, it means using slower, less expensive storage. For this reason (and to get reasonable recording times), it's normal to use pretty heavy compression, sometimes surprisingly severe. It's not unusual to compress video by a factor of thirty or even fifty, even in professional cameras. That means that when you play it back, you're watching a picture that's recreated using only a thirtieth of the original data.
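As a worked illustration of what a 30:1 ratio buys you in recording time, here's a quick sketch. The card size and the roughly 5 Gbps uncompressed rate are assumptions for the example:

```python
# Illustrative recording-time arithmetic. The 128 GB card and the
# ~5 Gbps uncompressed rate (roughly 4K/25p, 8-bit) are assumptions.

def recording_minutes(card_gb, data_rate_gbps, compression_ratio):
    """Minutes of footage a card holds at a given compression ratio.
    card_gb is in gigabytes; data_rate_gbps is in gigabits per second."""
    compressed_gbps = data_rate_gbps / compression_ratio
    card_gigabits = card_gb * 8  # gigabytes -> gigabits
    return card_gigabits / compressed_gbps / 60

uncompressed = recording_minutes(128, 5.0, 1)   # about 3.4 minutes
at_30_to_1 = recording_minutes(128, 5.0, 30)    # about 102 minutes
```

Under these assumptions, the same card goes from holding a few minutes of uncompressed footage to well over an hour and a half at 30:1, which is why heavy compression is so hard to avoid in internal recording.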

Now, it's important to understand here that you get more than just a thirtieth of the picture: that's the whole point about compression. One thirtieth of the picture would look terrible, as you'd expect. But a good compression system only throws away the stuff that you wouldn't miss anyway. That's the theory. In practice, the more severe the compression, the worse, of course, the picture. But there's an important thing to say here: if the picture looks perfect, you can probably compress it more. It's a bit like when you're learning touch typing: if you're not making mistakes, you can go faster.
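A toy way to see the "throw away what you wouldn't miss" idea: quantising 8-bit samples down to 4 bits halves the data, yet every sample is still reconstructed, just less precisely. This is a deliberately crude stand-in for what real codecs do far more cleverly:

```python
# Toy illustration: lossy compression discards precision, not pixels.
# Real codecs are far more sophisticated; this only shows the principle.

def quantise(samples, keep_bits):
    """Map 8-bit samples to smaller codes by dropping low-order detail."""
    step = 1 << (8 - keep_bits)
    return [s // step for s in samples]

def reconstruct(codes, keep_bits):
    """Rebuild approximate 8-bit samples from the smaller codes."""
    step = 1 << (8 - keep_bits)
    return [c * step + step // 2 for c in codes]

original = [12, 13, 14, 200, 201, 203]
codes = quantise(original, 4)
approx = reconstruct(codes, 4)
# Every sample comes back, and the overall shape of the signal survives;
# only fine differences between neighbouring values are lost.
```

Push the quantisation too far, though, and those "fine differences" start to include things the eye does notice, which is exactly the artefact threshold the paragraph above describes.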

It's all a question of deciding where the line is between an acceptable and an unacceptable picture; this will vary vastly depending on the material being compressed.

It's well known that some content is easier to compress than other material. Animations, for example, have relatively little movement and, usually, simpler backgrounds, which means that you can store them accurately with relatively little data. Sports events, on the other hand, can have fast-moving action and complex backgrounds (like the crowd in a football stadium). So does this mean that you should set your quality threshold for the worst possible case? That would mean that you never see artefacts. But what if the worst possible case is something that you will almost never encounter? What if you're a wedding photographer and you never track fast-moving objects against complex backgrounds (as you would at a horse race, for example)?

Set this quality threshold too high and the camera could be too expensive for its intended market. Set it too low and you'll have complaints from the users. Set it in the right place and perhaps a few people who are using the camera in unexpected ways will see problems, but the vast majority will think it's a good camera for a good price.


