24 Nov 2019

Why can't there just be one, really good, codec?

  • Written by RedShark


Entropy

It is possible to present such algorithms with high-entropy data they cannot compress, or cannot compress very well. Formally, entropy refers to complexity or disorder; in information terms, it measures how unpredictable the data is. It's possible to calculate a value representing the entropy of an image; a picture comprising one completely flat colour would have entropy zero. While it might seem that a real-world image, given noisy sensors, fuzzy lenses and all the other imperfections of photography, would contain a lot of randomness and therefore have quite high entropy, it's not as bad as it might seem. Practical images might contain a lot of sky, or a lot of someone's face, or a lot of grass, and within those areas the probability of high contrast, and therefore of big differences between adjacent pixels, is quite low. It is often claimed that photographic images contain lots of redundant information, and this is, as we can see here, strictly true, although it isn't generally what someone discussing a lossy codec means by the phrase.
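That per-image entropy figure is straightforward to compute from a pixel histogram. Below is a minimal sketch (an illustration of the measurement, not anything a particular codec does) which calculates Shannon entropy in bits per pixel for an 8-bit greyscale image using numpy; dividing the result by 8 gives the normalised zero-to-one scale used in this article.

```python
import numpy as np

def image_entropy(pixels: np.ndarray) -> float:
    """Shannon entropy of an 8-bit image, in bits per pixel."""
    # Histogram of pixel values, converted to a probability per value.
    counts = np.bincount(pixels.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]  # zero-probability values contribute nothing
    return float(-np.sum(p * np.log2(p)))

flat = np.full((1080, 1920), 128, dtype=np.uint8)                # one flat colour
noise = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)  # pure noise

print(image_entropy(flat))   # 0.0 -- nothing to transmit
print(image_entropy(noise))  # very close to the 8-bit maximum of 8.0
```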

A high-entropy image, by comparison, might contain entirely random values, making it difficult to compress using entropy coding techniques because it contains no redundant information; no value is related to any other. An image generated by Photoshop's add-noise filter will have entropy tending toward the maximum, 1 on a normalised scale. Images of this sort are likely to be inflated, rather than compressed, by entropy coding, as the number of distinct values in the input approaches the number of possible values, with no one value much more common than any other. Without the ability to represent large numbers of values with short codes, the compression scheme ceases to work.
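This inflation is easy to demonstrate with a general-purpose compressor. The snippet below uses zlib's DEFLATE, which combines Huffman coding with dictionary matching, so the effect is the same in kind if not in exact degree: a megabyte of one flat value collapses to almost nothing, while a megabyte of random bytes actually grows.

```python
import os
import zlib

size = 1_000_000

flat = bytes(size)        # a megabyte of a single repeated value
noise = os.urandom(size)  # a megabyte of random bytes: maximum entropy

for label, data in (("flat", flat), ("noise", noise)):
    packed = zlib.compress(data, 9)
    print(f"{label}: {len(data):,} -> {len(packed):,} bytes")

# Typical result: the flat buffer shrinks to a tiny fraction of its
# original size, while the noise buffer comes out slightly larger than
# it went in, because the stream overhead cannot be paid for by any
# real compression.
```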

There are also algorithms such as arithmetic coding, optional in H.264 and mandatory in H.265/HEVC, which offer better compression performance than Huffman coding, at the cost of being harder work for the computer.
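The root of that advantage is that a Huffman code must spend a whole number of bits on every symbol, while arithmetic coding can, in effect, spend fractional bits and so approach the theoretical entropy of the source. A back-of-envelope comparison for a heavily skewed three-symbol source (illustrative figures only, not either standard's real bitstream):

```python
import math

# A skewed source: symbol "a" dominates. The best possible Huffman
# code here assigns whole-bit lengths of {a: 1, b: 2, c: 2}.
probs = {"a": 0.90, "b": 0.05, "c": 0.05}
huffman_bits = {"a": 1, "b": 2, "c": 2}

expected_huffman = sum(p * huffman_bits[s] for s, p in probs.items())
entropy = -sum(p * math.log2(p) for p in probs.values())

print(f"Huffman:       {expected_huffman:.3f} bits/symbol")  # 1.100
print(f"Entropy bound: {entropy:.3f} bits/symbol")           # ~0.569

# Arithmetic coding approaches the entropy bound; Huffman cannot spend
# less than one whole bit even on a symbol that occurs 90% of the time.
```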

Implementing Huffman-Style Coding

Many implementations of Huffman-style entropy coding achieve something like 2:1 compression, depending on the source. This would, for instance, be sufficient to record 1080p24 10-bit RGB data to an LTO5 tape in realtime, and it allows Ut Video to achieve the enticing 50%-more-than-ProRes figure stated above, give or take 10-bit precision. To date, improving the performance of lossless video codecs on practical computer hardware has been a priority. On compression, calculating a table or tree which allows the best possible encoding is not trivial, and on decompression, generally a JPEG-style image must be decoded and then have the decoded correction data applied to it to fix its errors. It is probably not currently reasonable to build a battery-powered field recorder using these techniques, at least not without creating custom silicon to do certain aspects of the job. On a workstation, however, things are different.
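To make that table-building step concrete, here is a minimal sketch of the classic Huffman construction, which repeatedly merges the two least frequent subtrees until one tree remains. It illustrates the principle only; it is not Ut Video's, or any shipping codec's, actual implementation.

```python
import heapq
from collections import Counter

def huffman_table(data: bytes) -> dict[int, str]:
    """Build a Huffman code table mapping byte value -> bit string."""
    freq = Counter(data)
    # Heap entries are (frequency, tiebreak, node); a leaf node is a
    # byte value, an internal node is a (left, right) pair.
    heap = [(n, i, sym) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # least frequent subtree
        f2, _, right = heapq.heappop(heap)  # next least frequent
        heapq.heappush(heap, (f1 + f2, tiebreak, (left, right)))
        tiebreak += 1
    codes: dict[int, str] = {}
    def walk(node, prefix: str) -> None:
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"  # degenerate one-symbol input
    walk(heap[0][2], "")
    return codes

data = b"aaaaaaaabbbc"
table = huffman_table(data)
print(table)  # {99: '00', 98: '01', 97: '1'}: 'a' gets the shortest code
bits = sum(len(table[b]) for b in data)
print(bits, "bits versus", len(data) * 8, "uncompressed")  # 16 versus 96
```

A real codec would typically rebuild (or adaptively update) a table like this per frame or per slice and transmit it alongside the coded data, which is part of the computational cost described above.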





Phil Rhodes

Phil Rhodes is a Cinematographer, Technologist, Writer and above all Communicator. Never afraid to speak his mind, and always worth listening to, he's a frequent contributor to RedShark.
