24 Nov 2019

Why can't there just be one, really good, codec?

Written by RedShark


No Adobe Interest

Thus there is no particular technological genius involved, and Adobe could quite trivially come up with some combination of preexisting technologies and stamp it with their logo. This wouldn't help anyone, since it would add another layer of incompatibility to the situation and wouldn't necessarily affect the way in which Premiere can do certain things with certain types of media. That's capably handled by current versions of Premiere in any case, since they offer a level of media compatibility on a single timeline which can still raise eyebrows among those used to other systems. To make a proprietary codec look good, they'd probably have to start disabling things when it wasn't in use, which is clearly ludicrous (although some people have probably done worse in the name of vendor lock-in in the past). But in general, for all these reasons, it's refreshing that Adobe report absolutely no interest in going down the route of developing a branded codec, having realised the evils that lie therein.

So, we can't have uncompressed, and the current approach to proprietary codecs – given that DNxHD is still fundamentally an Avid thing, albeit ostensibly open – is less than ideal. And now I've written myself into the corner of having to suggest a reasonable alternative, but that's good, because there is actually an option out there which could address both issues at once.

A Reasonable Alternative

The video codecs HuffYUV (or its more modern incarnation Lagarith) and Ut Video are both open source, could thus be used by anyone, and provide at least something of an answer to our desire for uncompressed images. Both use clever mathematical techniques to achieve compression without sacrificing image quality, and it's my observation that Ut Video, at least, can provide 4:4:4 8-bit RGB for only about 50% more bitrate than the highest-quality ProRes. While this immediately sounds like witchcraft, it is entirely legitimate under information theory, and people interested in a rigorous mathematical description may wish to peruse the Wikipedia article on Huffman coding and other approaches to the minimum-redundancy storage of data.
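
As a rough illustration of why this isn't witchcraft, here's a minimal sketch in Python (not anything taken from Ut Video or HuffYUV, and the "frame" is an invented byte sequence standing in for real pixel data) which estimates the limit that any lossless coder, Huffman coding included, is working towards: the Shannon entropy of the data's value histogram.

    import math
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        """Shannon entropy of the byte histogram, in bits per byte.

        No lossless coder can average fewer bits than this, but for typical
        footage the histogram is far from flat, so it is usually well below 8.
        """
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # Invented stand-in for a frame: most samples cluster around a few values.
    frame = bytes([0] * 6000 + [1] * 2500 + [2] * 1000 + [255] * 500)
    h = entropy_bits_per_byte(frame)
    print(f"{h:.2f} bits/byte -> roughly {8 / h:.1f}:1 lossless limit")

On real footage the figure depends entirely on the content, which is why lossless codecs, unlike ProRes or DNxHD, can't promise a fixed bitrate.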

Informally, the approach used is to store the most frequently-encountered values in a stream of data as short codes, while using longer codes to represent less frequently-encountered values. A table (organised as a tree using David A. Huffman's algorithm), which maps the codes back to the original values, is generated for each frame. The procedure is therefore lossless in the sense that matters for a video codec: the stored frame can be recovered precisely. Practical implementations tend to apply a more conventional image compression algorithm similar to JPEG first, and then Huffman-encode the error between that compressed image and the original; because those errors tend to be small, there are fewer likely values, and the encoding can work better.
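
Here's a minimal Python sketch of that idea, purely illustrative rather than the actual code inside any of the codecs mentioned, with an invented scan line in place of real pixel data. It builds a per-frame code table using Huffman's algorithm, encodes a run of byte values, and checks that decoding recovers the original exactly:

    import heapq
    from collections import Counter

    def huffman_codes(data: bytes) -> dict:
        """Build the per-frame table mapping each byte value to a bit string."""
        freq = Counter(data)
        # Heap entries: (frequency, tie-breaker, {value: code-so-far}).
        heap = [(n, i, {v: ""}) for i, (v, n) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                      # degenerate: one distinct value
            return {v: "0" for v in heap[0][2]}
        tie = len(heap)
        while len(heap) > 1:
            n0, _, c0 = heapq.heappop(heap)     # the two least frequent groups...
            n1, _, c1 = heapq.heappop(heap)
            merged = {v: "0" + c for v, c in c0.items()}
            merged.update({v: "1" + c for v, c in c1.items()})
            heapq.heappush(heap, (n0 + n1, tie, merged))  # ...merged into one
            tie += 1
        return heap[0][2]

    def encode(data: bytes, codes: dict) -> str:
        return "".join(codes[b] for b in data)

    def decode(bits: str, codes: dict) -> bytes:
        lookup = {c: v for v, c in codes.items()}
        out, cur = bytearray(), ""
        for bit in bits:
            cur += bit
            if cur in lookup:                   # prefix codes decode unambiguously
                out.append(lookup[cur])
                cur = ""
        return bytes(out)

    # Invented scan line: flat picture areas mean one value dominates.
    row = bytes([128] * 200 + [129] * 40 + [130] * 10 + [200] * 6)
    table = huffman_codes(row)
    bits = encode(row, table)
    assert decode(bits, table) == row           # lossless round trip
    print(f"{len(row) * 8} bits raw -> {len(bits)} bits coded")

A real implementation would pack the bits rather than keeping them as a string and, as described above, would usually code prediction errors rather than raw values, which makes the histogram even more sharply peaked and the short codes even more effective.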





Phil Rhodes

Phil Rhodes is a Cinematographer, Technologist, Writer and above all Communicator. Never afraid to speak his mind, and always worth listening to, he's a frequent contributor to RedShark.
