
RedShark Summer Replay: Is the end of compression in sight?


RedShark is only 10 months old, and our readership is growing all the time. So if you're a new arrival here, you'll have missed some great articles from earlier in the year.

These RedShark articles are too good to waste! So we're re-publishing them one per day for the next two weeks, under the banner "RedShark Summer Replay".

Here's today's Replay:


Is the end of compression in sight?

Compression is, as every schoolchild knows, bad. But quality is not the only issue: there are more reasons than the obvious one why it can be a questionable idea to involve a lot of mathematics in the storage of images.

With the very best equipment – such as Sony's HDCAM-SR tape format or its SR Memory flash system – the noise added by compression can be considerably less than the noise generated in the camera. In these systems, compression artifacts can be as trivial as errors in the least significant bit or two of any one pixel. Nor is this technology out of reach of a desktop computer: certain varieties of SR use compression based on the MPEG-4 standard, as does Panasonic's AVC Ultra spec, announced at NAB last year.

Beyond that, it's actually possible to compress image data with absolutely no loss of quality whatsoever. Techniques such as Huffman coding, an entropy encoding algorithm developed in 1952 by the computer scientist David A. Huffman, can reduce the space required to store information by a factor of 1.5 or perhaps 2, and yet recover the original data precisely. A complete description of the information-theory witchcraft required to achieve this is beyond the scope of this article, but it has certainly been implemented as a video codec, called HuffYUV, which operates exactly as advertised.
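
Huffman coding is easier to demonstrate than to describe. The Python sketch below is purely illustrative (it is not how HuffYUV is implemented, and the sample data is invented for the example): it builds a prefix code from symbol frequencies, encodes, decodes, and confirms that the original bytes come back exactly.

    import heapq
    from collections import Counter

    def build_codes(data: bytes) -> dict:
        """Build a Huffman table mapping each byte value to a bit string."""
        freq = Counter(data)
        # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                       # degenerate case: one distinct symbol
            return {sym: "0" for sym in heap[0][2]}
        counter = len(heap)
        while len(heap) > 1:
            f1, _, t1 = heapq.heappop(heap)      # the two least-frequent subtrees...
            f2, _, t2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in t1.items()}
            merged.update({s: "1" + c for s, c in t2.items()})
            heapq.heappush(heap, (f1 + f2, counter, merged))   # ...become one subtree
            counter += 1
        return heap[0][2]

    def encode(data: bytes, codes: dict) -> str:
        return "".join(codes[b] for b in data)

    def decode(bits: str, codes: dict) -> bytes:
        lookup = {c: s for s, c in codes.items()}
        out, buf = bytearray(), ""
        for bit in bits:
            buf += bit
            if buf in lookup:                    # no code is a prefix of another
                out.append(lookup[buf])
                buf = ""
        return bytes(out)

    data = b"AAAABBBCCD" * 100                   # hypothetical, highly repetitive input
    codes = build_codes(data)
    bits = encode(data, codes)
    assert decode(bits, codes) == data           # lossless round trip
    print(len(bits) / 8, "bytes of code for", len(data), "bytes of input")

Common symbols get short codes and rare symbols get long ones, which is where the 1.5x or 2x saving comes from; the less predictable the data, the smaller the saving, which is one reason noisy video compresses so reluctantly.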

Lossless is possible

The work required to perform Huffman encoding on the huge data sets involved in video makes it impractical for use in most cameras, and many cameras need a bigger reduction in bitrate than it can provide in any case. Certain varieties of h.264 – though not the varieties commonly used in cameras – include arithmetic coding, another lossless technique, in their suite of technologies. Arithmetic coding also takes a lot of work to encode (though not much to decode), and in any case it is not used as the sole mechanism of compression in h.264.
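
For completeness, here's the principle of arithmetic coding in toy form. This Python sketch uses exact fractions, bears no resemblance to the CABAC scheme h.264 actually uses, and would be hopelessly slow on real video; it simply shows the idea of narrowing the whole message down to a single number, then replaying the same narrowing to recover it losslessly.

    from fractions import Fraction
    from collections import Counter

    def build_model(data: bytes) -> dict:
        """Give each byte value a slice of [0, 1) proportional to its frequency."""
        freq = Counter(data)
        total = sum(freq.values())
        model, low = {}, 0
        for sym in sorted(freq):
            model[sym] = (Fraction(low, total), Fraction(low + freq[sym], total))
            low += freq[sym]
        return model

    def encode(data: bytes, model: dict) -> Fraction:
        low, high = Fraction(0), Fraction(1)
        for sym in data:                          # narrow the interval symbol by symbol
            s_low, s_high = model[sym]
            span = high - low
            low, high = low + span * s_low, low + span * s_high
        return (low + high) / 2                   # any number inside the final interval will do

    def decode(code: Fraction, model: dict, length: int) -> bytes:
        out = bytearray()
        low, high = Fraction(0), Fraction(1)
        for _ in range(length):                   # replay the same narrowing
            value = (code - low) / (high - low)
            for sym, (s_low, s_high) in model.items():
                if s_low <= value < s_high:
                    span = high - low
                    low, high = low + span * s_low, low + span * s_high
                    out.append(sym)
                    break
        return bytes(out)

    data = b"compression is, as every schoolchild knows, bad"
    model = build_model(data)
    code = encode(data, model)
    assert decode(code, model, len(data)) == data    # perfect reconstruction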

The thing is, even if lossless compression techniques were widely applied in cameras, which they aren't, and even if there were absolutely no concerns over image quality with lossy codecs, which there are, the deeper problem would remain: the problem of compatibility.

DPX: Old but good

Possibly the oldest widely-used format for the storage of high-resolution motion picture material is the DPX sequence: a plain stack of individual frames, each with its image data stored in a simple wrapper describing the frame size and other basic metadata. Most people are familiar with this approach and it is widely supported (although Adobe Premiere only added DPX sequence support relatively recently, and the performance could be better). Footage stored as a DPX sequence is readable now, has been readable for decades, and is likely to remain readable for many decades to come. Even if SMPTE standard 268M-2003, which describes DPX files, became unavailable, any competent computer scientist with a basic understanding of digital imaging could, in reasonable time, work out how to reconstruct the image stored in such a file simply by examining the file itself.
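
To make the point concrete, here is roughly what that job looks like. This Python sketch reads the basic facts about a DPX frame from its fixed header; the byte offsets are quoted from memory of SMPTE 268M (magic number at byte 0, width at 772, height at 776, bit depth of the first image element at 803), so treat them as illustrative rather than authoritative, and the file name is hypothetical.

    import struct

    def read_dpx_header(path: str) -> dict:
        with open(path, "rb") as f:
            header = f.read(2048)                # the fixed headers fit comfortably in 2 KB
        magic = header[0:4]
        if magic == b"SDPX":
            endian = ">"                         # big-endian file
        elif magic == b"XPDS":
            endian = "<"                         # little-endian (byte-swapped) file
        else:
            raise ValueError("not a DPX file")
        offset_to_data, = struct.unpack(endian + "I", header[4:8])
        width,  = struct.unpack(endian + "I", header[772:776])
        height, = struct.unpack(endian + "I", header[776:780])
        bit_depth = header[803]                  # bit size of the first image element
        return {"offset_to_data": offset_to_data, "width": width,
                "height": height, "bit_depth": bit_depth}

    # print(read_dpx_header("frame_0001.dpx"))   # hypothetical file name

From there, the pixel data is simply rows of packed samples starting at the stated offset: no inter-frame prediction, no entropy decoding, nothing that needs a mathematician to untangle.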

Difficult situation

This is not so with most compressed formats, which require advanced mathematical treatment and a considerable amount of prerequisite data to be viewable. That potentially creates a difficult situation with regard to the availability of archive material. Historically, important material was often shot on 35mm film, and anyone could build a device to view 35mm film because its approach to storing moving images is immediately obvious on examining the medium. Conversely, we are already faced with difficulties accessing early video material, and the plethora of digital image formats and storage devices creates a considerable problem.

But it isn't just about the archive, important as that is from a cultural-history point of view. It creates real restrictions right now on how people build workflows. When manufacturers use proprietary video formats, even if only to gain access to desirable compression techniques, we inevitably face restrictions on the software we can use to handle them. Software engineers must spend time implementing new formats instead of working on other features of nonlinear editing and effects software. Postproduction people must spend time on transcoding and conversion. Were the formats simpler, they would be easier to support, and this time-consuming make-work would become largely unnecessary. The problem does fade with time, as a format becomes more widely supported, but it would be better if it didn't exist in the first place, and it is to some extent a matter of commercial imperative.

One day

Ultimately, it's important to realise that compression algorithms, no matter how clever, do nothing except make storage easier. Granted, making storage easier is a very big deal, and it's what made flash recording first possible, then easy. But beyond that, compression consumes ferociously expensive CPU time, takes up costly silicon space on circuit boards, drains heavy and expensive batteries, introduces latency and delay, restricts compatibility and, yes, saps quality. And with flash storage getting bigger and cheaper at an incredible rate, it's reasonable to hope that compressed video formats will be consigned to history as quickly as possible.

Tags: Technology
