Is the end of compression in sight?

Written by Phil Rhodes


Compression is, as every schoolchild knows, bad. Even so, there are more reasons than the obvious ones why it can be a questionable idea to involve a lot of mathematics in the storage of images.

With the very best equipment – such as Sony's HDCAM-SR tape format or SR Memory flash system – the recorded noise due to compression can be considerably less than the noise in the camera. In these systems, compression artifacts can be as trivial as errors in the least significant bit or two of any one pixel. This is technology to which any desktop computer has access, too; certain varieties of SR use compression based on the MPEG-4 standard, as does Panasonic's AVC Ultra spec, announced at NAB last year. Beyond this, it's actually possible to compress image data with absolutely no loss of quality whatsoever. Techniques such as Huffman coding, an entropy encoding algorithm developed in 1952 by computer scientist David A. Huffman, can reduce the space required to store information by a factor of 1.5 or perhaps 2, and yet recover the original data precisely. A complete description of the information-theory witchcraft required to achieve this is beyond the scope of this article, but it has certainly been implemented as a video codec called HuffYUV, which operates exactly as advertised.
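To make the idea concrete, here is a minimal, illustrative Huffman coder in Python. This is a teaching sketch with function names of my own invention, not the HuffYUV codec itself, but it demonstrates the essential property: frequent symbols get short codes, rare symbols get long ones, and decoding recovers the input bit for bit.

```python
# Minimal Huffman coding sketch: build an optimal prefix code from
# symbol frequencies, encode to a bit string, and decode it losslessly.
import heapq
from collections import Counter

def build_codes(data: bytes) -> dict:
    # Priority queue of (frequency, tiebreaker, node); leaves are symbols.
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # merge the two rarest nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: assign the code
            codes[node] = prefix or "0"      # single-symbol edge case
    walk(heap[0][2], "")
    return codes

def encode(data: bytes, codes: dict) -> str:
    return "".join(codes[b] for b in data)

def decode(bits: str, codes: dict) -> bytes:
    inverse = {v: k for k, v in codes.items()}
    out, current = [], ""
    for bit in bits:                         # prefix codes need no delimiters
        current += bit
        if current in inverse:
            out.append(inverse[current])
            current = ""
    return bytes(out)

data = b"AAAAABBBCCD"                        # skewed frequencies compress well
codes = build_codes(data)
bits = encode(data, codes)
assert decode(bits, codes) == data           # perfectly lossless
print(len(data) * 8, "->", len(bits), "bits")
```

Note that the compression comes entirely from the skewed symbol statistics; on data where every symbol is equally likely, a Huffman code buys almost nothing, which is part of why real video codecs combine entropy coding with other stages.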

Lossless is possible

The work required to perform Huffman encoding on the huge data sets of video footage makes it impractical for use in most cameras, and many cameras require more reduction in bitrate than it can provide. Certain varieties of H.264 – not the varieties commonly used in cameras – use arithmetic coding, another lossless compression technique, as part of their suite of technologies. Arithmetic coding also takes a lot of work to encode (though not much to decode), and in any case it is not used as the sole mechanism of compression in H.264.
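For the curious, a toy arithmetic coder can be sketched in a few lines using exact rational arithmetic. Be warned that this is purely illustrative: the CABAC coder in H.264 is a far more sophisticated binary, adaptive, integer-renormalised design, and the function names here are mine. The core idea survives, though: the whole message is narrowed down to a single number inside a shrinking interval.

```python
# Toy arithmetic coding sketch using exact fractions: each symbol
# narrows the interval [low, high) in proportion to its probability,
# and any number inside the final interval identifies the message.
from fractions import Fraction
from collections import Counter

def intervals(freqs, total):
    # Assign each symbol a sub-interval of [0, 1) sized by its frequency.
    lo, table = Fraction(0), {}
    for sym, f in freqs.items():
        table[sym] = (lo, lo + Fraction(f, total))
        lo += Fraction(f, total)
    return table

def ac_encode(msg: str):
    freqs = Counter(msg)
    table = intervals(freqs, len(msg))
    low, high = Fraction(0), Fraction(1)
    for sym in msg:
        s_lo, s_hi = table[sym]
        span = high - low
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2, freqs           # any value inside the interval

def ac_decode(value, freqs, length: int) -> str:
    table = intervals(freqs, length)
    out = []
    for _ in range(length):
        for sym, (s_lo, s_hi) in table.items():
            if s_lo <= value < s_hi:         # which sub-interval holds value?
                out.append(sym)
                value = (value - s_lo) / (s_hi - s_lo)  # rescale and repeat
                break
    return "".join(out)

msg = "ABRACADABRA"
value, freqs = ac_encode(msg)
assert ac_decode(value, freqs, len(msg)) == msg   # exact round trip
```

The exact fractions grow without bound as the message lengthens, which is precisely the kind of cost a production coder avoids with fixed-precision integer renormalisation, and precisely the kind of work the article refers to.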

The thing is, even if lossless compression techniques were widely applied in cameras, which they aren't, and even if there were absolutely no concerns over image quality with lossy codecs, which there are, the deeper problem would remain: the problem of compatibility.


DPX: Old but good

Possibly the oldest widely-used format for the storage of high-resolution motion picture material is the DPX sequence: a stack of individual frames, each stored in a relatively simple wrapper describing the frame size and other basic metadata. Most people are familiar with this approach and it is widely supported (although Adobe Premiere only added DPX sequence support relatively recently, and its performance could be better). Footage stored as a DPX sequence is readable now, has been readable for decades, and is likely to remain readable for many decades to come. Even if SMPTE standard 268M-2003, which describes DPX files, became unavailable, any competent computer scientist with a basic understanding of digital imaging could, in reasonable time, work out how to reconstruct the image stored in such a file simply by examining the file itself.
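To illustrate just how simple that wrapper is, here is a sketch of a DPX header parser, assuming the SMPTE 268M field layout (the `SDPX`/`XPDS` magic number at byte 0, the image data offset at byte 4, and the frame dimensions at bytes 772 and 776). The synthetic header built at the bottom stands in for a real file; the parser itself is my own minimal illustration, not production code.

```python
# Sketch of reading the basics of a DPX header, assuming the SMPTE 268M
# offsets: magic at byte 0, image data offset at byte 4, and pixels-per-
# line / lines-per-element at bytes 772 / 776.
import struct

def parse_dpx_header(header: bytes) -> dict:
    magic = header[0:4]
    if magic == b"SDPX":
        endian = ">"                         # big-endian file
    elif magic == b"XPDS":
        endian = "<"                         # byte-swapped (little-endian)
    else:
        raise ValueError("not a DPX file")
    data_offset, = struct.unpack_from(endian + "I", header, 4)
    width, lines = struct.unpack_from(endian + "II", header, 772)
    return {"endian": endian, "data_offset": data_offset,
            "width": width, "height": lines}

# Build a synthetic 780-byte header to exercise the parser: big-endian
# magic, image data starting at byte 8192, and a 1920x1080 frame.
header = bytearray(780)
header[0:4] = b"SDPX"
struct.pack_into(">I", header, 4, 8192)
struct.pack_into(">II", header, 772, 1920, 1080)
info = parse_dpx_header(bytes(header))
print(info)
```

A real reader would go on to interpret the bit depth, packing and transfer characteristic fields before unpacking pixels, but nothing in the file demands heavyweight mathematics: everything is fixed-offset fields followed by raster data, which is exactly why the format is so durable.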

Difficult situation

This is not so with most compressed formats, which require advanced mathematical treatment and considerable prerequisite data to be viewable. This potentially creates a difficult situation with regard to the availability of archive material. Historically, important material was often shot on 35mm film, and anyone could build a device to view 35mm film because its approach to storing moving images is immediately obvious on examining the media. By contrast, we already face difficulties accessing early video material, and the plethora of digital image formats and storage devices creates a considerable problem.

But it isn't just about the archive, important as that is from a cultural-history point of view. It creates real restrictions right now on how people build workflows. When manufacturers use proprietary video formats, even if only to gain access to desirable compression techniques, we inevitably face restrictions on the software we can use to handle them. Software engineers must spend time implementing new formats instead of working on other features of nonlinear editing and effects software. Postproduction people must spend time on transcoding and conversion. Were the formats simpler, they would be easier to support, and much of this time-consuming make-work would become unnecessary. The problem does fade over time, as formats become more widely supported, but it would be better for it not to exist in the first place; to some extent it is a matter of commercial imperative.

One day


Ultimately, it's important to realise that compression algorithms, no matter how clever, do nothing other than make storage easier. OK: making storage easier is a very big deal, and it has made flash recording first possible, then easy. But beyond that, compression consumes ferociously expensive CPU time, takes up expensive silicon space on circuit boards, drains heavy and expensive batteries, introduces latency, restricts compatibility and, yes, saps quality. And with flash storage getting bigger and cheaper incredibly quickly, it's reasonable to hope that compressed video formats will be consigned to history as quickly as possible.
