It looks like we spoke too soon back in February while bemoaning the lack of a widely-accepted replacement for JPEG, because the fine folks at jpeg.org were even then talking about exactly that. The final call for proposals went out in the middle of last year, with publication of the new standard intended before the end of 2019, and we’re indebted to the estimable Jarek Duda for the heads-up.
Or, to put it another way, what they’re talking about is another replacement for JPEG. This is another one of those situations where it’s quite easy to sit back and recognise that there are all kinds of ways to compress an image and store it in a file. The key is that new revisions are made in a way that can be adopted gradually over time, without disrupting existing working practices. Do that and your new standard might actually see regular use.
JPEG XL appears to have been developed with that very much in mind, but it was also designed to offer three times the compression performance – the same picture quality in one-third the data, presumably down to some reasonable minimum – so there’s a compromise to strike, and achieving it demands some new mathematics. It retains the option of the discrete cosine transform, the process behind everything from the original JPEG through DV, HDCAM and ProRes, but it adds Haar transforms, a specific type of wavelet transform not totally dissimilar to those used in JPEG 2000 and other wavelet codecs.
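To give a flavour of what a Haar transform actually does, here’s a minimal single-level 1-D sketch: it replaces each pair of samples with an average (the coarse image) and a difference (the detail), which is exactly the kind of decomposition a wavelet codec then quantises. This is illustration only – JPEG XL’s actual integer transforms differ in detail.

```python
def haar_forward(signal):
    """Single-level Haar: split an even-length sequence into averages and differences."""
    pairs = list(zip(signal[::2], signal[1::2]))
    averages = [(a + b) / 2 for a, b in pairs]   # coarse, half-resolution signal
    details = [(a - b) / 2 for a, b in pairs]    # detail coefficients
    return averages, details

def haar_inverse(averages, details):
    """Reconstruct the original sequence exactly from averages and differences."""
    out = []
    for avg, det in zip(averages, details):
        out.extend([avg + det, avg - det])
    return out
```

Smooth regions produce near-zero detail coefficients, which compress very well – and because the inverse is exact, the transform itself loses nothing; any loss comes only from the quantisation stage that follows.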
There’s a laundry list of other compression techniques involved (multi-resolution encoding, adaptive quantisation), but what’s important is not specifically what’s on offer but the way it’s been implemented. The JPEG committee itself seems to be keenly aware of the standards-proliferation problem and has clearly gone to some trouble to make the transition as easy as possible. It’s possible to turn an existing JPEG into a JPEG XL and back again without any loss at all, and the JPEG XL will still enjoy some size advantage over the original JPEG.
Serving different versions
We aren’t expecting everyone to diligently convert their photo collections to JPEG XL, though some will – especially where the collection lives out of the photographer’s hands on a Google Photos server. Mainly, it’s about being able to take a single file that’s part of a website, reduce the disk overhead, and serve that file as either JPEG or JPEG XL as the client device demands. Consumer devices are updated faster now that phones last only a couple of years and download updates automatically even within that time, but backward-compatibility is still important on a system as varied as the internet.
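The serve-the-right-format idea is ordinary HTTP content negotiation: the browser lists the image types it understands in the Accept header, and the server picks accordingly. A hypothetical sketch (the helper name is ours, and the header parsing is deliberately naive – no q-value weighting):

```python
def pick_image_type(accept_header):
    """Return the MIME type to serve, preferring JPEG XL when the client advertises it."""
    # Split "image/jxl,image/*;q=0.8" into bare media types, dropping parameters.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "image/jxl" in accepted:
        return "image/jxl"
    return "image/jpeg"  # safe fallback every client can decode
```

A client that doesn’t know about JPEG XL simply never asks for it, which is why the gradual-adoption story works: the old format stays on the menu for as long as anyone needs it.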
JPEG XL does other things, too – it supports higher bit depth images, for a start. The JPEG Extended paperwork did talk about 12-bit JPEGs, but the capability is so rarely implemented that it may as well not have been part of the standard; in XL, it’s there from the outset. There is also support for an alpha channel, progressive coding and image bursts, none of which is technologically the end of invention, but all of which tick boxes for modern applications, much as is the case with HEIC. No standard in 2019 would be complete without some attention to multi-threaded encoding, of course, and JPEG’s submission standards required “an explanation of the achievable parallelism of the algorithmic blocks for both the encoder and the decoder.” Pity.
As with anything, until this starts to show up in operating systems and hardware devices, it’s entirely a theoretical concern. The enthusiasm with which manufacturers start shipping things like this is greatly affected by patent concerns and the documents refer to a “royalty-free goal”, though that doesn’t stop someone trying it on with a patent claim if they feel they can make it stick.
In the end, it’s a pretty simple situation. Is JPEG old? Yes. Can we do better? Yes. Will this one be widely adopted? Comment below.
Title image: Shutterstock - Vova Shevchuk