
What do these dragon images mean?


Over the weekend we brought you the first images from the new RED Dragon sensor. We thought they looked sensational, and said so, but before everyone (including us) gets carried away, it's worth injecting a note of caution into the frenzy that surrounds this new sensor.

Note that not a single word of this is meant to criticise the Dragon sensor, or the company that made it. We haven't seen one yet, and, critically, we haven't seen the uncompressed output from it either. 

And that last sentence is the main point of this article, because until you've seen images directly from a camera, and not via YouTube or Vimeo, you haven't really seen images from that camera at all.

We've written about this before, but it's even more important to remember with extremely high resolution sensors: compression, applied several times over, can destroy almost all of the unique goodness in a pristine image.

It's ironic, because we wouldn't be seeing these pictures at all if they weren't compressed, and doubly ironic that this is how most people will see them anyway.

Compression isn't altogether a bad thing, then, because without it we wouldn't have satellite or cable television, or any of the internet streaming services. It's also very important to understand that if you start with a very clean image, compression has an easier job and gives a better end result.

But compression is a really, really bad thing if you're trying to evaluate the quality of images from a camera.


Image Softening

For a start, one of the first techniques in compression is to soften the image. That's right: no matter how sharp and detailed your picture, compression will soften it.

It's easy to understand why. You're probably familiar with the idea of frequency as it applies to sound. Pitch is not exactly the same thing as frequency but, mostly, if the pitch goes up, it's as a result of an increase in frequency.

Detailed information lives in high frequencies. If you don't allow high frequencies to be reproduced, you won't hear the rustling of leaves in a forest, and you won't be able to hear properly what people are saying - all the words will blur together.

With images, frequencies matter spatially rather than in time. If this is hard to grasp, think of a chess board. On an ideal chessboard, there is a sharp transition between the squares. Think about this in terms of rate of change: if you move your finger along one alternating line of white and black squares, you can characterise the experience by the rate at which the colour changes. While you're on a square, it doesn't change at all; then, as you go from white to black, it changes very suddenly (depending on the width of your finger!).

Now, if your finger were the only way to "see" the difference between black and white squares, it wouldn't matter much if the edges were somewhat blurred. But what if you were instead "seeing" the squares with a pin-head? The edges would have to be very much sharper, and to resolve them, you'd need to be able to process higher frequencies.

In order to capture those higher spatial frequencies in a camera, you need more pixels. If you don't need the sharpness, you need fewer.

But if you've captured the chessboard - or anything, for that matter - with a very high-resolution sensor, and you then filter out the high frequencies, you lose all the benefit of the higher resolution image.
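
If you want to see this in action, here's a quick sketch (in Python with NumPy and SciPy - entirely our illustration, nothing to do with any real codec's internals) of a sharp chessboard being put through a low-pass filter, the kind of softening stage a codec applies:

```python
# Illustrative only: a Gaussian blur standing in for a codec's softening
# stage. NumPy/SciPy are our choice of tools, not anything RED-specific.
import numpy as np
from scipy.ndimage import gaussian_filter

# An "ideal chessboard": sharp black/white transitions, which is to say
# lots of high spatial frequency at the square edges.
board = np.kron([[0, 1] * 4, [1, 0] * 4] * 4, np.ones((32, 32)))

# Low-pass filter the image. The pixels are all still there, but the
# high-frequency information that made the edges sharp is gone.
soft = gaussian_filter(board, sigma=4)

# The steepest colour change across an edge collapses after filtering.
print("sharpest transition, original:", np.abs(np.diff(board[16])).max())  # 1.0
print("sharpest transition, filtered:", round(np.abs(np.diff(soft[16])).max(), 3))
```

The filtered board still has exactly as many pixels as the original; it just no longer contains the detail that justified them.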

So what this means is that there is pretty much no way of telling whether the images we showed you from the Dragon sensor are high resolution or not.


Except that it is slightly more complicated than that: some compression techniques might extract some detail (to help with motion prediction, edge enhancement and so on) before the overall image is softened.

Extreme light conditions and compression

One of the other stand-out features of the Dragon sensor is its wide dynamic range. It's supposed to be able to deal with extreme lighting conditions better than practically any other sensor.

This, again, presents difficulties for compression, because there's no way you can accurately represent a 15-stop dynamic range with an 8-bit codec. Although, in a way, you can.

The point here is that you probably wouldn't want to represent all of those lighting conditions anyway, because your display device wouldn't be able to cope with them.

So what you have to do, either in-camera or via grading, is map the range of lights and darks that you want to appear in the final image into the range available for typical display devices, and into the number of levels per colour channel allowed by the codec. This is practically the same as you have to do with explicitly High Dynamic Range (HDR) photography - a species that is directly threatened by the new and extremely capable generation of sensors (and the processing that inevitably surrounds them).
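
To see roughly what that mapping involves, here's a toy sketch (pure NumPy, using a simple logarithmic curve - a stand-in we've chosen, not any camera maker's actual transfer function) of squeezing a 15-stop scene into the 256 levels an 8-bit codec allows:

```python
# Illustrative only: a log curve mapping a wide scene range into 8 bits.
# Real cameras and graders use far more sophisticated transfer functions.
import numpy as np

stops = 15                                   # scene dynamic range to preserve
scene = np.logspace(0, stops, 1000, base=2)  # linear light spanning 15 stops

# Map log2(luminance) linearly onto 0..255: each stop gets ~17 code values.
code = np.round(np.log2(scene) / stops * 255).astype(np.uint8)

print(code.min(), code.max())  # 0 255: the whole range survives, but only
                               # at 256 coarse levels per channel
```

The range fits, in other words, but only because each stop has been rationed to a handful of code values, which is exactly where the next problem comes in.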

Don't forget that you can represent any dynamic range with any number of bits greater than around two. It's just that the more bits you have, the fewer contours you will see between adjacent colours. This can be easily disguised if there's a lot of detail, or painfully obvious in the case of gentle gradients, like blue skies.
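
Here's an equally small sketch (again pure NumPy, with a made-up smooth ramp standing in for a blue sky) showing why the same gradient bands more at lower bit depths:

```python
# Illustrative only: quantising a smooth ramp at different bit depths.
import numpy as np

gradient = np.linspace(0.0, 1.0, 4096)  # a smooth "blue sky" ramp

for bits in (10, 8, 4):
    levels = 2 ** bits                  # distinct code values available
    quantised = np.round(gradient * (levels - 1)) / (levels - 1)
    print(f"{bits}-bit: {len(np.unique(quantised))} distinct steps")

# 10-bit: 1024 steps, 8-bit: 256, 4-bit: 16 - and the fewer the steps,
# the wider and more visible each contour band becomes.
```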


Can you make a reasonable judgement?

So, yes, it is reasonable to think that you can very approximately judge a camera's ability. You might even be able to see that very important aspect of how a camera handles light: the way it copes with the transition from detailed image to complete blow-out. It's this type of ability that might ultimately matter more than dynamic range itself, although if the effect is very subtle it might be completely masked by "quantisation" - the way that the "delivery" compression chops up the light levels into discrete steps.

But don't forget that an amazing camera that is only capable of 2K or HD resolution might give similar results if it is able to handle dynamic range as well as a Dragon (which it almost certainly won't). The message here is that internet video is absolutely the worst way to judge the resolution of the sensor that captured the images.

Finally, it's worth mentioning that even with the same codec, you might get different results. Although Vimeo and YouTube's encoding is undoubtedly very clever, it is by definition a "one size fits all" solution. At the other end of the scale, if you have an expert who can finely tune the parameters at the time of encoding, and possibly do multiple passes, you will end up with a highly optimised piece of compressed media that will look far better than the "average" clips that you normally see would lead you to expect.

So, just remember that, with all due respect to Vimeo, who provide an amazing service, evaluating the performance of a new sensor through the pond-water of compression is just about the last thing you really want to be doing - but for most of us, it's all we have, and it is definitely better than having nothing at all.

 
