17 Nov 2016

HDR fundamentals and techniques: what you need to know, part one

  • Written by Phil Rhodes
This image has been used a lot as a demonstration of what remastering might achieve. It isn't necessarily, or even often, important for material to have been shot with HDR in mind. Dolby / RedShark News


In the first of a new multi-part series on HDR theory and practice, Phil Rhodes explains dynamic range in brief and the challenges in displaying HDR for production and distribution.

When we were in the process of replacing 35mm film with digital capture, the loudest voices always shouted about resolution, even though that wasn't ever really the biggest problem. You could, after all, tell the difference between higher and lower production values on formats as lowly as VHS. Even back then, although the term hadn't really emerged, we wanted better pixels, at least as much as we wanted more pixels. Now, with 8K widely discussed, we have more pixels than any reasonable human being could possibly need, with the possible exception of VR. At least in terms of conventional, fixed-screen production, though, the demand for those better pixels has finally started to receive the prominence that it probably always deserved.

Faced with the issue of highlight handling, camera manufacturers have been striving to improve the performance of their products for decades. The need to handle highlights better was recognised fairly early, in the context of CCD video cameras which recorded reasonably faithfully up to a certain maximum brightness, then rendered either hard white or strangely-tinted areas of flat colour beyond that point, depending on how the electronics worked. This created a need for better dynamic range – the ability to see very dark and very bright subjects simultaneously, or at least to have enough information to produce a smooth, gradual, pleasant transition from the very bright to the completely overexposed, rather like film does more-or-less by default.
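As a quick aside on the numbers (an illustration of the definition, not figures from the article): dynamic range is usually quoted in stops, each stop being a doubling of light, so it's simply the base-2 logarithm of the ratio between the brightest and darkest usable levels. A minimal Python sketch:

```python
import math

def dynamic_range_stops(brightest, darkest):
    # One stop = one doubling of light, so dynamic range in stops is
    # the base-2 log of the contrast ratio the system can capture.
    return math.log2(brightest / darkest)

# Hypothetical sensor whose usable signal spans a 4000:1 ratio:
print(round(dynamic_range_stops(4000.0, 1.0), 1))  # ~12.0 stops
```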

Dynamic breakdown

Traditionally, we might expect the camera to record gradually increasing brightness, which suddenly reaches an overexposed level, and we might display that image like this:

When there is no gradual transition between well-exposed and overexposed areas, a harsh edge can be visible, as here.
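To make that hard edge concrete, here's a minimal sketch (Python and NumPy are my choice for illustration, not the article's): linear values rise smoothly until they simply clip at the maximum the signal chain can represent.

```python
import numpy as np

# Relative scene luminance rising smoothly; 1.0 is the clip point.
scene = np.linspace(0.0, 2.0, 11)

# Hard clip: everything above the maximum records as identical flat white,
# which is the harsh, visible edge in the image above.
recorded = np.clip(scene, 0.0, 1.0)
print(recorded)
```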

With better cameras, we can record a wider range of brightness levels (shown at the top of the image below). The display is still the same, though, because it's easy to change cameras, but very hard to change millions of TVs. So, we can't display the entire range. What we can do is apply a Photoshop-style curves filter to smooth out that highlight transition, as shown at the bottom of the image:

The wide dynamic range of the camera, top, is manipulated to produce a smooth, pleasant-looking transition to the highlights in a standard-dynamic-range image, bottom.
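A simple way to implement that kind of curve is a "knee", which video cameras have used for years: leave the response alone up to a threshold, then compress everything above it so the extra camera range eases into peak white. A hedged sketch, with the knee point and input range chosen purely for illustration:

```python
import numpy as np

def knee(x, knee_point=0.8, max_input=2.0):
    # Below knee_point the signal passes through untouched, so blacks and
    # mid-tones are unchanged. Above it, the remaining input range
    # (knee_point..max_input) is squeezed into the remaining output range
    # (knee_point..1.0), giving a gradual roll-off instead of a hard clip.
    x = np.asarray(x, dtype=float)
    slope = (1.0 - knee_point) / (max_input - knee_point)
    rolled = knee_point + (x - knee_point) * slope
    return np.where(x < knee_point, x, np.minimum(rolled, 1.0))

scene = np.linspace(0.0, 2.0, 11)
print(knee(scene))  # approaches 1.0 smoothly rather than slamming into it
```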

Now, the blacks are the same as before and the finished image, at the bottom, reaches peak brightness at the same point as it did before. It does, however, arguably look, well, nicer. Smoother. Less harsh. When applied to a live action image, more filmic, perhaps. These images are somewhat approximate, given the vagaries of how computer graphics work on the web, but they demonstrate the principle.

In theory, this sounds straightforward and cameras have been doing it for decades. It does, however, open something of a can of worms because it plays (for the sake of pretty pictures) with a fundamental thing: the amount of light that goes into the camera, versus the amount of light that comes out of the monitor. If we were to design a television system now, we'd probably define that relationship very carefully, with the idea that the monitor should look like the real scene. In reality, that was never really a goal of the early pioneers of television, who were more interested in making something that worked acceptably than they were in that sort of accuracy.

In fact, that relationship wasn't really very well-defined, at least by anything other than convention, until quite recently. As we've seen, manufacturers have long been playing around with the way cameras make things look nicer, given how monitors work. What hasn't happened is anyone playing around with monitors to make things look nicer, given how cameras work. OK, that's not quite true; there are various standards which have an influence on monitors, such as the well-known ITU-R Recommendation BT.709, which we've written about before, but the fundamental capability is still broadly similar to the cathode ray tubes of decades past.
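For reference, the camera-side transfer function Rec. 709 specifies (its OETF) is short enough to write down; the constants below are those of the standard:

```python
def bt709_oetf(L):
    # ITU-R BT.709 opto-electronic transfer function: linear scene light
    # L (0..1) in, non-linear video signal V (0..1) out. A short linear
    # segment near black avoids an infinite slope at zero.
    if L < 0.018:
        return 4.500 * L
    return 1.099 * (L ** 0.45) - 0.099

print(round(bt709_oetf(0.18), 3))  # mid grey encodes to roughly 0.41
```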

So, smoother, subjectively nicer transitions into highlights help. That's more-or-less what we've been doing, via grading of cinema material and various in-camera processing functions. The obvious question, and the one that HDR seems to answer, is why we can't have this:

High dynamic range imaging ideally involves little or no loss of highlight information, and maintenance of highlight brightness, from camera to display.

Well, we'd have to make better monitors, which can achieve brighter peak whites.
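Some hedged numbers to put that in perspective (typical figures, not from the article): an SDR reference monitor is graded at around 100 nits of peak white, while HDR displays commonly target 1,000 nits or more. In stops, that works out to:

```python
import math

sdr_peak = 100.0    # nits: typical SDR reference peak white
hdr_peak = 1000.0   # nits: a common HDR display target

# Each stop is a doubling of light, so the extra highlight headroom is:
extra_stops = math.log2(hdr_peak / sdr_peak)
print(f"about {extra_stops:.1f} extra stops of headroom")  # ~3.3
```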





Phil Rhodes

Phil Rhodes is a Cinematographer, Technologist, Writer and above all Communicator. Never afraid to speak his mind, and always worth listening to, he's a frequent contributor to RedShark.
