What's the difference between log modes and HDR?

Written by RedShark News Staff

Shutterstock - Nejdet Duzen

It's easy to get confused between HDR and log. Here's a guide to how they differ and what it means for your production.

Modern cameras often have at least one recording mode called “log” or “cine.” There might be several different ones, and it's widely understood that these modes produce an image that will look flat and grey on conventional monitors, leaving lots of room for adjustment later. Log is, in effect, a high dynamic range recording mode, and that's precisely why it lacks contrast on a standard dynamic range display: the highest values in the picture are intended to represent very bright light, but the monitor can't emit that much light, so the picture looks flat and dull.

HDR displays are a separate concept from HDR recordings. Conventional displays are, according to the standards, capable of emitting about 100 candela per square metre (nits). Many displays are actually brighter than that, since it makes them look subjectively better. Computer displays may reach 600 nits, which overlaps with the least bright HDR displays. “True” HDR displays begin at 1,000 to 1,500 nits, and the very best, rarest reference monitors may achieve 4,000. Feed those with an appropriate signal and suddenly the flat dullness goes away; the picture looks bright and punchy, with a far more realistic treatment of very bright areas.

Related, but different

So, HDR and log are related concepts, though they work differently. Usually, a signal that's been processed for log recording isn't suitable for direct display on an HDR monitor (Sony's SLog3 encoding is sometimes used this way, but that's the exception rather than the rule). Log recordings are usually configured so that roughly the same number of digital counts is used for every f-stop's worth of brightness information, so that shadows, highlights and everything in between get the same number of brightness levels.

Understanding this means we have to know how f-stops work.

Open up the aperture on a camera, and each f-stop looks, to us, like the same step up in brightness. Open up from f/8 to f/5.6, and the picture looks brighter. Open up again from f/5.6 to f/4, and the picture looks brighter again, by the same apparent amount. What's actually happening is that the light level doubles every time we open up a stop: if there are 100 photons per frame hitting the sensor at f/8, there'll be 200 at f/5.6, 400 at f/4, and 800 at f/2.8. It works this way because we have approximately logarithmic eyes – we see successive doublings of light intensity as a steady increase in perceived brightness.
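The doubling-per-stop relationship can be sketched in a few lines of Python (the photon counts are the article's illustrative figures, not real sensor measurements):

```python
import math

# One stop of aperture = a doubling of light. Perceptually, each doubling
# looks like the same step up in brightness, i.e. perception is roughly log2.
photons_at_f8 = 100
for stops_open, f_number in enumerate(["f/8", "f/5.6", "f/4", "f/2.8"]):
    photons = photons_at_f8 * 2 ** stops_open
    perceived = math.log2(photons / photons_at_f8)  # stops above f/8
    print(f"{f_number}: {photons} photons, +{perceived:.0f} stops perceived")
```

Linear light grows exponentially while perceived brightness climbs in equal steps – which is exactly the mismatch log encoding exploits.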

It’s about photons

Still, if we encode the number of photons hitting the sensor literally – a true linear encoding – the brighter parts of our picture start to demand enormous numbers very quickly. Modern cameras may shoot a fifteen-stop dynamic range. The darkest stop of picture information might be encoded using 100 brightness values. The next stop up needs 200, the next 400, and by the fifteenth, brightest stop, a single stop spans more than 1.6 million values, for over three million across the whole range. These aren't realistic numbers from a real camera, but they demonstrate the problem.
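The arithmetic behind that runaway growth is easy to check (again using the article's illustrative 100-value darkest stop, not real sensor counts):

```python
# Values needed per stop if linear light is stored directly: each stop
# doubles, so the brightest stop dwarfs the darkest.
values_per_stop = [100 * 2 ** stop for stop in range(15)]
print(values_per_stop[0])    # darkest stop: 100
print(values_per_stop[-1])   # brightest (15th) stop: 1638400
print(sum(values_per_stop))  # whole 15-stop range: 3276700
```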

Encoding that as a conventional ten-bit signal, with 1024 values in total, is clearly impossible. We could use bigger bit depths, perhaps, but the darkest stop looks, to us, like the same amount of brightness range as the brightest stop, so it should probably be encoded using the same sort of numeric range. Log recording takes advantage of the fact that successive doublings produce an exponential curve, and a logarithm inverts it: we end up with a series of numbers that roughly represent how bright something looks, as opposed to how many photons hit the sensor. The shadows might get, say, 100 values per stop, and so do the highlights. Various manufacturers make tweaks to that mathematically predictable approach, which is why there is incompatibility, but that's the general idea.
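A minimal sketch of that idea, assuming a fifteen-stop range squeezed into a ten-bit signal (about 68 code values per stop; real curves such as SLog3 add offsets and a linear section near black, so this is the principle, not any manufacturer's actual formula):

```python
import math

def encode_log(linear, codes_per_stop=68):
    # Invert the exponential growth of linear light with log2, so every
    # doubling (one stop) gets the same number of code values.
    # 68 per stop is roughly 1023 codes / 15 stops in a ten-bit signal.
    return round(math.log2(linear) * codes_per_stop)

for stop in (0, 7, 14):
    linear = 2 ** stop  # relative linear light, darkest stop = 1
    print(f"stop {stop:2d}: linear {linear:6d} -> code {encode_log(linear)}")
```

The darkest and brightest stops each occupy the same 68-code slice, which is the whole point: equal precision for shadows and highlights.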

More efficient way to store brightness information

The result is a much more efficient way to store brightness information, and it's this which makes modern cameras – really everything since Sony's hypergamma settings of the early 2000s – capable of capturing images suitable for HDR finishing.

Before the advent of HDR displays, log recording was commonly used to provide more grading options for standard dynamic range finishing. When shooting for HDR, it's even more essential, as more of that highlight information will make it through to the final image.

That final image, though, is unlikely to be in the same brightness encoding as the camera original file. There are exceptions to this; a Sony camera in conjunction with a Sony display can sometimes use the SLog3 encoding for the recorded image and feed it unaltered to the monitor. It would be theoretically possible to do this with any log or cine encoding, by adding an appropriate lookup table to the monitor. For delivery, it's more usual to deliver one or more of the different encodings which might be requested by a distributor – PQ, HLG, Dolby Vision, and so on.
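The lookup-table step mentioned above amounts to interpolating into a table of output levels. A toy 1D version (the curve values here are invented for illustration; real monitor LUTs are vendor-supplied, far larger, and often 3D to handle colour as well as brightness):

```python
def apply_lut(code, lut):
    # code is a normalised input in [0, 1]; lut holds output light levels
    # at evenly spaced input positions.
    pos = code * (len(lut) - 1)
    i = int(pos)
    if i >= len(lut) - 1:  # clamp at the top entry
        return lut[-1]
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac  # linear interpolation

toy_lut = [0.0, 0.05, 0.2, 0.6, 1.0]  # invented curve, not a real standard
print(apply_lut(0.5, toy_lut))  # lands exactly on the middle entry: 0.2
```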

Creative grading

Getting from the camera original format to the final delivery format is, as ever, a matter of creative grading, regardless of whether that final output is HDR or SDR. It is possible to put material through a static lookup table (essentially a fixed grade) and end up with HDR results that are watchable, much as happens in a camera set to produce Rec.709 pictures for SDR. A common example is that some cameras can both record and output the hybrid log-gamma (HLG) format, which is probably how HDR news will be shot. HLG may be slightly less capable than some of the other options, but it's simpler, easier to implement, and backward compatible with SDR displays. Sometimes ease of use trumps ultimate obtainable quality.

There will usually be a conversion process between camera material and the final deliverable, although given the right display technology, that's a job that can be done by most grading facilities. As with any new technology, standardisation of both camera and delivery formats will come in time, but meanwhile we can rely on the fact that well-shot material, created with an eye on proper highlight exposure, is likely to be suitable for finishing to either SDR or any of the current HDR standards.

