
This is a groundbreaking video!


City in the World by Colby Moore

We're very excited about this video. It was filmed using HDR (High Dynamic Range), which gives it a very distinctive look, one that is entirely appropriate for its subject matter: New York. And it may just mark a very important moment in the history of film-making.

The thing about HDR is that it's quite simple to do with still photography, but not so much with video. When you understand how it's done, it's easy to see why that is.

Our eyes are so good

The reason that we don't intuitively understand Dynamic Range is that our eyes are so good at coping with it. They've evolved into the most amazing interface between the world and our perceptual system. They work in bright sunlight, and in candlelight or even starlight. It's very rare for it to be so dark that we can't see anything.

But as we know, film and image sensors have a limited dynamic range. If you expose correctly for shadows, you will probably lose the detail in the highlights. And if you expose for a bright sky, you'll lose the shadows.

So in ordinary video, we lose an enormous amount of detail, unless we limit the detail in the first place to exist within a relatively narrow dynamic range.

Multiple exposures

In still photography HDR, a wide dynamic range is recorded by taking multiple exposures and then combining them using a process called "tone mapping", where the extremely wide range between light and dark is reduced proportionately to match the dynamic range of the display device (or print, if we're talking about "traditional" photography).

So, ironically, HDR is primarily a method to reduce dynamic range, but within that reduction, to retain meaningful steps between each original level of light.
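To make that concrete, here is a minimal numpy sketch of a global tone-mapping operator. The classic Reinhard operator is used purely for illustration (Photomatix uses its own, more sophisticated algorithms): luminances spanning many stops are compressed into a display's [0, 1] range while their order and relative steps are preserved.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18):
    """Compress scene luminances into [0, 1) while preserving their order."""
    eps = 1e-6  # avoid log(0) for pure-black pixels
    log_avg = np.exp(np.mean(np.log(hdr + eps)))  # log-average scene luminance
    scaled = key * hdr / log_avg                  # anchor the scene to mid-grey
    return scaled / (1.0 + scaled)                # Reinhard: [0, inf) -> [0, 1)

# A synthetic scene spanning roughly 17 stops, from deep shadow to bright sky
scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])
mapped = reinhard_tonemap(scene)
print(mapped)  # every value now fits a display's range; ordering is intact
```

No real display could carry the original 100,000:1 ratio between the brightest and darkest values directly; after mapping, the brightest value is below 1.0 while the darkest is still distinguishable from black.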

There are only two ways to take multiple exposures. You can take them one after another (which you could potentially do with a high-framerate video camera), but any movement in the scene will cause mis-registration when the frames are combined, and movement is the reason for shooting video rather than stills in the first place! Or you can place three or five cameras side by side, as close to the same viewpoint as possible. Of course, they will never be at exactly the same viewpoint, and this method obviously requires multiple, expensive, genlocked cameras. The situation would get ridiculous if you needed to film in 3D HDR.

Good news for video makers

But there is very good news on the way for video HDR enthusiasts. Camera sensors are becoming so good that they can almost capture HDR in a single exposure. And that's what Colby Moore has done in this video.

He's taken the RAW footage from a RED Epic and exported it three times: two stops underexposed, normally exposed, and two stops overexposed. Using Photomatix, an HDR processing tool, he's then tone-mapped the video to give an HDR effect.
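Colby's actual pipeline runs through Photomatix, but the push/pull-and-fuse idea can be sketched with plain numpy. Everything below is illustrative only: the random frame stands in for decoded RED RAW, and the Gaussian "well-exposedness" weighting is a naive stand-in for Photomatix's tone mapping, not his workflow.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for one linear RAW-like frame with values in [0, 1];
# real footage would come from decoding the RED Epic RAW file.
linear = rng.random((120, 160, 3)).astype(np.float32)

def expose(frame, stops):
    """Simulate pushing/pulling the RAW by `stops` stops, clipping
    highlights the way an exported TIFF would."""
    return np.clip(frame * 2.0 ** stops, 0.0, 1.0)

# The -2 / 0 / +2 bracket exported for each frame
bracket = [expose(linear, s) for s in (-2, 0, 2)]

# Naive exposure fusion: weight each exposure by how close its pixels
# sit to mid-grey (i.e. how well exposed they are), then average.
weights = [np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)) for img in bracket]
fused = sum(w * img for w, img in zip(weights, bracket)) / (sum(weights) + 1e-6)
```

Running the fusion independently on every frame is also where flicker can creep in: the weights shift from frame to frame, so neighbouring frames can be mapped slightly differently.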

As you can see from his notes below, he's not claiming perfection with this technique. The darker parts of the footage are a bit noisy, and the noise exacerbates codec issues, leading to more artefacts than you would normally want; but the overall effect is absolutely stunning.

Unnatural?

HDR has its critics. They say that it looks unnatural, or that it makes everything look like a computer game.

This is fair enough, but it's the wrong way to look at it in my view.

A better way to approach HDR video is to see it as a completely new medium, somewhere between video and painting. When you look at paintings, you're seeing a view of the world that has been interpreted by the artist. No-one complains that Van Gogh or Picasso paintings are "unrealistic", because they're obviously not and equally obviously were never intended to be.

As a way of "interpreting" the world, and as an important - and now quite technically feasible - new "look", HDR has a big role to play in future film-making.

Here's the video, and below, you can see Colby Moore's own notes on his methodology.

A short and creepy montage of scenes shot around the ever-photogenic island of Manhattan -- filmed entirely in high dynamic range and comprised of some HDR timelapse footage I shot, along with a collection of slow-motion and normal 24fps footage processed from Red Epic-X RAW video that I recently captured and then exported as -2, 0, +2 TIFF stacks to be tone mapped in Photomatix using a batch processing workflow.

Please note that none of this was shot using HDRx -- only normal exposures from the camera post-converted into HDR using the traditional faux-HDR method of pushing and pulling the RAW file to create bracketed images. While HDRx is a powerful tool with a lot of benefits for shooting realistic looking extended dynamic range, I chose to steer clear of it this time in an effort to avoid the motion artifacts that come with it. Especially in light of the fact that I imagine those slight artifacts would have been particularly problematic when working with a more "surreal" method of HDR tone-mapping, as opposed to the more subdued and natural proprietary algorithm Red uses. Also, in this case, the goal was to show the added "pop" you get with HDR video when tone-mapped using a Photomatix detail compressing workflow, while trying to avoid going too far over the top and completely "cracking out" the image.

Please note that my method admittedly has several drawbacks -- namely, the grain from the pushed footage is a little excessive at times (a lot at others), and additionally, the push/pull limitations of the RAW file still won't allow me to capture the full dynamic range of an extreme lighting location like Times Square the way I can with DSLR bracketing of many more stops. Thus, billboards are still blown out in some of the shots -- just not as blown out as they would have been in traditional video footage.

Additionally, unfortunately, in an attempt to mask some of the excessive noise, I took some artistic liberties with noise reduction, and the overall sharpness suffers a bit in several shots. There are also some flickering issues, some related to the high frame rates I shot at for certain scenes, and others related more to the processing of the HDR itself, since preventing the ugly halos associated with bad HDR is even more tricky with moving footage. I think I did my best under the circumstances, but there are a few shots where halos rear their ugly heads.

To top it off, some of the high-frame-rate footage was shot at a higher compression rate (and a few normal 24fps shots where I goofed), and thus the tone-mapped image really brings out some of the artifacts there too. I tried to keep that footage to a minimum, but there were certain shots that I liked compositionally that I chose to include anyway.

Nevertheless, the idea here is to give you an idea of what can be captured with a workflow similar to this, as well as to hint at what might be possible once in-camera HDR technology improves to the point of capturing at least three exposures simultaneously without the added detriment of having to push and pull in post, which, as stated before, adds quite a bit of grain.

Thanks, and I hope you enjoy...

We've covered the topic of HDR video before in RedShark, here.

Tags: Production
