
How RED's Global Vision & Extended Highlights take computational photography and run with it

Extended Highlights is a beta feature of RED Global Vision in the new V-RAPTOR [X] and V-RAPTOR XL [X]

Remember when everyone was excited about Thomson’s Viper and its exalted eleven-stop dynamic range? RED now claims twenty stops with new firmware for its newest V-Raptors, albeit with some slightly unusual techniques, which the company points out are still in beta. Phil Rhodes explains.

It’s not the first time that RED has offered a feature intended to increase dynamic range. Some time ago, it experimented with HDRx, a technique that stacked exposures in much the same way as a high-dynamic-range stills shot is built from two or three photos at different exposures, improving both highlight and shadow detail.
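
For readers who like to see the principle rather than just read about it, the basic idea of blending a long and a short exposure can be sketched in a few lines of Python. This is only a generic illustration of exposure stacking, not RED's HDRx algorithm: the function, the clipping threshold, and the assumption of linearized, pre-aligned frames are all placeholders.

```python
import numpy as np

def merge_two_exposures(long_exp, short_exp, ratio, clip=0.95):
    """Blend a long and a short exposure of the same (static) frame.

    long_exp, short_exp: linear-light images in [0, 1], same shape.
    ratio: how many times more light the long exposure received
           (e.g. 8.0 for a three-stop bracket).
    clip:  level above which the long exposure is treated as blown out.
    """
    # Bring the short exposure onto the same radiometric scale.
    short_scaled = short_exp * ratio

    # Favour the long exposure (cleaner shadows) until it approaches
    # clipping, then hand over to the scaled short exposure (intact highlights).
    weight = np.clip((clip - long_exp) / clip, 0.0, 1.0)
    return weight * long_exp + (1.0 - weight) * short_scaled
```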

However, this time round, the new feature is part of a collection of things that RED calls Global Vision, in reference to the global shutter on some of the company’s cameras (although exposure stacking does not necessarily require a global-shutter camera).

Computational photography

Referring to this as “exposure stacking” might not be the whole story. There are already lots of cameras that do things like that to improve some combination of noise, sensitivity, and dynamic range. Most of those cameras are in cellphones and action cameras where there’s already a significant amount of digital signal processing available, correcting lens aberrations and providing stabilization and noise reduction. This sort of thing has been called computational photography, where pictures are not a direct result of reading brightness values from a sensor but instead a computer’s reconstruction of a likely scene based on those values.

Combining multiple exposures is an example. The computation might compensate for the fact that the multiple exposures won’t necessarily line up, because objects in the scene may have moved between shots. In that case, the computation involved might seek to shift things around in each image to line them up. RED’s HDRx didn’t do that, and while the details of this kind of thing tend to be proprietary, we might speculate that the new features try to solve that problem in some way. The company points out that the new process takes more processing time on the post-production workstation.
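
To make that alignment step concrete, here’s what the generic version looks like with off-the-shelf tools. This uses OpenCV’s median-threshold-bitmap alignment and Mertens exposure fusion, as found in any stills HDR tutorial; it is emphatically not a description of what RED’s firmware or R3D processing does.

```python
import cv2

def align_and_fuse(paths):
    """Bracketed-exposure fusion with crude global alignment.

    paths: filenames of the bracketed frames (8-bit images for simplicity;
    a real motion-picture pipeline would work on linear raw data).
    """
    frames = [cv2.imread(p) for p in paths]

    # Median-threshold-bitmap alignment: shifts each frame so static content
    # lines up despite small camera movement between exposures.
    cv2.createAlignMTB().process(frames, frames)

    # Mertens fusion: per-pixel weights from contrast, saturation and
    # well-exposedness, with no need to recover the camera response curve.
    fused = cv2.createMergeMertens().process(frames)
    return (fused * 255).clip(0, 255).astype("uint8")

# Hypothetical usage:
# cv2.imwrite("fused.jpg", align_and_fuse(["under.jpg", "mid.jpg", "over.jpg"]))
```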

This is also how things like Google’s face-unblur feature work on its Pixel phones, which gives us some insight into the sort of thing that might be possible. Stack enough exposures, and the result might include something that looks like motion blur, based on how the camera moved during each frame or during the sequence of frames, depending on how it was computed. If one of those exposures is sufficiently short, though, a region of the image could use just that one exposure and stay sharp; of course, that part of the shot won’t enjoy the noise, dynamic range, and sensitivity benefits, but it’ll be less motion blurred. RED accepts that there may be some related artifacts associated with its new process.
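
If we wanted to speculate in code about that trade-off, a crude version might pick the short exposure only where the frames disagree, on the assumption that disagreement means motion. Everything here, from the luminance-difference test to the threshold, is guesswork for illustration and not anything RED has described.

```python
import numpy as np

def motion_aware_blend(stacked, short_scaled, threshold=0.05):
    """Per-pixel fallback from a stacked image to a single short exposure.

    stacked:      the merged multi-exposure result (better noise and DR).
    short_scaled: the short exposure scaled to matching brightness.
    threshold:    luminance difference treated as evidence of motion.
    """
    # Where the two versions differ strongly, assume something moved and
    # prefer the sharp short exposure; elsewhere keep the cleaner stack.
    diff = np.abs(stacked - short_scaled).mean(axis=-1, keepdims=True)
    use_short = (diff > threshold).astype(stacked.dtype)
    return use_short * short_scaled + (1.0 - use_short) * stacked
```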

So, we might soon be in a world in which any one camera might have several different levels of performance in different areas of a single frame, depending on what the person using it (or someone in post-production, or even, increasingly, an AI) thought was important. If there’s been any objection to this sort of thing, it’s that learning how to get the best out of a camera might be difficult because the performance of that camera might become a moving target. There’s a degree of purism (if not puritanism) involved in the idea that an artist’s tools should be somehow knowable in a way that computational photography might disturb.

This sort of dynamic range does create some interesting new issues, however. Look at the step charts published by RED and notice just how much glow there is around the brighter chips, even on what looks to be something like a DSC Labs Xyla chart. That chart makes the brightest test patches smaller in order to minimize the amount of light spilled from bright chips onto darker ones by lens flare, which may make the dark patches more distinguishable than they would otherwise be, and the test correspondingly optimistic. Even so, all lenses inevitably produce some halation, and that becomes a bigger factor the more dynamic range is involved. Lenses are not things that we’re used to thinking of as actually having a dynamic range limit – but they do, at least in local regions of the image.
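
A back-of-the-envelope calculation shows why that matters at these numbers. The 0.1% veiling-glare figure below is an assumed, illustrative value, not a measurement of any particular lens or of RED’s chart.

```python
import math

sensor_stops = 20
sensor_contrast = 2 ** sensor_stops      # about 1,048,576:1

# Assume stray light in the lens lifts the black floor near a bright
# source to 0.1% of that source's level (an illustrative figure).
veiling_glare = 0.001
local_contrast = 1 / veiling_glare       # 1000:1
local_stops = math.log2(local_contrast)  # roughly 10 stops

print(f"A {sensor_stops}-stop sensor behind a lens like this can only "
      f"resolve about {local_stops:.1f} stops next to a bright source.")
```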

From cell phones to the high end

It’s been hard to ignore the fact that cell phones have been enjoying a lot of the fruits of computational photography in ways that cinema cameras haven’t, and this is probably the first of many such features to make the jump. Sensor designs of the 2020s do pretty well already, of course. Still, there’s always room for more, particularly more sensitivity in circumstances where power consumption is a factor, which is the case both at the battery-powered low end and the generator-powered high end.

Perhaps most importantly, there’s the issue of what we actually do with all this dynamic range. Recording more information than we need has always been a good idea. Shooting for demanding distribution formats such as PQ-based HDR, including HDR10 and its derivatives, as well as Dolby Vision, costs us some of the flexibility we enjoy when shooting for conventional distribution, because the camera is no longer so much better than the distribution format. Gaining some of that flexibility back would be valuable, though grading 20-stop footage into a 7-stop deliverable, which will be required for standard-dynamic-range distribution for a while yet, will not be easy. It’s already tricky with 14- or 15-stop cameras, depending on how the material is shot.
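
To get a feel for how drastic that squeeze is, consider a toy tone curve that maps 20 stops of scene into 7 stops of display range. It compresses everything equally in log space, which no colorist would actually do; a real grade would protect mid-tones and roll off highlights, but the ratio involved is the point.

```python
import numpy as np

def compress_to_sdr(scene_linear, scene_stops=20.0, display_stops=7.0):
    """Toy curve squeezing a wide scene range into an SDR-ish display range.

    scene_linear: linear scene values, normalized so 1.0 is the brightest
                  highlight the camera captured.
    """
    floor = 2.0 ** -scene_stops
    # Work in stops (log2) so the compression ratio is explicit: here,
    # every ~2.9 stops of scene range becomes one stop of display range.
    stops_below_peak = np.log2(np.clip(scene_linear, floor, 1.0))   # -20..0
    display_below_peak = stops_below_peak * (display_stops / scene_stops)
    return 2.0 ** display_below_peak    # display-linear values in [2**-7, 1]
```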

So, let’s not get too used to shooting scenes with massively high dynamic range, flicking the monitor into direct mode, and saying, “Well, it’s held in the log.” At some point, someone has to rationalize the sheer range of brightness in the scene into a viewable and hopefully attractive image. Still, it’s hard to object to computational photography in general. Some people will interpret it as a sacrifice of sheer artistic purity, but, in the end, it’s allowed cellphone cameras to enjoy shockingly good performance for such tiny sensors, and that’s performance that film and TV production could certainly use. This won’t be the last example.
