
Here's a better way to get high-quality black and white from raw images


RedShark Technical Editor Phil Rhodes profiles Monocle, a new utility from ClipToolz that converts raw colour into a black-and-white result that approaches a 'true' monochromatic image.

It's been a while since we looked at the monochrome-specific version of RED's Epic camera, but the intervening emergence of things like Leica's Monochrom black-and-white camera, as well as Arri's introduction of a very small number of monochrome Alexas, suggests that there's a continuing, if small, interest in greyscale images.

Previously, once a manufacturer had printed a colour filter mask onto the sensor in your camera, there was no way back to the cleanliness of unadulterated black-and-white imaging. Now, however, Australian company ClipToolz has come up with a solution which gets us at least part of the way back to a true monochrome image. The approach is straightforward: select a single colour channel from a raw Bayer image and discard the rest of the data. The process trades overall resolution for the precision of a picture which doesn't rely on complex interpolation of the Bayer mosaic to recover much of its data.
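To make the channel-picking idea concrete, here's a minimal sketch in Python with NumPy, assuming an RGGB mosaic layout; the function and its hard-coded offsets are illustrative only, not a description of Monocle's actual internals.

```python
import numpy as np

def extract_channel(bayer, channel="G", pattern="RGGB"):
    """Pull a single colour channel out of a Bayer mosaic, with no interpolation."""
    # Row/column offset of each channel within a 2x2 RGGB block.
    # Only one of the two greens is taken, to keep a rectangular grid.
    offsets = {"RGGB": {"R": (0, 0), "G": (0, 1), "B": (1, 1)}}
    r, c = offsets[pattern][channel]
    # Stride-2 slicing keeps one photosite per 2x2 block, so the result
    # is half the width and half the height of the original mosaic.
    return bayer[r::2, c::2]

# A hypothetical 4096x2160 raw frame yields a 2048x1080 green-only image.
mosaic = np.random.randint(0, 4096, size=(2160, 4096), dtype=np.uint16)
print(extract_channel(mosaic, "G").shape)  # (1080, 2048)
```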

A closer look at Monocle

This technique has a few interesting implications. First, the resulting image isn't entirely panchromatic, in the sense that, in a panchromatic image, a given amount of light should have the same effect on the recorded luminance regardless of the colour of that light. Early photographic emulsions behaved similarly, inasmuch as they were sensitive only to blue light, which is why darkroom safelights are traditionally red. Images processed with Monocle will render green objects most brightly, although the effect won't be as extreme as the complete red-blindness of orthochromatic photochemical emulsion, because the green of a Bayer sensor's colour filter is often reasonably low in saturation.

Bayer's intention, after all, was for the green-filtered pixels to capture luminance information, in much the same way, and for much the same reason, that the Y channel of a YCbCr image is calculated from an RGB original with the green information weighted most heavily. The expectation of high green sensitivity is borne out experimentally: if we shoot a raw image of a Macbeth chart and process the result with Monocle, the green patch is, as predicted, the brightest of the three saturated RGB references at the left-hand side of the second row from the bottom.
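The standard luma weightings make the point numerically. A one-line sketch using the Rec. 601 coefficients (Rec. 709 weights green even more heavily, at 0.7152):

```python
import numpy as np

def luma_601(rgb):
    """Rec. 601 luma from an (..., 3) float RGB array scaled to [0, 1]."""
    # Green contributes nearly 60%; red and blue share the remainder.
    return rgb @ np.array([0.299, 0.587, 0.114])
```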

[Image: Macbeth chart via the camera's own debayer]

[Image: Macbeth chart via Monocle]

This differs from the result of a simple “desaturate” command in Photoshop applied to the camera's own JPEG interpretation of the raw data, which renders the RGB patches much more similarly in luminance.

[Image: comparison of the two conversions]
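Photoshop's Desaturate command is generally understood to compute HSL lightness, the mean of the strongest and weakest channels, which weighs the three primaries equally; assuming that's the operation in play, the convergence of the patches is easy to reproduce:

```python
import numpy as np

def desaturate(rgb):
    """HSL lightness, (max + min) / 2 -- all channels weighted equally."""
    return (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2

# Fully saturated red, green and blue all land on the same grey (0.5),
# where a green-channel pick would separate them sharply.
patches = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(desaturate(patches))  # [0.5 0.5 0.5]
```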

One consequence is that it might be difficult to use coloured filters to control the relative exposure of things like skin, foliage and sky, as one might with an unfiltered sensor or black-and-white film. It might still be possible to some extent, at the cost of much reduced exposure of even unsaturated objects. And just as picking only the green pixels yields a half-resolution image, we might pick only the red or blue pixels for a quarter-resolution image. The practicality of this naturally depends on the resolution of the original frame.
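The half and quarter figures fall straight out of the Bayer layout, since each 2x2 block holds two green photosites, one red and one blue; for an illustrative 4096x2160 sensor:

```python
total = 4096 * 2160        # 8,847,360 photosites
green = total // 2         # 4,423,680 -- the "half resolution" green pick
red_or_blue = total // 4   # 2,211,840 -- a "quarter resolution" red/blue pick
```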

Resolution constraints also make Monocle's approach more or less impractical for HD motion picture cameras, which simply don't record enough resolution to make it sensible to throw half of it away. Perhaps the only system that currently makes this a reasonable approach is something like Blackmagic's Production Camera 4K, which produces raw files of sufficient resolution to yield HD output via Monocle. You could conceivably do it with Sony's F65, which records enough excess information, but at that level most productions would have access to a monochrome Alexa or Epic anyway. At the lower end, the raw output of something like a 5D Mk. III running the Magic Lantern software lacks the resolution to be treated this way.
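As a quick sanity check on the Blackmagic example, assuming its raw frames are 3840x2160 and that one sample is taken per 2x2 Bayer block:

```python
sensor_w, sensor_h = 3840, 2160      # assumed UHD raw frame dimensions
print(sensor_w // 2, sensor_h // 2)  # 1920 1080 -- exactly HD
```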


Something to consider

The only potential downside, which depends very much on factors such as the raw-to-RGB processing in use, is that throwing away pixel data might increase apparent noise, at least at the same output resolution. Taking a (say) 4000-pixel-wide image and making it 2000 pixels wide by discarding the red and blue Bayer pixels involves no averaging. Taking the full 4K colour image and downscaling it does, and could reduce noise by a full stop in certain rather specific circumstances, as the sketch below illustrates. That assumes, however, that all of the 4K-wide colour image is made of real information; with a Bayer sensor this is not quite the case, and the outcome isn't easy to predict. In practice, the cleanliness and artifact-free nature of something close to a true, unadulterated monochrome-pixel image is attractive, without the slightly smeary appearance of noise that's been through a demosaic algorithm. Ultimately, this is a simple little trick that works quite nicely and takes advantage of modern, high-resolution sensors.
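For what it's worth, the idealised one-stop figure is easy to demonstrate with synthetic, fully independent noise, which is precisely the assumption the paragraph above cautions against:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(2000, 2000))  # stand-in for sensor noise

# Channel pick: discard three of four photosites; noise level is unchanged.
picked = noise[::2, ::2]

# 2x2 downscale: average each block; the standard deviation falls by
# sqrt(4) = 2, i.e. one stop.
binned = noise.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

print(round(picked.std(), 2), round(binned.std(), 2))  # ~1.0 vs ~0.5
```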

It'd be nice to think that we could opt for the red or blue channels too, for more pronounced orthochromatic effects, albeit at the cost of a lot of resolution. And naturally, we can only hope that motion picture camera manufacturers will give us enough resolution to start deploying these techniques on moving images, too.

Tags: Post & VFX
