
How has RED imaging changed since the beginning? 

It's hard to believe that RED began over a decade ago. In the concluding part of his interview with Graeme Nattress, David Shapton discusses how things have changed over the years.

RED started planning to make digital cameras over a decade ago. I asked Graeme Nattress, RED's "Problem Solver" (which actually means Chief Colour Scientist, among other things) what has changed since then.

DS: When RED started planning its first camera, what was the minimum specification you were aiming for? What was your bottom line in terms of quality?

GN: At a minimum we had to be able to put an image up on a decent-sized cinema screen and not be embarrassed. Even the earliest footage we shot on the “Frankie” prototype cameras still stands up well today. With REDCODE the aim was that in a similarly critical viewing situation we wouldn’t see obvious compression artefacts on complex imagery. Right from the start the resolution was to be 4K, and the visual benefits of that decision were immediately apparent. I was personally very skeptical of a single-chip camera for professional use, but once I started coding up the algorithms and getting a real hands-on understanding of Bayer patterns, those doubts went away. Once I saw the first imagery from the original Mysterium sensor, I knew the image would work and would look great.
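Those Bayer-pattern fundamentals can be illustrated with a toy sketch. This is not RED's demosaic algorithm (which is proprietary), just the naive idea underneath any demosaic: each photosite in an RGGB mosaic records only one colour, and the missing channels are estimated from same-colour neighbours. All values here are illustrative.

```python
# Build an RGGB Bayer mosaic of a flat scene, then recover full RGB at a
# pixel by averaging the same-colour samples in its 3x3 neighbourhood
# (a naive bilinear-style demosaic, for illustration only).
R, G, B = 0.8, 0.5, 0.3
h, w = 6, 6

def bayer_colour(y, x):
    # RGGB tiling: R at (even, even), B at (odd, odd), G elsewhere.
    if y % 2 == 0 and x % 2 == 0:
        return "R"
    if y % 2 == 1 and x % 2 == 1:
        return "B"
    return "G"

scene = {"R": R, "G": G, "B": B}
mosaic = [[scene[bayer_colour(y, x)] for x in range(w)] for y in range(h)]

def demosaic_pixel(y, x):
    # Average every sample of each colour found in the 3x3 neighbourhood.
    sums = {"R": 0.0, "G": 0.0, "B": 0.0}
    counts = {"R": 0, "G": 0, "B": 0}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                c = bayer_colour(ny, nx)
                sums[c] += mosaic[ny][nx]
                counts[c] += 1
    return tuple(sums[c] / counts[c] for c in "RGB")

print(demosaic_pixel(2, 3))  # → (0.8, 0.5, 0.3) for a flat scene
```

On a flat scene the averages are exact; on real imagery, naive averaging like this blurs edges and produces colour fringing, which is why production demosaic algorithms are far more sophisticated.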

DS: Just for comparison, how does that stack up against RED’s state-of-the-art cameras now?

GN: Almost every area of camera performance has improved. With the original RED One, 4K was the limit for resolution, but now we have 8K and improved demosaic algorithms (as part of IPP2) that bring out more real detail (not fake sharpening) from the Bayer pattern image data. With the RED One you had to expose “correctly”, but with each new sensor dynamic range has improved, meaning that you don’t have to be quite as careful, or can use the cameras in much lower light without issues. Of course, all cameras benefit from the best lighting conditions and exposure.

REDCODE has been improved along the way, and being wavelet based it also scales efficiently with the higher resolutions we now offer with the MONSTRO and HELIUM sensors.

DS: How big a part did freedom from broadcast standards play in the genesis of RED cameras?

GN: With hard-wired electronics, broadcast standards were absolutely necessary, but as TV moved from analogue images to computer data, such standards would, if followed, be a hindrance. Traditionally, of course, the highest-quality acquisition format was film, which by its very nature was independent of broadcast standards. That’s why it’s such a great time to be a fan of classic TV: old shows shot on film can be telecined with the latest equipment and look better than ever. That philosophy follows through to RED, where recording the RAW sensor data means it can be re-developed as the technology improves. During IPP2 development I was constantly amazed at how much better old RED One footage looks, and of course how fantastic the footage from the latest sensors appears.

[Image: RED with MONSTRO]

DS: One thread through all versions and iterations of RED's cameras has been REDCODE (RED's compressed raw format) but alongside this, there's RED's colour pipeline. How did your first colour pipeline compare with IPP2, which is RED's latest and greatest version?

GN: The first RED image processing pipeline was quite simple, which also made it possible to run on the early hardware. One benefit of a simple pipeline is that simple image processing rarely goes badly wrong or produces the kind of artefacts that more complex algorithms can lead to. By having a simple pipeline I learned how to get the best out of a minimal toolset, but as the industry moved at pace toward HDR and a wide range of possible output displays, it became increasingly apparent that a system that embraced output independence and HDR was necessary. IPP2 uses the original pipeline (in terms of order of operations, but with improved demosaic and highlight extension algorithms) for the initial stages of RAW development, but adds output transforms that allow the image to be adapted to any kind of display technology in terms of colour gamut or brightness capabilities.

DS: Is it true that you can apply the technology in IPP2 to the very earliest RED camera recordings and you will see an improvement?

GN: Absolutely. I took my early RED One footage (capturing one of my daughter’s first steps in 4K) and could see obvious benefits. I even went back and found some original Frankie footage and that worked too, although I did have to code up some new tools to load and parse the files. (Frankie was an early prototype RED camera.)

DS: Do you think we will see the same amount of innovation and progress in the next ten years that we saw in the last ten years? 

GN: I don’t see the pace of technological improvement slowing at all. I can certainly see that as display technologies improve we’ll be able to better appreciate the resolution benefits of RED cameras. 

DS: What do you think about this statement: "the more information we capture, the better the image will be”?

GN: It’s a statement that seems trivially true, but I think there’s more to it than that, because there’s a balance of information to be captured and some information is (relatively) more important than other information. And it’s not just the quantity of information but its quality. To add to the subtleties, some information can be traded: downsampling, for instance, trades resolution for noise reduction, but you can’t go the other way.
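The resolution-for-noise trade Nattress describes is straightforward to demonstrate: averaging groups of n uncorrelated noisy samples reduces the noise by roughly √n, so a 2× downsample in each dimension (four pixels into one) roughly halves it. A minimal sketch with made-up numbers, not real sensor data:

```python
import random
import statistics

random.seed(42)

# A flat grey "image" with additive Gaussian noise (illustrative values).
size = 256
true_value = 0.5
noisy = [true_value + random.gauss(0, 0.05) for _ in range(size)]

# A 2x downsample in each dimension averages groups of 4 pixels:
# resolution is given up, noise goes down by ~sqrt(4) = 2.
downsampled = [sum(noisy[i:i + 4]) / 4 for i in range(0, size, 4)]

noise_before = statistics.stdev(noisy)
noise_after = statistics.stdev(downsampled)

print(f"noise before: {noise_before:.4f}")
print(f"noise after:  {noise_after:.4f}")
print(f"ratio: {noise_before / noise_after:.2f}")  # roughly 2
```

The reverse trade is impossible: no amount of processing can recreate spatial detail that was never sampled, which is why the exchange only runs one way.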

[Image: RED EPIC-W with GEMINI 5K S35]

DS: Do you think that if we capture at 8K (or more!), we capture details that remain visible even at lower resolutions? Say we shot at 8K and viewed at 4K. Would 8K footage downsampled to 4K contain more information than footage shot natively at 4K?

GN: Looking at this simply, it would seem obvious that there are details small enough to be unseen in a 4K capture, visible in an 8K capture, and that would once again become invisible upon downsampling to 4K. However, it could very well be that there are small details in the shadows that are obscured by noise in a 4K capture, and that by capturing at 8K and downsampling they become entirely visible in the 4K container. We also have to remember that we’re dealing with physical cameras, and we’ll be optically low-pass filtering to avoid aliasing, which means a 4K camera cannot capture all the details that can be stored in a 4K container image. An 8K camera can capture them, however, and by downsampling we can make a very “full” 4K image container and see those details which were lost not because of the 4K image container but because of the limits of 4K image capture.
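The shadow-detail point can be sketched numerically: a faint brightness step that sits below the per-pixel noise floor at capture becomes measurable once a 2× downsample halves the noise. The numbers below are illustrative, not a model of any real sensor:

```python
import random
import statistics

random.seed(7)

# A faint "shadow detail": one patch is 0.02 brighter than another,
# buried in read noise of sigma = 0.04 (made-up figures).
detail = 0.02
sigma = 0.04
n = 1024  # "8K" capture: 4 samples per "4K" output pixel

# Simulated 8K capture of a flat dark patch and a patch with the detail.
flat = [0.10 + random.gauss(0, sigma) for _ in range(n)]
bright = [0.10 + detail + random.gauss(0, sigma) for _ in range(n)]

# Downsample 8K -> 4K by averaging groups of 4 pixels: noise roughly
# halves, so the 0.02 step emerges from the noise floor.
flat_4k = [sum(flat[i:i + 4]) / 4 for i in range(0, n, 4)]
bright_4k = [sum(bright[i:i + 4]) / 4 for i in range(0, n, 4)]

snr_8k = detail / sigma                      # per-pixel SNR at capture
snr_4k = detail / statistics.stdev(flat_4k)  # SNR after downsampling

print(f"SNR per 8K pixel:            {snr_8k:.2f}")
print(f"SNR after 8K->4K downsample: {snr_4k:.2f}")  # roughly doubled
```

A native 4K capture of the same scene would start at the lower per-pixel SNR with no extra samples to average, which is the advantage Nattress describes for oversampled acquisition.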

DS: Have you surprised even yourself with the quality of images obtainable today? 

GN: I’ve often been pleasantly surprised by the quality of images obtainable today. What I really like about the latest RED sensors is that they have an aesthetic texture to the image noise which is far from either objectionable “video” noise or the plastic look of a noise-reduced image. The high resolution of Phil Holland’s aerial work has such verisimilitude; Clark Dunbar’s 8K stock footage has incredible detail and colour that knocks your socks off; and Mark Toia’s action, colour and attention to detail just grabs you with emotion.

Read the previous parts of this series.

1. RedShark interviews RED's Chief Colour Scientist Graeme Nattress. Part 1: introduction

2. Resolution: We discuss Pixel Power with RED's Graeme Nattress

3. Going Digital: How to preserve the essence of analogue in a digital world

4. The story of REDCODE: why it’s better now than it’s ever been
