Computational Imaging is coming and will change the face of photography and video

Written by David Shapton

The Light L16 was the first commercially available computational stills camera

All digital imaging is computational to some extent, but it's going to mean much more than that in the near future. 

Let me explain.

The process of making a digital image involves much more than reading a "painting by numbers" value off a camera sensor. For a start, sensors are not digital devices. They're strictly analogue, in that their job is to produce a voltage that's proportional to the light that lands on them. In this sense, they're no more digital than a banana. 

So the next stage is always to digitize this voltage. That's when the number crunching can start. 

And it needs to, because what comes off a sensor is nothing like the final picture. It needs all kinds of processing to even make it look like a recognisable image, never mind a good one. 

We won't go into that here, but it's probably sufficient to say that this is a tricky and highly evolved process that involves complicated mathematics and a huge amount of experience, trial and error. 

But camera designers are used to this and the results that we see today are remarkable. Top-end cameras will produce images that are equal to or better than film in almost every measurable way. Cheaper and smaller cameras can make pictures that are way better than you'd expect. Even phones - actually, I should say especially phones - can be responsible for images that will take your breath away. And this is where we're starting to see the biggest benefits of computational imaging.

The phone comparison

Phones are, let's face it, not exactly the best shape for shooting photos and videos - although they do have a rather handy EVF (Electronic View Finder) on one side. But they are ideal for computational imaging. And this is it in a nutshell: if you combine extremely powerful processing with an imaging system that's constrained by size and shape, you get the perfect use case for using a computer to improve an image. 

And this is exactly what modern phones do. They can reduce noise by combining multiple images from the same lens to make a picture that's very much better than could have been achieved otherwise. Combining images reduces noise because you can average the same pixel across several "frames". 
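To see why averaging works, here's a minimal sketch - assuming, purely for illustration, Gaussian sensor noise and frames that are already perfectly aligned (real pipelines are far more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "scene": a flat grey patch (true value 0.5 everywhere).
scene = np.full((64, 64), 0.5)

# Simulate 8 exposures of the same scene, each with fresh sensor noise.
frames = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(8)]

# Averaging the same pixel across N frames cuts noise by roughly sqrt(N).
stacked = np.mean(frames, axis=0)

print(float(np.std(frames[0] - scene)))  # noise of one frame (around 0.05)
print(float(np.std(stacked - scene)))    # noise after averaging (roughly 0.05 / sqrt(8))
```

With eight frames the residual noise should drop to roughly a third of a single exposure's, which is exactly the gain phone stacking modes exploit.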

Even this is more complicated than it seems, because in order to combine images, they have to be aligned, and that's not likely to happen with a handheld phone. So the next stage is to find corresponding pixels on consecutive pictures and move and crop one or the other until the scenes match completely. 
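A toy version of that alignment step - hypothetical code, using a brute-force search for the best whole-pixel shift, where real phones use far more sophisticated tile-based matching - might look like this:

```python
import numpy as np

def align(ref, img, max_shift=4):
    """Find the integer (dy, dx) shift that best matches img to ref,
    by exhaustively trying every small shift and keeping the one with
    the lowest mean squared difference."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((ref - np.roll(img, (dy, dx), axis=(0, 1))) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
shifted = np.roll(ref, (2, -3), axis=(0, 1))  # simulate hand shake
print(align(ref, shifted))  # recovers the opposite shift: (-2, 3)
```

Once the shift is known, the frames can be moved back into register and averaged as above.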

Even though this is quite laborious, modern phones do it in the background so well that we don't even notice it. We just see a surprisingly good picture. 

Beyond this, phones are starting to use AI and Machine Learning to improve the lighting in a scene after the picture's been taken. It's almost as if the system compares your picture with its own "knowledge" of how an ideal picture should look, and adjusts it to make it closer to the idealised image. 

Depth calculation

Cameras with multiple lenses are able to provide even more information to the computational imaging system, so they'll allow an artificially optimised depth of field, as well as even more resolution, as the images from each of the lenses are combined into a final "composite" image. 
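The depth part comes from simple geometry: the same point lands in slightly different positions behind each lens, and that disparity converts directly to distance. A minimal sketch, with made-up but plausible numbers:

```python
# Depth from a two-lens pair by similar triangles:
# depth = focal length x baseline / disparity.
# Both figures below are illustrative assumptions, not real specs.
focal_px = 2800.0    # focal length expressed in pixels
baseline_m = 0.012   # 12 mm between the two lenses

def depth_from_disparity(disparity_px):
    """Larger disparity means the subject is closer."""
    return focal_px * baseline_m / disparity_px

print(round(depth_from_disparity(28.0), 2))  # nearby subject: 1.2 m
print(round(depth_from_disparity(7.0), 2))   # farther subject: 4.8 m
```

A per-pixel map of those depths is what lets the phone blur the background convincingly after the shot.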

What's striking is that we're only just at the beginning of the era of computational imaging. To take things to an extreme, it's theoretically possible that if you know the optical transfer function of a beer glass, then with enough processing power and resolution in the sensor, you'll get a good image from that too. 
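The beer-glass idea isn't as far-fetched as it sounds: if you know the transfer function, you can largely invert it. A classic (and here deliberately simplified) approach is Wiener deconvolution, sketched below with a box blur standing in for the bad optic:

```python
import numpy as np

def wiener_deconvolve(blurred, otf, k=0.001):
    """Invert a known optical transfer function in the frequency domain.
    k damps the frequencies the optic barely transmits, so the inverse
    doesn't blow up where the OTF is near zero."""
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(otf) * B / (np.abs(otf) ** 2 + k)))

rng = np.random.default_rng(2)
img = rng.random((64, 64))

# Stand-in for a terrible optic: a 3x3 box blur, expressed as an OTF.
psf = np.zeros((64, 64))
psf[:3, :3] = 1 / 9
otf = np.fft.fft2(psf)

blurred = np.real(np.fft.ifft2(otf * np.fft.fft2(img)))
restored = wiener_deconvolve(blurred, otf)

# Restoration lands far closer to the original than the blurred copy did.
print(np.mean((restored - img) ** 2) < np.mean((blurred - img) ** 2))
```

The practical catch, as the article says, is that real "beer glasses" have transfer functions that suppress a lot of detail, so the sensor needs enough resolution and dynamic range for anything to be left to recover.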

More realistically, the more information you can present to a computational imaging system, the better the potential results, and there are almost no constraints on how that information gets there. It might come from multiple cameras, or from multiple lenses on the same camera. Pretty soon, with the help of AI, we're going to be able to accurately and convincingly move the camera position in post production. It's just a matter of interpolating between multiple camera angles. You can't do this with conventional calculations alone: it needs the help of an AI system. But that's exactly what optically-trained AI systems will be able to do very soon. 

We're on the threshold of not just one, but several revolutions in imaging, all of which are bigger than anything we've seen before. 

So, there's one unavoidable question: should lens manufacturers be worried? My feeling is not unduly. Artists - and people in general - love the feel of "things", and treasure the characteristics of tools that have character. Lenses aren't going to go away, but computational imaging will fill in the gaps in what we're able to do with imaging in ways that we're only beginning to see today.


