
Your phone shouldn’t be able to take photos like this, but it can


This year’s iPhone 12 release is showing just how significant computational photography really is.                                                                                                      

Lamborghini taken with iPhone 12 Pro Max. Image: Peter McKinnon screen grab.

Phones have long been able to take a decent picture, but it is both obvious and unavoidable that they have improved year on year, and that they will continue to improve. The examples being published from the iPhone 12 Pro as reviewers get their hands on the phone are often remarkable. But on paper the device really shouldn’t be able to get anything like the results that it does.

Improvements in the tiny lenses used will help matters, as will improvements to sensor design. But the sensor in an iPhone, or any other smartphone, is still tiny. It really should not work as well as it does. The key to why the iPhone 12 can do what it does is processing.

A modern phone camera has some parallels with a modern fighter jet. A modern jet fighter like the F-35 Lightning is aerodynamically unstable. If it were equipped with only basic mechanical controls it would be unable to fly, because it would be uncontrollable. Ironically, the instability inherent in such a design is what makes the plane aerodynamically efficient and agile. But the only way these abilities can be accessed is by having a computer in charge of the control system. The pilot gives the plane control input, but it is the computer that works out how to perform the manoeuvre without crashing.

But the biggest difference this computer control system makes is that the plane can be made to do moves that simply could not be achieved in a traditional aircraft. If the pilot commands the plane to perform a barrel roll, the computer takes the amount of control input as an instruction for a particular roll rate. When the pilot lets the stick go, the computer interprets this as an instruction to stop the roll. It can do this instantly, without any need for the pilot to input a counter movement to stop the roll. Not only that, but the computer can ensure that the aircraft has zero yaw or slip during moves.
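To make the analogy concrete, here is a minimal sketch of that rate-command idea in Python. The function name, gain and rate limit are purely illustrative assumptions, not taken from any real flight computer:

```python
def roll_command_loop(stick_input: float, measured_roll_rate: float,
                      max_roll_rate: float = 300.0, gain: float = 2.5) -> float:
    """Translate lateral stick deflection into an aileron demand.

    stick_input: -1.0 to 1.0, the pilot's stick position.
    measured_roll_rate: current roll rate in degrees per second.
    Returns an aileron deflection demand (arbitrary units).
    """
    # The stick commands a roll *rate*, not a control surface position.
    commanded_rate = stick_input * max_roll_rate

    # A centred stick means a commanded rate of zero, so the controller
    # drives the roll rate back to zero on its own -- no counter input
    # from the pilot is needed.
    rate_error = commanded_rate - measured_roll_rate
    return gain * rate_error

# Stick released mid-roll: the controller demands opposite aileron.
print(roll_command_loop(stick_input=0.0, measured_roll_rate=120.0))  # -300.0
```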

A plane really shouldn’t be able to do the moves that an F-35 is capable of, yet computational flight control makes it possible. It’s the same for the cameras in your phone, and it means that the quality of a phone photo, in the right circumstances, easily rivals that of a mirrorless or DSLR camera. It really shouldn’t, but it does.

This matters for people for whom photography is a large part of their lives, simply because the camera on your phone is the one you are likely to have with you at all times. With telephoto as well as ultra-wide lenses now featuring, the most common focal lengths are being covered even as the underlying image quality increases.

Pictures taken before you've pressed the shutter

In many cases the in-camera processing of images is now performed before you have even pressed the shutter. Focus has become much faster, and genuinely usable nighttime photography is now possible. The results that new phones achieve in fractions of a second would usually take multiple exposures and a ton of Lightroom tinkering, even with the latest dedicated mirrorless cameras.
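To give a rough idea of what is happening under the hood, nighttime modes conceptually capture a burst of short exposures and merge them. Here’s a minimal Python sketch of that merging step, using simulated frame data and leaving out the frame alignment a real pipeline would perform first:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of noisy exposures into one cleaner image.

    Averaging N frames cuts random sensor noise by roughly sqrt(N),
    which is how a tiny sensor can fake a much longer clean exposure.
    Real pipelines align the frames first to cancel hand shake; that
    step is omitted here.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Simulated burst: the same dim, flat patch with heavy sensor noise.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 20.0)
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]
print(merge_burst(burst))  # values cluster tightly around 20
```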

These sorts of advances do not make everyone into a photographer. But for people who are knowledgeable and skilled in the subject, they do allow the user to create images on the fly that would ordinarily be unavailable to them.

I’ve said it many times before on RedShark that these days I very often don’t regret leaving my big camera behind on trips, simply because I know I can get a really good image with my phone. The video below from Peter McKinnon demonstrates some results which, if you were not told what they were taken on, you’d never guess came from a phone. The depth estimation in portrait mode has become so good that for the most part you couldn’t tell the shots hadn’t been taken on a mirrorless or DSLR. In previous iterations the fake depth of field effect was a bit clumsy. It seems it too has been improved quite substantially.
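Conceptually, portrait mode pairs an estimated per-pixel depth map with a synthetic blur. Here is a minimal single-channel sketch of that idea in Python; the function name, the single blur radius and the linear blend are simplifying assumptions, since a real pipeline varies the blur per pixel with depth:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait_blur(image, depth, focus_depth, max_sigma=8.0):
    """Blend a sharp image with a blurred copy, weighted by depth error.

    image: HxW float array (one channel, for simplicity).
    depth: HxW estimated depth map, in the same units as focus_depth.
    Pixels near focus_depth stay sharp; the further a pixel sits from
    the focal plane, the more of the blurred copy it receives.
    """
    blurred = gaussian_filter(image, sigma=max_sigma)
    # 0 where in focus, rising towards 1 as the depth error grows.
    weight = np.clip(np.abs(depth - focus_depth) / (np.ptp(depth) + 1e-6),
                     0.0, 1.0)
    return (1.0 - weight) * image + weight * blurred

# Hypothetical demo: a gradient scene whose right-hand side is further away.
image = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
depth = np.tile(np.linspace(1.0, 5.0, 64), (64, 1))  # metres, say
result = fake_portrait_blur(image, depth, focus_depth=1.0)
```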

You could argue that a modern smartphone costs the same as, or more than, some decent dedicated cameras, and you’d be right. But it should never be forgotten that the price you pay for a smartphone gets you a device you can effectively run your life from. It does a whole ton of stuff, which is why it’s important to compare apples with apples when it comes to price. You could very easily turn that argument on its head by saying that the camera costs as much as a smartphone but can’t get your emails, download movies, play games, or do augmented reality.
