One day — soon — it may be possible to take almost perfect photos and videos with less-than-perfect lenses
Digital processing has been around for a long time. Photoshop was first released in 1990, but even before then, the mathematics needed to blur and enhance images had been known for decades. Now, we're starting to use the abundance of digital processing available to us today to correct images from sub-par lenses, and even to optimise - and all but perfect - lenses that are already very good indeed.
In fact, some industry insiders are now saying that built-in lenses can compete with removable ones, because the correction built into the camera is based on a complete knowledge of the lens. That is something you simply can't do if you allow any lens to be used: treating the lens and sensor combination as a "closed" system is always going to give you the best chance of correcting lens defects digitally.
Incredibly, some of the digital processing techniques in use in today's studios and edit suites were invented in the 1920s and 30s. They were only theoretical then, and it was not until the 1960s and 70s, in forward-looking research establishments such as Paris's IRCAM (Institut de Recherche et Coordination Acoustique/Musique, founded in 1977), that computer music composition programs started to produce real results - even if it took a week of number-crunching on those early computers to generate a few seconds of synthesised sound.
The rate of progress is truly incredible and - just as predicted by Raymond Kurzweil in his seminal book "The Singularity Is Near" - we are starting to see things happen that just a few years ago would have seemed impossible.
The slide rule was in use for over three hundred years until Hewlett-Packard released the HP-35 scientific calculator in 1972.
What has happened since then? This: our ability to calculate with a hand-held device has increased in the space of only forty-one years by several hundred million times.
The slide rule is just a single poignant example of the kind of change that surrounds us. There are dozens of other instances of this type of off-the-scale rate of progress.
So, what examples are there of things that were supposed to be impossible? And where is all this taking us? How about Melodyne, for example: software that "unmixes" music, letting you edit individual notes inside a finished recording. And of course there's Photoshop's Content-Aware Fill, which somehow "generates" new material to fit in - often seamlessly - where an image has been stretched or an object removed.
Hard to take in
In digital signal processing (that means audio as well as video) the amount of sheer computing power available is hard to take in. What would have taken a room full of servers ten years ago, and would simply have been impossible twenty years ago, is now available in a portable device that you can hold in the palm of your hand. Companies like Altera and Xilinx make chips with over three billion transistors in them. My family's first-ever colour TV, back in the seventies, had only 63 transistors (I counted them, in the service manual!).
So, if you can build a colour TV with only 63 transistors, think what you can do with forty-seven million times that number!
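That multiplier is easy to check. A quick sanity calculation (the three-billion figure above is a round number, so this is only ballpark arithmetic):

```python
# Back-of-envelope check of the transistor-count comparison above.
# Assumed figures: roughly 3 billion transistors in a modern FPGA,
# versus the 63 transistors in a 1970s colour TV.
fpga_transistors = 3_000_000_000
tv_transistors = 63

ratio = fpga_transistors / tv_transistors
print(f"{ratio:,.0f}")  # roughly 47.6 million
```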
And that's ignoring the advances in software and communications as well.
That's enough processing power to take an incoming SDI feed of 1080 video and compress it into ProRes and store it on an SSD, in real-time. There are even bigger processors than that now and the trend shows no signs of slowing down. In fact, as predicted by Kurzweil, it's speeding up.
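To get a feel for the data rates that workflow implies, here is a rough back-of-envelope sketch. The frame size, frame rate and ProRes bitrate below are illustrative assumptions, not figures from any particular product:

```python
# Rough data-rate sketch for real-time 1080 capture to ProRes.
# Assumptions: 1080p30, 10-bit 4:2:2 active video, and a ProRes 422 HQ
# target bitrate of roughly 220 Mbit/s for that format.
width, height, fps = 1920, 1080, 30
bits_per_pixel = 20            # 10-bit 4:2:2 averages 20 bits per pixel

uncompressed_bps = width * height * fps * bits_per_pixel
prores_hq_bps = 220_000_000    # approximate target bitrate

print(f"uncompressed: {uncompressed_bps / 1e9:.2f} Gbit/s")      # 1.24 Gbit/s
print(f"compression ratio: {uncompressed_bps / prores_hq_bps:.1f}:1")  # 5.7:1
```

On these assumptions the codec has to swallow well over a gigabit of picture data every second, continuously - which is exactly the kind of job those multi-billion-transistor chips make routine.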
And now we are starting to see things that really did look like magic a few years ago, and one of them is the subject of this article: digitally correcting lens aberrations so that you end up with a better image than you could expect through optics alone.
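As a taste of what that correction involves, here is a minimal sketch of one common case: removing radial (barrel or pincushion) distortion by remapping pixels with the widely used Brown-Conrady radial model. The coefficients k1 and k2 here are invented for illustration; a real system would use values measured for the specific lens-and-sensor combination:

```python
import numpy as np

def undistort(image, k1=-0.2, k2=0.05):
    """Remove radial distortion by resampling with the Brown-Conrady model.

    k1, k2 are illustrative radial coefficients, not real lens data.
    """
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalised coordinates of every output (corrected) pixel.
    ys, xs = np.mgrid[0:h, 0:w]
    xn = (xs - cx) / cx
    yn = (ys - cy) / cy
    r2 = xn**2 + yn**2
    scale = 1 + k1 * r2 + k2 * r2**2
    # For each corrected pixel, look up where it came from in the
    # distorted source frame (nearest-neighbour for brevity).
    src_x = np.clip(xn * scale * cx + cx, 0, w - 1).astype(int)
    src_y = np.clip(yn * scale * cy + cy, 0, h - 1).astype(int)
    return image[src_y, src_x]

frame = np.random.rand(120, 160)   # stand-in for a video frame
corrected = undistort(frame)
print(corrected.shape)             # same dimensions as the input
```

A production pipeline would use sub-pixel interpolation rather than nearest-neighbour lookup, and would correct chromatic aberration and vignetting alongside geometry - but the principle is the same: if you know the lens precisely, you can compute the distortion backwards out of the image.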