
We can now calculate our way out of technology's limitations


Image: Lytro. Many of technology's limitations can be overcome with computational power

With even small devices such as smartphones becoming incredibly powerful video imagers, will sheer computational power make many of the limitations of our cameras irrelevant?

Big chip cameras are all the rage right now. Despite the inherent practicality of an ENG-style rig, often with a superior three-chip prism-split sensor configuration, the desire for large chip cameras shows no sign of abating. It would be right to say that for news, and some corporate applications, the ENG-style camera is still king of the roost, at least if you want to be fast and effective at what you are doing. But for other applications, even for work that would traditionally have suited such cameras, the large chip options have now taken over, regardless of their relative efficiency at getting the job done.

The effect of this change is quite startling. A quick look at sites such as CVP or B&H Photo shows that the options for ‘cinema’ style cameras and systems with large sensors far outweigh the available choice of ENG cameras. The ranges have become much more focussed, and the type of user who may want an ENG-style rig is now considered fairly niche. In some circles such a camera is even viewed as old fashioned. How the mighty have fallen, it would seem.

However, step-change technological development is usually a game changer, an enabler and an equaliser. With the right information it is possible to simulate pretty much anything with a computer. With developments such as the Lytro Cinema Camera, we are on the cusp of being genuinely able to adjust our lighting in post. Indeed, Apple’s new iPhones have AI algorithms that allow basic changes in lighting looks to be made when adjusting photographs after capture.

One thing we can most certainly do already, even with a phone, is adjust the depth of field after the fact, because some current camera phones capture high resolution depth information.
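To make that idea concrete, here is a minimal sketch of how post-capture depth of field can work, assuming an ordinary RGB frame plus an aligned depth map. The file names, blur parameters and layer count are all hypothetical, and real implementations are far more sophisticated, but the principle is simply that blur strength scales with each pixel's distance from a focal plane you pick after shooting:

```python
# Sketch: synthetic depth of field from an RGB frame plus a depth map.
# File names and parameters are hypothetical; any camera that records
# aligned depth (e.g. a dual-lens phone) could supply the inputs.
import cv2
import numpy as np

rgb = cv2.imread("frame.png").astype(np.float32)         # H x W x 3
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)    # H x W, 0-255
depth = depth.astype(np.float32) / 255.0

focus_depth = 0.4   # chosen "in post": which plane stays sharp
max_blur = 21       # widest blur kernel (odd) = widest simulated aperture

# Blur strength grows with distance from the chosen focal plane.
blur_amount = np.abs(depth - focus_depth)
blur_amount = blur_amount / (blur_amount.max() + 1e-6)

# Precompute a stack of progressively blurred frames (odd kernel sizes),
# then pick, per pixel, the layer matching that pixel's blur strength.
stack = [cv2.GaussianBlur(rgb, (k, k), 0) for k in range(1, max_blur + 1, 2)]
indices = np.clip((blur_amount * (len(stack) - 1)).astype(int),
                  0, len(stack) - 1)

out = np.zeros_like(rgb)
for i, layer in enumerate(stack):
    mask = indices == i
    out[mask] = layer[mask]

cv2.imwrite("refocused.png", out.astype(np.uint8))
```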

Now, you don’t have to possess a great scientific mind to be able to see the greater application of such things. Right now the Lytro Cinema Camera is the size of a room, but it will get smaller. These things always do, and before you know it you’ll have one in your back pocket. Do not try to tell me that this won’t happen because it will be an argument you simply cannot win. The march of technology will ensure it to be the case. And sooner rather than later.

When it does, and high resolution depth information is as normal on every video camera as having an iris dial, the capture area of the light sensing chip in your camera may well become irrelevant. Completely.


The Lytro Cinema Camera may be huge now, but it will get smaller and the technology will spread

Ahh, but what about those pesky issues of noise that plague smaller chips, and the ability of larger sensors in most cases to be far superior in low light? Well, okay, you may well have a point, but only if we discount the ability of noise reduction to become ever better than it is now, including the involvement of AI. After all, artificial intelligence routines can now recognise objects. DJI's drones, for example, know what a person looks like, as well as what a bicycle looks like, among many other objects, and so a camera system utilising such techniques will be better able to distinguish between what is an object and what is noise. Theoretically, depth information could be used for noise reduction purposes too: with depth information you have a hard and fast record of what most definitely is an object in the scene, and the detail contained within it, so both can be differentiated from sensor noise. The one caveat is that depth information cannot tell you about lens flares, for instance. But that is another reason for AI to become involved, and for different noise reduction techniques to be combined into one powerful whole.
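As a rough illustration of the depth-guided idea, here is a minimal sketch, assuming a noisy frame and an aligned depth map (file names and thresholds hypothetical). Depth discontinuities are treated as real object boundaries, so smoothing can be heavy inside surfaces and gentle at the edges; production-grade denoisers are vastly more complex, but the underlying reasoning is the same:

```python
# Sketch: depth-guided noise reduction. Depth edges mark genuine object
# boundaries, not sensor noise, so we smooth aggressively inside surfaces
# and only lightly where real detail lives. Inputs are hypothetical.
import cv2
import numpy as np

noisy = cv2.imread("noisy.png").astype(np.float32)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)

# Depth discontinuities ~ object boundaries; dilate to protect a margin.
edges = cv2.Canny(depth, 50, 150)
edge_mask = cv2.dilate(edges, np.ones((5, 5), np.uint8)) > 0

strong = cv2.GaussianBlur(noisy, (9, 9), 0)   # heavy smoothing, flat areas
gentle = cv2.GaussianBlur(noisy, (3, 3), 0)   # light smoothing, boundaries

out = np.where(edge_mask[..., None], gentle, strong)
cv2.imwrite("denoised.png", out.astype(np.uint8))
```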

While none of this necessarily means that a camera with a chip that can fit inside a GoPro will take on Hollywood, it does mean that camera design can become much more flexible, especially at the lower end of things. Action cameras will be able to offer a much higher quality picture, perhaps using computational video from multiple lenses. You could have a 2/3” 3-chip camera with excellent colour reproduction, plus low noise and extremely shallow depth of field should you need it, all while using extremely flexible zoom lenses.
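As a toy example of that multi-lens idea, here is a sketch assuming hypothetical, already-aligned frames captured simultaneously by four small lenses. Averaging independent exposures cuts random noise by roughly the square root of the number of frames, which is one way small sensors can computationally fake a bigger one:

```python
# Sketch: merging simultaneous frames from multiple small lenses.
# Assumes four hypothetical, pre-aligned per-lens frames; with N
# independent exposures, random noise drops by roughly sqrt(N).
import cv2
import numpy as np

paths = [f"lens_{i}.png" for i in range(4)]   # hypothetical inputs
frames = [cv2.imread(p).astype(np.float32) for p in paths]

merged = np.mean(frames, axis=0)              # noise roughly halved with 4
cv2.imwrite("merged.png", merged.astype(np.uint8))
```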

Perhaps, if and when light field technology becomes small enough to use practically in a video camera, sensor sizes and other specifications will simply become an antiquated aspect of the past. Already the Lytro system has shown that you can make motion blur and frame rate decisions that you take in post production. I have long felt that our obsession with camera sensor types is a focus on something that probably has only a limited lifespan, and that other, better ways of imaging will give us a step change similar to the move from valves (tubes) to semiconductors. Matrix-style direct cerebral cortex injection aside, we still haven't exhausted the possibilities for 2D display just yet.
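To illustrate how frame rate and motion blur can become post-production decisions, here is a sketch assuming a hypothetical clip captured at 240fps (the light field idea generalised to plain high-speed capture). Each 24fps output frame averages the source frames falling inside a simulated 180-degree shutter window:

```python
# Sketch: choosing frame rate and motion blur in post from a high-fps
# capture. Input clip, rates and codec are hypothetical. A 180-degree
# shutter at 24fps means averaging the first half of each group of
# 240fps source frames.
import cv2
import numpy as np

cap = cv2.VideoCapture("highfps.mp4")
in_fps, out_fps = 240, 24
frames_per_out = in_fps // out_fps        # 10 source frames per output frame
shutter = frames_per_out // 2             # 180-degree: average the first 5

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = None
buffer = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame.astype(np.float32))
    if len(buffer) == frames_per_out:
        # Average only the frames inside the simulated shutter window.
        blurred = np.mean(buffer[:shutter], axis=0).astype(np.uint8)
        if writer is None:
            h, w = blurred.shape[:2]
            writer = cv2.VideoWriter("out24.mp4", fourcc, out_fps, (w, h))
        writer.write(blurred)
        buffer = []

cap.release()
if writer is not None:
    writer.release()
```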

The fact is that with ever greater processing power and data throughput, we can actually calculate our way out of many of the limitations that our equipment places upon us. We have already seen the start of this. We can only imagine where it will end.

And to whet your appetite for more, take a look at Lytro's impressive demonstration of what its camera can do below.

Tags: Technology
