What is really going on with Sony’s Clear Image Zoom?

Written by David Shapton


Sony’s proprietary digital zoom technology, Clear Image Zoom, is shrouded in mystery, beneath which there could be some very interesting things going on indeed.

I was talking to some very helpful Sony staff about the new Sony FS5 last week, and part of our conversation gave me a distinct feeling of déjà vu. There’s something about recent Sony cameras that nobody talks about much, but which I’m now pretty sure is not only not a myth, but a very important development in video technology, sitting there, right in the midst of these very popular cameras.

What it is may surprise you (and I apologise for this sounding like a perfect clickbait headline!).

The technology concerned is Clear Image Zoom, which, at first sight, uses clever but slightly less-than-exciting digital signal processing to zoom digitally into an image that is already at the limit of the optical zoom provided by the lens.

Digital zooming is traditionally - and rightly - frowned on by professionals because, almost by definition, it results in a loss of quality. With digital zooming, you’re deriving a zoomed image by essentially reading between the pixels. This “new information” isn’t really information at all, because it is merely inferred from the pixels around it. Anything in the real world that wasn’t captured by the original pixels won’t show up in the digitally zoomed image either.
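To make that limitation concrete, here is a minimal sketch (in Python, and of course nothing to do with Sony’s actual firmware) of conventional digital zoom by bilinear interpolation: every new pixel is just a weighted average of its neighbours, so no genuinely new detail can appear.

```python
def bilinear_zoom(image, factor):
    """Digitally 'zoom' a grayscale image (a list of lists of floats)
    by interpolating between existing pixels. Every output value is a
    blend of input values - no new information is created."""
    h, w = len(image), len(image[0])
    new_h, new_w = int(h * factor), int(w * factor)
    out = []
    for y in range(new_h):
        # Map the output coordinate back into the source image.
        sy = min(y / factor, h - 1)
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(new_w):
            sx = min(x / factor, w - 1)
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # Weighted average of the four surrounding source pixels.
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bottom = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bottom * fy)
        out.append(row)
    return out

# A 2x2 image zoomed 2x: the new pixels are blends, not new detail.
zoomed = bilinear_zoom([[0.0, 1.0], [1.0, 0.0]], 2)
```

Note that every output value lies between the input values that surround it - which is exactly why a digitally zoomed image can never be sharper than the original capture.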

But from what I’ve heard - from two sources now within Sony - there is more going on than simple digital zooming. There must be, because Sony claims that Clear Image Zoom gives you almost the same quality as the original.

Here’s how Sony itself defines Clear Image Zoom:

"Clear Image Zoom is a function that uses the Sony® exclusive By Pixel Super Resolution Technology. It allows you to enlarge the image with close to the original image quality when shooting still images. The camera first zooms optically to the maximum optical magnification, then uses Clear Image Zoom technology to enlarge the image an additional 2x, producing sharp, clear images despite the increased zoom ratio."

Well, not much insight there. It’s essentially saying “We produce better images using Clear Image Zoom”. It doesn’t explain how it works.

A little more research produced this explanation on a forum:

"Clear Image Digital Zoom: the processor compares patterns found in adjacent pixels and creates new pixels to match selected patterns, resulting in more realistic, higher quality images"

But even this barely hints at the true story.

It seems that this is about as far in print as Sony is willing to go with its explanation, and yet, I don’t think the real technology behind this technique, which is the really exciting bit, is a secret. It’s just that it’s not massively in the open yet.

What actually powers Clear Image Zoom is as surprising as it is impressive. There is a database of common objects held in the camera, and the processor uses this data to recognise objects in the scene and to help it zoom more accurately.

Let’s say that the camera recognises that the object in a scene is a bicycle and not, for example, a fish. It therefore knows that the two round objects are wheels with spokes. Knowing this, it can recreate the scene, based on what it thinks it is, more accurately, at a higher level of zoom.
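What that might look like in software - and this is purely a toy illustration of example-based super-resolution in general, not Sony’s actual algorithm - is a lookup table of known high-resolution patterns: instead of averaging pixels, the processor matches each captured low-resolution patch against a stored database and substitutes the detail the database says “should” be there.

```python
def matched_upscale(patch, database):
    """Example-based super-resolution in miniature: find the known
    low-res pattern closest to `patch` and return its stored high-res
    counterpart. The 'new' detail comes from prior knowledge, not
    from the captured pixels themselves."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_low = min(database, key=lambda low: distance(low, patch))
    return database[best_low]

# A hypothetical two-entry database: each maps a 2-pixel low-res
# pattern to a 4-pixel high-res one.
database = {
    (0.0, 1.0): (0.0, 0.2, 0.8, 1.0),  # sharp edge, e.g. a spoke
    (0.5, 0.5): (0.5, 0.5, 0.5, 0.5),  # flat region
}

# A slightly noisy edge patch is matched to the sharp-edge entry,
# so the output is sharper than interpolation could ever make it.
result = matched_upscale((0.1, 0.9), database)
```

The point of the sketch is that the output can contain values the input never had - the sharpness comes from what the system already knows, which is exactly the leap that separates this approach from ordinary digital zoom.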

You probably realise that none of this is trivially easy. In fact, to do this, you need elements of artificial intelligence and machine vision. It’s not the sort of thing you’d expect to find in a camera.

Or maybe it is, because, here’s the thing.

Our brains work like this. They don’t work with pixels. They work with objects. Can you see a similarity here?

Well, I don’t want to read too much into this Sony technology. I’m convinced that before we have 8K in the home, we’ll have some other sort of video technology that doesn’t rely on ever-increasing pixel counts, and that it will use our own brain’s perceptual database to create images that are brighter, sharper and more brilliant than anything we’ve seen before.

If that sounds like science fiction, well, how likely did you think it was that a Sony camera that’s on sale now could tell the difference between a fish and a bicycle?

(I just wanted to say that all of this is based on snatched conversations with Sony employees. I can’t be sure that I’ve got it completely right. But let’s see. I’ll try to get the full story on this as soon as I can.)
