Why still camera and video resolution are different

Written by David Shapton


With so many still cameras capable of shooting high resolution video, it's important to understand why video and still resolutions need to be treated differently

Sony's latest camera has a stupendous 61-megapixel resolution. That's well into medium format territory (although I'd argue that true medium format performance isn't just about pixel count, but sensor size as well). It's nearly twice the number of pixels needed for 8K video.

But I'm going to talk about an era when that sort of pixel count would have sounded like science fiction. A distant age that was between fifteen and twenty years ago. That's a time period that wouldn't even register on a geological scale, but in technology, it's another time altogether. 

During this period, we've transitioned from analogue video to digital. Anyone who remembers video quality from thirty years ago is probably, like me, still astonished at the accomplishment and finesse of modern cinema cameras. Even consumer video devices (and, yes, that includes smartphones) can shoot video that blows away anything but analogue film from thirty years ago. 


Image from Sony Cybershot with approximately 2.5 megapixels (taken in 2002)

I've recently been looking at some still pictures I took with a Sony Cybershot camera in around 2002. It had a resolution of around 2.5 megapixels. That's pretty low for a still shot, but at the time it was OK because I wasn't using the camera for anything other than snapshots, and the advantages (no film, and being able to see the pictures immediately) made it quite acceptable. Even at the time, though, it was obvious that the quality wasn't great.

Back then I was just starting to see the first examples of High Definition. And I was blown away. The moving images looked stunning, even though we would probably look down our noses at them now, having become used to 4K and 8K video. It took me a while to realise that these pictures were roughly the same resolution as my Sony Cybershot images. 

So why did the HD video appear so much better than the still shots? 

Well, partly because the only HD equipment available then was very expensive and typically used great lenses. You can't expect the lens on a consumer camera to be in the same league as the one on a Sony broadcast HD camera. But there was another difference, and it's one whose importance can't be overstated.

It's that HD video images move. Apologies for stating the obvious here, but this matters. 

Let's go back even further. Do you remember what it was like using VHS tape? By any objective measure, it was a terrible video format, with each frame boasting what in digital terms would be described as 0.16 megapixels. There was tape hiss. There was crossover noise. And yet it was watchable in the sense that it was better than nothing at all. 


VHS simulation courtesy of RAREVISION VHS simulation app

But frame grabs from VHS were simply awful. They looked much worse than the moving video. The reason is simply that if you show multiple frames, all subtly different (if only because of the noise in the images), they will look better. The improvement comes from several things. First, you're giving the viewer more information. Even if it's a video of a plant in a pot, the noise in the image differs from frame to frame. This can average out errors and sometimes even add more apparent information through a process called dithering.

Even though dithering is a technique that's often used in digital recording to "blur" the gaps between the levels in the quantisation process, it can also help to uncover extra detail even in an analogue recording. It's hard to explain, but let's say that the camera's recording the image of a flower with a complicated petal structure. The resolution of the image will be limited by the resolution of the entire camera signal path. But because of the noise in the picture, it's possible that certain otherwise invisible elements in the flower might influence the noise pattern so that, over time, the extra details become visible. 
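The effect is easy to simulate. The sketch below is a minimal illustration of the principle, not the mechanism in any particular camera: it quantises a smooth brightness ramp whose true values fall between two levels. A single clean frame snaps every pixel to the nearest level and the ramp vanishes, while averaging many noisy frames recovers detail finer than one quantisation step, because the noise acts as dither.

```python
import numpy as np

rng = np.random.default_rng(42)

# A smooth "scene": brightness ramps gently between two quantisation levels.
scene = np.linspace(10.2, 10.8, 256)

# Quantise one noise-free frame: every pixel snaps to level 10 or 11,
# and the gentle ramp disappears entirely.
single_frame = np.round(scene)

# Simulate 500 frames, each with a little sensor noise added *before*
# quantisation. Each pixel now lands on 10 or 11 with a probability
# that tracks its true value - the noise is acting as dither.
frames = np.round(scene + rng.normal(0.0, 0.5, size=(500, scene.size)))

# Averaging the noisy quantised frames recovers the ramp.
averaged = frames.mean(axis=0)

print("levels in one clean frame:", np.unique(single_frame))
print("mean error, single frame: %.3f" % np.abs(single_frame - scene).mean())
print("mean error, 500-frame average: %.3f" % np.abs(averaged - scene).mean())
```

The averaged error comes out far smaller than the single-frame error, even though every individual frame was quantised just as coarsely. That's the same reason a run of noisy VHS frames can convey detail that no single frame grab contains.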

Whether or not this is the case, it's pretty clear to me that having 24, 25 or 30 images per second gives a much better perceived image than a single frame from that video ever can. 

And that's why 4K video looks far better than its 8-megapixel resolution would lead you to expect. And 8K video, with its native resolution of around 33 megapixels, is simply astonishing.
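For reference, the pixel counts behind the figures quoted in this piece are easy to check. (The VHS dimensions here are a rough digitised equivalent chosen to match the 0.16-megapixel figure, since the format itself was analogue.)

```python
# Frame dimensions (width x height) for the formats mentioned above.
formats = {
    "VHS (approx. digitised)": (333, 480),
    "HD 1080": (1920, 1080),
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

megapixels = {name: w * h / 1e6 for name, (w, h) in formats.items()}

for name, mp in megapixels.items():
    print(f"{name}: {mp:.2f} megapixels")
```

HD works out at about 2.1 megapixels, which is why early HD footage really was in the same resolution ballpark as a 2.5-megapixel Cybershot still.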
