
Before we all leap into 4K, maybe we need to understand resolution better!

Written by David Shapton | Feb 2, 2015 2:00:00 PM
Understanding Resolution

Here's another chance to read this article, the content of which seems to get more relevant with each passing day: "Megapixels" doesn't equal "resolution". Not by a long way!

We're going to be hearing a lot about image resolution over the next year or so. Not just because of 4K and 8K, but because still cameras continue their onwards march into the domain of professional video. And as they do, the resolution of their sensors, and how that is reduced to video "standard" resolutions, has a big effect on the ultimate quality of the pictures that they produce.

I'm going to focus on just a few points that I think are important here.

Linear resolution: pixels in a straight line

First, the resolution of a picture is not a matter of how many pixels there are on the sensor, but how many of them there are in a straight line. Don't forget that "megapixels" is about an area. 4K is only twice the linear resolution of HD, but it has four times the number of pixels. This is why it's so important to get all your 4K shots in focus and without any blur - because even half a pixel-width's movement or focus error will reduce your resolution to HD. There's an illustration of this later in the article.
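To put rough numbers on that: UHD "4K" is 3840 x 2160 pixels, or about 8.3 million pixels, while HD is 1920 x 1080, or about 2 million. That's four times the pixels, but 3840 is only twice 1920, so only twice the resolution along any straight line.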

Second, forget about comparing megapixels directly. Again, you need to look at the linear dimensions, which means your calculations about resolution will have to involve square roots.

For example, the Canon 1DC and 1DX have 18-megapixel sensors, while the new Sony A7S has only 12 megapixels. That sounds like the Sony has only two thirds (66%) of the number of pixels. That's true, but it has four fifths (80%) of the linear resolution. In other words, if you crop away the outer 20% of the Canon picture, it will have the same linear resolution as the Sony. That's why 8-megapixel stills grabbed from 4K video can look so good.
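If you want to run that kind of comparison yourself, here's a minimal Python sketch (my own illustration, not anything from Canon or Sony; the function name is just for demonstration). It assumes both sensors have the same aspect ratio, so that linear resolution scales with the square root of the pixel count:

    import math

    def linear_ratio(megapixels_a, megapixels_b):
        # Linear resolution scales with the square root of the pixel count,
        # assuming both sensors share the same aspect ratio.
        return math.sqrt(megapixels_a / megapixels_b)

    # 12 megapixels (Sony A7S) versus 18 megapixels (Canon):
    print(round(linear_ratio(12, 18), 2))   # about 0.82, i.e. roughly 80%

Two thirds of the pixels, but about four fifths of the pixels in a straight line.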

All of the above is true only for still images. For moving images, it's very different.

Moving images

How we perceive moving images is all about how much information we receive over a period of time. With stills, you can stare at the image as much as you like, but you won't glean any additional information.

But if you're watching video, you will get new information every 24th, 25th, 30th, 50th or 60th of a second. That's significantly more information than you get from a still image, and it can work wonders for the perceived resolution.

That's if you're watching video of a scene without much motion.

If there is motion, and it covers the whole frame (like a horse race, for example, where both the foreground and the background are going to be significantly different every frame) then you will ideally need a higher framerate to see an "accurate" image. In fact, BBC research has for a long time said that the ideal framerate for high-action video is 300 fps.

This may seem over the top, but the higher the linear resolution of your video, the more frames per second you need, because otherwise motion blur (or, with a very fast shutter speed, jerkiness between frames) can ruin the illusion.
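As a rough illustration of why (my own back-of-the-envelope sketch, with made-up numbers and an assumed 180-degree shutter, i.e. an exposure lasting half of each frame interval), here's how many pixels a moving subject smears across during one exposure:

    def blur_in_pixels(frame_widths_per_second, horizontal_pixels, fps):
        # Assume a 180-degree shutter: each exposure lasts half a frame interval.
        exposure_time = 0.5 / fps
        # frame_widths_per_second is the fraction of the frame width the subject crosses each second.
        return frame_widths_per_second * horizontal_pixels * exposure_time

    # A subject crossing 10% of the frame per second:
    print(blur_in_pixels(0.1, 1920, 25))   # HD at 25fps: about 3.8 pixels of blur
    print(blur_in_pixels(0.1, 3840, 25))   # 4K at 25fps: about 7.7 pixels of blur
    print(blur_in_pixels(0.1, 3840, 50))   # 4K at 50fps: back to about 3.8 pixels

The same physical movement smears across twice as many pixels in 4K, so to keep the blur comparable you either raise the frame rate or shorten the shutter and risk exactly the jerkiness described above.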


Think about VHS

If you need a good example of the above, just think about VHS recording. This was, in comparison with video recording today, truly awful. But it was good enough in the absence of anything better.

To see how terrible it really was, all you had to do was pause the picture. Apart from the inevitable "control track" bar across the bottom of the picture, what you'd see was an unbelievably grainy image - a harsh glimpse at the real quality of the recording. But with 25 or 30 of these images per second, we had much more information to take in, essentially "averaging" the grotty, grainy pictures over time.

The same happens with high quality pictures as well.

Higher resolution brings its own problems. It demands better lenses and better technique. This latter requirement is especially important with 4K - to the extent that if you don't get your focus absolutely right, then you might as well have been shooting in HD - or even SD!

The funny thing is that seeing multiple frames in succession won't improve focus. You can "guess" more from a moving object that's blurred, but if it's stationary, it's going to look just as blurred whether the image is still or moving.

The trend towards using bigger sensors and DSLR cameras doesn't help in this respect. Have a look at this still image of my daughter from some 4K footage taken with a Canon 1DC. It's roughly a 2x crop of the original frame.

Now look at this one. It's only a few frames later. In fact, most of the clip was like this, and the only time the focus was correct was when it transitioned from being in front of the subject to behind her. (Look at her right eye.)

The bottom line is that the way moving images appear to us depends on the static resolution, the framerate (the "temporal resolution"), the amount of movement, the type of movement and the amount of blur. New advances in technology are allowing us to add to this list, with wider dynamic range and colour gamut.

It probably depends on our eyesight as well!