
When resolution and sharpness become irrelevant

Resolution can sometimes be indefinable. Image: Shutterstock

Replay: You can have all the numbers in the world, but camera resolution and sharpness often don't mean much when all other factors are taken into account.

It’s been at least seven or eight years since Sony started using the phrase “beyond definition” (if anyone knows when the slogan was launched, let us know, but it was certainly no later than 2013). That probably makes the company an early adopter of the idea that we’re now significantly beyond needing to peep pixels. If the limiting factor is no longer pixels, though, it might not be that clear why some companies still specify a pixel (or more accurately photosite) count for their material.

How we used to measure resolution

This was not always the case. Resolution, of anything, was traditionally measured in terms of the modulation transfer function, or MTF. The term sounds intimidatingly mathematical, but the idea is simple: take some black and white stripes and make them smaller (or move them away from the camera, maintaining proper focus) until they begin to blur into grey; think of it as a square wave degrading into a sine wave. If we want a single resolution figure, one of the things we have to decide is how much blurring of the line pairs we’ll tolerate before concluding that we’ve defeated the resolution of the system, which is why perceived sharpness and contrast are so closely related. MTF is often expressed in line pairs per millimetre, and we can relate that to the photosites on a sensor if we know how many photosites there are and how big the sensor is, give or take the optical low pass filtering.
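To make that relationship concrete, here’s a minimal sketch in Python, assuming an idealised monochrome sensor with no OLPF (the 4096-photosite, 24.9mm figures are illustrative, not any specific camera). One line pair needs at least two photosites, so the theoretical ceiling is simple to compute:

```python
# Minimal sketch: theoretical resolution ceiling of a sensor in lp/mm.
# Assumes an idealised monochrome sensor with no OLPF; a real Bayer
# sensor with an OLPF and a lens in front will resolve less.

def nyquist_lp_per_mm(photosites_across: int, sensor_width_mm: float) -> float:
    """One line pair needs at least two photosites (the Nyquist limit)."""
    return photosites_across / (2 * sensor_width_mm)

# Illustrative figures for a 4K-ish super-35 sensor, not a specific camera:
print(f"{nyquist_lp_per_mm(4096, 24.9):.0f} lp/mm")  # ~82 lp/mm
```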

The introduction of the letter 'K'

MTFs are published for various film stocks, but traditionally they weren’t exactly headline news; people tested and made a decision based on what they liked the look of, an astounding and innovative notion that we’d do well to bear in mind. But because all of this stuff is far too complicated for promotional literature, a fashion was established for measuring digital camera resolution in Ks. It goes back no further than the early 2000s and Dalsa, which released the Origin camera at NAB 2003.

Previously, the letter K had been used to refer to film scans made with one red, one green and one blue sample per output pixel, but Dalsa raised eyebrows by using the same terminology to describe its Bayer-sensored Origin. We were not so much beyond resolution, perhaps, as beyond the letter K meaning what it once had.

The Dalsa Origin camera.

So there hasn’t really been an objective way to straightforwardly express resolution for some time. The people who pay for film and TV work do seem to have some interest in it, because they’ve been known to specify both sensor resolution and recording bitrate, as if either were a fixed measure of quality. If anyone’s aware of a company specifying lenses, let us know in the comments, but so far that’s been rare, and that’s where the wet gets in. The current obsession with lenses that are – ahem – “characterful” typically leads to lenses with lower MTFs. Considering that prominent online streaming services now demand 4K, but seem happy for everyone to shoot on a 1970s lens, wide open with lots of diffusion, it could be argued that there’s a little misapprehension of where the resolution limits now lie. Or, for some reason, people are scared of digital artefacts in a way they’re not scared of optical ones.

Diffraction: it's unavoidable

And there’s more to this than the standard lens concerns of corner versus centre sharpness, ideal stops, flare, and contrast (bearing in mind that contrast and resolution are closely entwined). It’s no surprise that these concerns are becoming more of an issue as cameras grow more and more pixels, and some of the resulting problems are deeply counter-intuitive. Something that’s becoming crucially important is diffraction: it’s obvious that using a lens wide open is begging for trouble, and most photographers are aware that really small apertures can lead to softness too, but the sharper cameras get, the narrower the margins for error become.

Diffraction in particular is a strange and subtle effect, caused by interference between light waves as they travel from the aperture to the sensor. With a large aperture, there are many angles over which light can approach the sensor, and they average together to create more or less the image we’d expect of a single point of light. With a narrow aperture there are only a few angles from which light can approach any point on the sensor, so there’s little averaging, and interference between them spreads each point of light into a bright central spot surrounded by fainter concentric rings (the Airy pattern, whose central spot is the Airy disc). Theoretically this always happens, but it only shows up in any practical sense at narrow apertures.
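For reference, the standard first-order expression (assuming the usual circular aperture) says the Airy disc diameter on the sensor depends only on the wavelength of the light and the f-number, not the focal length:

```latex
% Airy disc diameter to the first dark ring, for wavelength \lambda
% and f-number N (circular aperture):
d \approx 2.44\,\lambda N
% e.g. green light, \lambda = 550\,\mathrm{nm}, at f/8:
% d \approx 2.44 \times 0.55\,\mu\mathrm{m} \times 8 \approx 10.7\,\mu\mathrm{m}
```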

We now have cameras that fit 80 megapixels onto a super-35mm sized sensor, so we don’t need to stop down nearly as far as we once did before diffraction becomes visible. Conventionally, most people wouldn’t expect to hit diffraction limits until f/11 or beyond. An Ursa Mini Pro 12K is theoretically diffraction-limited at around f/4.5, and while other factors step in quite heavily to ameliorate that (the colour filter array, the codec, other aspects of lens performance, etc.), the basic mathematics highlights just how demanding cameras have become. Open up too much, of course, and we start running into aberration and loss of contrast for other reasons, and there’s not much space between f/4.5 and “opening up too much” to begin with.
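As a back-of-the-envelope check (our sketch, not Blackmagic’s arithmetic), the Python below uses the published Ursa Mini Pro 12K figures of 12,288 photosites across a roughly 27mm-wide sensor. The 550nm wavelength and the threshold of how many photosites the Airy disc may span before we call the image diffraction-limited are assumptions; moving that threshold between two and three photosites shifts the answer from about f/3.3 to f/4.9, bracketing the f/4.5 figure above:

```python
# Back-of-the-envelope sketch: the f-number at which diffraction starts
# to limit a sensor. Sensor figures are the published Ursa Mini Pro 12K
# numbers; the wavelength and visibility criterion are assumptions.

WAVELENGTH_MM = 550e-6      # green light, 550 nm, in millimetres
SENSOR_WIDTH_MM = 27.03     # published super-35 sensor width
PHOTOSITES_ACROSS = 12288   # the "12K" in the name

pitch_mm = SENSOR_WIDTH_MM / PHOTOSITES_ACROSS  # ~2.2 microns per photosite

def diffraction_limit_f(pitch_mm: float, airy_pitches: float) -> float:
    """Solve d = 2.44 * lambda * N for N, where the Airy disc diameter d
    is allowed to span `airy_pitches` photosite pitches before we call
    the image diffraction-limited. The criterion is a judgment call."""
    return airy_pitches * pitch_mm / (2.44 * WAVELENGTH_MM)

print(f"photosite pitch: {pitch_mm * 1000:.2f} um")                        # ~2.20 um
print(f"strict (2 pitches):  f/{diffraction_limit_f(pitch_mm, 2.0):.1f}")  # ~f/3.3
print(f"relaxed (3 pitches): f/{diffraction_limit_f(pitch_mm, 3.0):.1f}")  # ~f/4.9
```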

At the same time, all the conventional ways to lose perceived sharpness still apply. Optical low pass filtering is a matter of opinion – it’s even sometimes interchangeable or optional – and nobody’s specifying that. Nobody’s specifying the maximum acceptable strength of diffusion filters, the amount of mist or smoke in the air, the shutter angle, or the aggressiveness of noise reduction. We could go on and on.

If we’re going to maintain this resolution thing, the big broadcasters not only need to specify the sensor resolution, but also the lens, the OLPF, the stop everyone’s shooting at, and, er, not to jolt the camera around too much lest motion blur reduce visible sharpness below some critical threshold of assumed acceptability. And of course, that’s a threshold photochemical film was probably never capable of achieving. Yet film constantly occupies the top slots on all the streaming services, because five-sixths of the history of this artform, and therefore quite a lot of the best stuff ever made, was shot on it.

Sony, it seems, got it right.

Tags: Technology, Opinion
