Bigger pixels give brighter pictures, but more pixels give more detail. So, which is the better approach when it comes to choosing new kit?
I’m writing this because I’ve spent the weekend wondering about it. I think I know the answer, but I’m by no means certain. The question is: which is better — fewer, larger pixels or more, smaller pixels?
The reason I’ve been obsessing about this is that I am currently using a Sony A7S II with a G-Series wide-to-telephoto zoom. It’s a wonderful combination and I’m very pleased with the pictures I’ve been getting. The Sony famously has a full-frame sensor with only 12 million pixels. That’s low by today’s standards, although still way above the eight million needed to shoot 4K video. This specially designed sensor has relatively large pixels, which, if you think of them as buckets, can capture more ping-pong balls (forgive my chunky analogy for photons) than more, smaller pixels in the same space, leading to quite astonishing low-light performance.
I’m mostly a still photographer but I do shoot video from time to time and this combination of camera and lens has proven to be quite outstanding for my own needs, almost (and perhaps actually) surpassing my previous favourite, the Canon 1D C.
You can see from the above that I’m definitely not a mainstream cinematographer. But I am, as the editor of RedShark, in frequent contact with people who are, and I’m always fascinated to hear the views of those who practise at the sharp end of the movie-making industry.
I also follow companies like RED very closely. I’m fascinated and hugely impressed by recent developments like the Helium 8K sensor, and I’ve always been a fan of extremely high-resolution video. There’s no doubt that the increased information available in high-resolution video can lead to better images. Better, even, than you might think from the “Apple theory of retina displays”, which is that as long as you can’t see individual pixels then there’s no need for a higher resolution. That’s my understanding of it, anyway, and I think it is quite wrong.
It’s wrong because there is more to the way we perceive an image than the density of pixels.
Perceiving an image
Here’s an example. Let’s say you’re watching an HD display and you’re far enough back not to see any individual pixels. Does this mean that everything on the screen will seem perfectly smooth and natural? Not necessarily. Certainly, vertical and horizontal lines will look practically perfect. And of course, large areas of colour will be fine. But this might not be the case for diagonal lines and curves. No matter how small the pixels, if you have a line that is a few degrees from horizontal or vertical, there will be visible stepping; you might want to call it “aliasing”.
Imagine a line that’s 1000 pixels long but is offset from horizontal by just one pixel over its entire length. That’s a very small angle. Assuming the line is in focus and hasn’t been through any digital processing that might make this less obvious, what you’re going to see is two parallel lines, end to end, each 500 pixels long and offset by one pixel in the middle. Everywhere in the image where there are elements that aren’t exactly vertical or horizontal, you’re going to see this effect to some extent. The point to take away from this is that near-but-not-quite horizontal or vertical picture elements can magnify the pixel structure of an image, creating visual artefacts where other objects don’t. There is no real cure for this apart from more pixels. Unsurprisingly, 8K suffers from this phenomenon too, but much less. 16K would be even better, but let’s not go there. Not yet, anyway.
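If you like to see this sort of thing demonstrated, here’s a minimal sketch in Python. It uses an idealised rasteriser (just rounding each point of the line to the nearest pixel row, which is an assumption — real display pipelines are more sophisticated) to show how that 1000-pixel, one-pixel-offset line breaks into two long horizontal runs with a step in the middle:

```python
# A 1000-pixel line that drops by only one pixel over its length,
# rasterised by rounding to the nearest pixel row.

def raster_runs(length, rise):
    """Snap each point of an ideal line to a pixel row, then measure
    the length of each run of pixels on the same row."""
    rows = [round(x * rise / length) for x in range(length)]
    runs, current = [], 1
    for a, b in zip(rows, rows[1:]):
        if a == b:
            current += 1
        else:
            runs.append(current)
            current = 1
    runs.append(current)
    return runs

print(raster_runs(1000, 1))  # two runs of roughly 500 pixels each
```

A steeper line would break into many short runs, which is why the stepping is most conspicuous at shallow angles: each step is long enough to be seen as a feature in its own right.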
Going back to the Sony: 12 megapixels doesn’t sound like a large number compared to 8K, for example, which needs around 32 megapixels. But don’t forget that we’re talking about an area here and not linear dimensions. You actually need to take the square root of these numbers to compare them, and if you do, the difference isn’t really that great (roughly 3.5 vs 5.6). I’ve certainly found that shooting with a mere 12 megapixels on the Sony is not in any way a limitation unless you want to zoom in on a still picture to a ludicrous extent.
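As a quick sanity check on that arithmetic, here are a few lines of Python comparing the sensors by linear resolution rather than by area (the pixel counts are approximate):

```python
import math

a7s2 = 12e6          # Sony A7S II, ~12 megapixels
uhd8k = 7680 * 4320  # an 8K UHD frame, ~33 megapixels

print(f"A7S II: ~{math.sqrt(a7s2):.0f} pixels per linear dimension")
print(f"8K:     ~{math.sqrt(uhd8k):.0f} pixels per linear dimension")
print(f"linear ratio: {math.sqrt(uhd8k / a7s2):.2f}x")  # about 1.66x
```

So in linear terms 8K resolves only about two-thirds more than the 12-megapixel sensor, even though the headline megapixel figure is nearly three times larger.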
I’ve enjoyed taking pictures and shooting with the Sony. It’s great to know that almost whatever the light, you’re going to get a picture, and probably a good one. There are some pictures that you simply couldn’t take without a camera that is happy with very low light. I’ve taken pictures that have a kind of cheerful luminosity that I haven’t seen with other cameras, and I’m sure it’s because of the enormous number of photons that these giant pixels can capture (remember that they’re on a full-frame sensor, not a Super 35mm one).
8K for video
But on the other hand, 8K footage from a RED Helium looks amazing as well (subject, of course, to a really good lens). There are all sorts of arguments for using 8K, like the ability to zoom in and recompose a shot in post-production. And like the way that when you downsample from 8K, it can make 4K look even better. But, ultimately, there is no other way to get a picture with as much detail. That’s where the lack of aliasing comes in. It’s important. If I were shooting a movie and I had no financial or technical constraints, I would want to do it in 8K.
So how is it that some of the best pictures I’ve ever seen have been from a camera with only twelve megapixels and yet the sharpest, most detailed video images on the planet come from 8K cameras?
I think it’s because the two approaches both lead to an increased image quality, but via different means.
The other day, I was photographing some old family pictures in an album. I didn’t have a scanner at the time so I had to shoot them in some decent lighting with a telephoto lens. It worked well enough. The pictures were mostly a bit faded, so I worked on the levels and curves and carefully increased the overall contrast. I did this with and without adding sharpening, and yet even the pictures without sharpening looked sharper! I tried this several times and the effect was consistent. I even asked a few people which images were more in focus and they all chose the contrast-enhanced ones.
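There’s a simple way to model why a contrast boost reads as sharpness. Here’s a toy Python sketch (the gain-around-mid-grey curve is my own simplification, not what any particular editor does): boosting contrast steepens the brightest-to-darkest transition across an edge, and that steepest slope is roughly what “acutance”, or perceived sharpness, measures:

```python
# A soft edge: a gentle ramp from black (0.0) to white (1.0).
edge = [i / 10 for i in range(11)]

def contrast(v, gain=2.0):
    """Steepen values around mid-grey, clipping at black and white."""
    return min(1.0, max(0.0, 0.5 + gain * (v - 0.5)))

boosted = [contrast(v) for v in edge]

def max_slope(values):
    """The steepest pixel-to-pixel transition along the edge."""
    return max(b - a for a, b in zip(values, values[1:]))

print(max_slope(edge), max_slope(boosted))  # the boosted edge is about twice as steep
```

No detail has been added, and nothing has been sharpened in the conventional sense; the edge simply crosses from dark to light over fewer pixels, which the eye reads as focus.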
So this is at least one sense in which fewer, better pixels can be helpful if those pixels are able to capture more light, and if there is more meaningful information per pixel.
With 8K, there is no shortage of detail. It is likely that less light will be captured per pixel, but the overall effect is very good. I suspect that there is simply so much information in a 32-megapixel video frame that it’s possible, with processing, to extract a superb-looking image that is sharp and yet vibrant.
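One reason downsampled footage can look both clean and sharp is that averaging neighbouring pixels suppresses random noise. A rough simulation (a simple 2×2 box-filter downscale — the crudest possible model of a 32-megapixel to 8-megapixel resize, not what real scalers do): averaging four noisy samples halves the noise.

```python
import random
import statistics

# A flat grey patch with random per-pixel noise.
random.seed(42)
w = h = 128
img = [[0.5 + random.gauss(0, 0.1) for _ in range(w)] for _ in range(h)]

# Downsample by averaging each 2x2 block into one pixel.
down = [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
          img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4
         for x in range(w // 2)] for y in range(h // 2)]

noise_full = statistics.stdev(v for row in img for v in row)
noise_down = statistics.stdev(v for row in down for v in row)
print(round(noise_full, 3), round(noise_down, 3))  # noise roughly halves
```

In effect, the oversampled frame trades its surplus of spatial detail for a cleaner final image, which may be part of why downsampled 8K makes 4K look better than natively shot 4K.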
I’ve only really just started to explore this in my own mind. What do you think?