
Be careful comparing cameras (some things are not obvious!)

Written by David Shapton | Sep 16, 2015
(Pictured: Panasonic GH4 and Panasonic Varicam)

Some useful advice post-IBC: two cameras that look similar on the surface might be based on fundamentally different technology. Make sure you're comparing like with like.

I recently visited a forum for still camera users and found a lot of discussion about video as well. You'd expect this, because it's hard to buy a digital still camera that doesn't support video capture in some shape or form. Some of the participants were obviously new to video, and I could see from their posts that they were being tripped up by video camera concepts they hadn't fully grasped yet.

And it reminded me that you have to be extremely careful when you're comparing two cameras - even if they look similar and come from the same manufacturer!

If you're used to still photography and haven't played much with video before, it's not surprising if you find video concepts new and sometimes strange. That makes careful comparison all the more important, as we'll see later in this article.

First, let's look at some of the differences between video and stills cameras. Some of this is obvious, but quite a lot of it definitely isn't.

Making pictures that record an instant in time is a very different process to capturing a continuous segment of reality, especially when you consider it entails recording not just a single very high resolution picture but as many as thirty or even sixty of them every second.

Video is typically recorded at a much lower resolution than still images: an HD frame is only around two megapixels. It's likely to have a fixed aspect ratio, and you frame it horizontally, whereas still pictures can be taken vertically and cropped dramatically into any shape or size.
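If that sounds surprisingly small, the arithmetic is easy to check (a trivial sketch, using the standard HD and UHD frame sizes):

```python
# Quick arithmetic behind the resolution figures above.
hd = 1920 * 1080        # "Full HD" frame
uhd = 3840 * 2160       # "4K" UHD frame
print(f"HD frame:  {hd / 1e6:.2f} megapixels")   # ~2.07
print(f"UHD frame: {uhd / 1e6:.2f} megapixels")  # ~8.29
```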

Noise in still images forms a static pattern. In video it moves, and can either be annoying or pleasing, depending on its quantity and nature.

And the whole artistic process is different between stills and video. Knowing how to compose an image is an easily transferable skill, but planning what takes place within a shot and then editing multiple shots into a finished work is completely new territory for still photographers, as is dealing with video workflows and managing all that data - hundreds of times more than static images require.

If it's a big leap from taking stills to making videos, it's an even bigger step to understanding the performance and specifications of video cameras, especially because the way video is captured differs significantly from camera to camera. This can make it hard to compare two cameras side by side: look at it the wrong way and you can reach inaccurate conclusions about the quality of the devices.

Comparing two dedicated video cameras used to be much simpler, although even that is no longer always true. The problem is that when the sensor has a higher resolution than a standard video frame, the lower video resolution has to be "derived" from the greater number of pixels in the physical sensor. How this is done will affect the quality of the video.

If the camera has a dedicated video sensor, and it's a 4K camera, say, then the sensor will have a 4K array of pixels, and one pixel in the sensor will correspond to a single pixel in the video image. But even in dedicated video cameras, we're starting to see higher physical resolutions in sensors than the video they output. Sometimes this is so that a higher quality video image can be "downsampled", or it may be so that the same camera can capture high-resolution stills at the same time as recording video.

There's also the rather technical matter of "deBayering" images: the process by which a colour image is derived from a single monochrome sensor overlaid with a colour filter array, with the results processed to create a full colour picture. Because each photosite measures only one colour, and the other two have to be interpolated from neighbouring photosites, the effective resolution of the result is inevitably lower than that of the sensor itself. So it is a genuine advantage to have more pixels in the sensor than in the final video image, if it means you can end up with a "true" HD or 4K picture as a result.
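To make the idea concrete, here's a minimal sketch of the simplest possible approach: bilinear demosaicing of an RGGB Bayer mosaic. Real cameras use far more sophisticated algorithms; the function and kernels here are purely illustrative.

```python
# A minimal sketch of bilinear demosaicing for an RGGB Bayer mosaic.
# Every output value that wasn't directly measured is interpolated
# from its neighbours, which is why effective resolution drops.
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Reconstruct a full-colour image from an RGGB Bayer mosaic."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))

    # Which photosites measured which colour (RGGB layout).
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Standard bilinear interpolation kernels.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    for channel, mask, k in ((0, r_mask, k_rb),
                             (1, g_mask, k_g),
                             (2, b_mask, k_rb)):
        sparse = np.where(mask, mosaic, 0.0)   # zeros where unmeasured
        rgb[..., channel] = convolve(sparse, k, mode='mirror')
    return rgb
```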

The more sophisticated cameras get, the harder it is to compare them directly. You almost have to resort to looking at the images and saying “that’s nice; but that’s not so nice”, which, if you think about it, is probably not a bad thing at all.

The very latest cameras from Sony have such extraordinary specifications that they can utterly confuse people trying to compare them. I noticed another discussion on the still camera forum where a poster was on the point of returning his A7R II because, as far as I could tell, he wasn’t aware of how good it was. (The increased weight was one issue - hardly surprising for a camera that packs so much in, and it was heavier than his previous camera by less than an ounce. He also thought it was noisier; we’ll look at that misconception in a minute.)

The Sony A7S came out last year and blew us away with its low noise performance. With this camera, Sony took a very conscious decision to create a sensor that could take great pictures in very low light. That was the priority, and other specifications like resolution came second to it. It turns out this was a great choice, because the camera can perform in light that would defeat other cameras completely. The sensor has “only” 12 megapixels. But that’s fine for 4K video, which requires only around eight megapixels, and even for still photographs, given that eight megapixels gives a very acceptable still image. (Coincidentally, the very first Canon 5D had a sensor with just over 12 megapixels, and no-one complained about the image quality at the time.)

If you compare the Sony A7S with almost any other camera, it will have a lower noise floor in any low light situation. Having fewer pixels means they can be bigger, that they can capture more photons in a given time, and that they give the camera a very high maximum ISO rating. This is exactly what a lot of photographers and video makers want. I was sceptical about the need for low light cameras until I tried this one and found myself getting good shots in light that would have been unusable for anything, never mind movie making.

Fast forward just over a year, and we have Sony announcing the A7R II. This is a camera with almost four times the A7S’s pixel count, giving almost twice the linear resolution. It’s an absolute beast of a camera, and while it still has an issue with rolling shutter, it can record 4K internally, either subsampled from a Super 35mm crop or derived through intelligent pixel binning from the entire 42 megapixel sensor.

With two such different light-gathering elements, it’s always going to be hard to make direct comparisons. Each sensor has its own advantages, none of which is negated by the existence of the other. Real problems arise only if you don’t realise what it means for two sensors to have different resolutions.

Here’s an example.

When you want to show an image from a camera on the web, you have to either shrink it, or crop it so that each pixel in the image maps onto a single pixel on the computer screen. The latter is called a one-to-one crop. I’ve used it myself, because it’s a very good way to show how well a camera performs. For example, I prefer to show still images from high end cameras rather than video, because it avoids heavy compression. So I tend to show first the whole frame, reduced in size of course, and then a one-to-one crop, showing every pixel in full, but only, of course, from a small part of the picture.
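In code, those two views look something like this (a sketch using the Pillow library; the file names and crop dimensions are just placeholders for illustration):

```python
# Two ways to present the same frame on the web, sketched with Pillow.
from PIL import Image

frame = Image.open("test_shot.jpg")              # full camera frame

# 1. The whole frame, shrunk to a web-friendly width (resampled).
scale = 1200 / frame.width
preview = frame.resize((1200, round(frame.height * scale)))
preview.save("whole_frame_small.jpg")

# 2. A one-to-one crop: 600x400 sensor pixels shown as 600x400
#    screen pixels, so nothing is resampled - but you only see a
#    small part of the picture.
left, top = frame.width // 2, frame.height // 2  # crop from the centre
crop = frame.crop((left, top, left + 600, top + 400))
crop.save("one_to_one_crop.jpg")
```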

This method is fine, and perfectly valid for demonstrating the output from an individual camera, but it falls down completely as a means to compare two cameras, because if you compare a one-to-one crop between two cameras with completely different resolutions, then you are not comparing like with like.

It’s easy to see why (albeit with hindsight!).

For any given size of image, the A7S will use a certain number of pixels. For the same size of image, the A7R II will use roughly four times that number. But if you crop the original image one to one, the A7R II’s crop will show you only a quarter of the frame area that the A7S’s would. The pixels on the A7R II’s sensor are smaller. That’s a fact of life: you can’t fit four times the number of pixels into the same space without making them smaller.
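A rough back-of-the-envelope check, using the two sensors’ approximate published pixel counts (the text above rounds the ratio up to four):

```python
# Rough arithmetic behind the point above, using approximate
# published pixel counts for the two sensors.
a7s_pixels  = 12.2e6    # Sony A7S
a7r2_pixels = 42.4e6    # Sony A7R II

ratio  = a7r2_pixels / a7s_pixels     # ~3.5x the pixel count
linear = ratio ** 0.5                 # ~1.9x the linear resolution

# A fixed-size one-to-one crop therefore covers only ~1/3.5 as much
# of the frame on the A7R II as the same crop does on the A7S.
print(f"pixel count ratio:      {ratio:.2f}x")
print(f"linear resolution:      {linear:.2f}x")
print(f"frame area in 1:1 crop: {1 / ratio:.0%} of the A7S's coverage")
```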

So of course the advantage that you can have sharper pictures (lenses permitting) also becomes the disadvantage that each individual pixel will be noisier. In normal circumstances this is mitigated by the fact that there are more pixels making up the image; in a sense, having so many pixels means the image is “supersampled” relative to the A7S’s pictures. Each pixel generates more noise than one of the A7S’s pixels, but the noise, quite obviously, will look “smaller” in the images.
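A toy simulation makes the point (synthetic Gaussian noise standing in for real sensor noise, which behaves in more complicated ways):

```python
# Toy demonstration of the "supersampling" point: averaging blocks of
# small, noisy pixels down to a lower resolution cancels part of the
# noise, which is why it looks "smaller" in the final image.
import numpy as np

rng = np.random.default_rng(0)
# A flat grey frame from the high-resolution sensor, with noise.
hi_res = rng.normal(loc=0.5, scale=0.10, size=(2000, 2000))

# Downsample 2x2 -> 1 by averaging, mimicking a 4:1 pixel-count ratio.
lo_res = hi_res.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

print(f"noise per pixel before: {hi_res.std():.3f}")  # ~0.100
print(f"after 2x2 averaging:    {lo_res.std():.3f}")  # ~0.050, halved
```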

So if you then compare four of the A7R II’s pixels with four of the A7S’s much bigger pixels, you will effectively have zoomed four times further (by area) into the A7R II’s image. This throws an unfair spotlight on the A7R II’s picture, because you are looking at it with a much greater degree of scrutiny. Quite simply, it is an invalid comparison.

I’ve laboured this point somewhat because it’s a clear-cut one. There are other, less obvious but equally misleading, examples, and my point is simply that we need to be very cautious when comparing radically dissimilar cameras (especially when they might have similar-sounding names and may even look almost the same).

Ultimately, the only conclusion you should trust is your own. Always try a camera in a variety of conditions and make sure it works for you - before you buy it. That will help everybody.