
(Mis)understanding crop factors


RedShark Replay: Crop factors confuse people - especially when marketing is involved!

A few days ago, our esteemed editor brought to my attention a video by Tony Northrup, a luminary of the digital photography world, in which he discusses the issues surrounding the way that we consider the complex relationship between lenses, sensors, and other aspects of digital photography. Tony's a stills guy, from what I can see, but of course many of his conclusions hold perfectly true for film and TV work, even before we take into consideration the fact that the same equipment is sometimes used in each case, if you're shooting DSLR stuff.

Tony's 40-minute documentary – which I strongly recommend everyone should watch – has been widely discussed, often under such headlines as “are camera manufacturers misleading us by not calculating sensor size into specifications?” (by Gannon Burgett of PetaPixel.com), which is more or less a direct call to arms on the subject. The answer to Gannon's entirely apposite question is – in my view – that we should really all be required to be smart enough to see through the sort of on-the-fly specmanship the manufacturers are indulging in, although we should do that by knowing what we're doing on a broad basis, not simply by applying mathematics. After all, it's maths that got us into this mess.

So, in much the same way that no plumber can ever look at another plumber's work without sucking air through his teeth and readying a list of expensive objections, I'm going to object not to Tony's conclusions, but perhaps to some of the ways he reaches them.

The disgruntled plumber

The biggest single problem I have with the proposed approach to lens evaluation is the reliance on crop factors – the practice of expressing the difference in field of view between two differently sized sensors using the same lens as a multiplier of focal length. A smaller APS-C sensor, for instance, has a field of view similar to that of a full-frame 35mm sensor fitted with a lens about 1.6 times longer. This is, without doubt, a useful shorthand for experienced photographers used to the performance of lenses on full-frame cameras, but that's all it is: a shorthand. I've frequently heard people talking about lenses on smaller-sensor cameras as if their absolute optical performance had changed, which is of course untrue.
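If it helps to see what the shorthand actually encodes, here is a minimal sketch in Python: the same 50mm lens simply has its image cropped by the smaller sensor, and multiplying the focal length by the crop factor just tells you which longer lens would give the same angle of view on full frame. The sensor widths below are assumed, round-number values for illustration only.

```python
import math

def horizontal_aov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view of a rectilinear lens focused at infinity."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

FULL_FRAME_WIDTH = 36.0  # mm
APSC_WIDTH = 22.5        # mm -- assumed, roughly Canon APS-C
crop_factor = FULL_FRAME_WIDTH / APSC_WIDTH  # = 1.6

f = 50.0  # the lens is a 50mm lens on either body
print(horizontal_aov_deg(f, FULL_FRAME_WIDTH))                # ~39.6 degrees on full frame
print(horizontal_aov_deg(f, APSC_WIDTH))                      # ~25.4 degrees on APS-C
print(horizontal_aov_deg(f * crop_factor, FULL_FRAME_WIDTH))  # ~25.4 degrees: the '80mm equivalent'
```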

This misunderstanding would be bad enough – it messes with the intrinsic reciprocity of many aspects of photography, complicates depth-of-field calculations, and other things besides. That, though, isn't the biggest problem. The biggest problem is simply that this practice has been seized upon by manufacturers and used to promote their products. The Panasonic FZ200, for instance, is widely described (in fact it's printed on the side) as having a 25-600mm f/2.8 lens. An f-stop is the ratio between the focal length and the diameter of the circle the light has to pass through to get through the lens (the entrance pupil), so a 600mm f/2.8 lens needs an entrance pupil – and in practice a front element – something close to 600/2.8 = 214mm across. Now, that's a bit unfair, because there are optical techniques which can enhance light gathering without having to follow that rule precisely, but it's pathetically obvious that the lens on the FZ200 is certainly not big enough to achieve 600mm at f/2.8. Canon does have a 600mm f/4 prime in their L series. It costs thirteen thousand American dollars and is the size and weight of a healthy St Bernard puppy.
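A quick sketch of that arithmetic, assuming the simple textbook relationship (f-number = focal length divided by entrance pupil diameter) and ignoring the optical tricks mentioned above:

```python
def entrance_pupil_mm(focal_length_mm: float, f_number: float) -> float:
    """Entrance pupil diameter implied by a focal length and f-number."""
    return focal_length_mm / f_number

print(entrance_pupil_mm(600, 2.8))  # ~214 mm -- why a true 600mm f/2.8 would be enormous
print(entrance_pupil_mm(600, 4.0))  # 150 mm -- the scale of Canon's 600mm f/4 prime
```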

What Panasonic is doing there, of course, is expressing the focal length as full-frame 35mm equivalent, as opposed to giving its actual focal length range – but it's stating the aperture as it actually is. This is confusing, and it's hard to argue with Tony Northrup's objections here.

Crop factors are dangerous

The issue I do have is that Tony's suggestion is to multiply up the aperture numbers as well, on the basis that the smaller sensor needs more light to produce an image of similar brightness and similar noise to that from a larger one. This is, within certain limits, true, but we're now getting very far away from the reality of how one particular camera works in order to express it in terms intended to describe another, and that's a deep and twisty rabbit hole to head into. I appreciate that crop factors are sometimes a convenient way to estimate required focal lengths, but relying on them too heavily provokes exactly these sorts of problems. A 50mm lens is a 50mm lens, regardless of the sensor it's projecting an image onto. We would do well simply to learn our equipment – or shoot tests – and refer to things as they really are. What's more, crop factors are a stills thing: naturally, the behaviour of that 50mm lens differs wildly between the Super 35 motion picture frame (where it's considered reasonably normal) and the larger full-frame 35mm stills frame (where it will feel rather wide), but crop factors aren't widely used in motion picture work. Nobody walks around a film set referring to a 50mm lens by the focal length that would produce the same field of view on another camera system.
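For what it's worth, here is what "multiplying up" looks like as arithmetic. The FZ200 figures used are the commonly quoted actual specification (roughly 4.5-108mm at a constant f/2.8) and a crop factor in the region of 5.6 – treat both as assumptions for illustration rather than measurements.

```python
def full_frame_equivalents(focal_length_mm: float, f_number: float, crop_factor: float):
    """Northrup-style 'equivalents': scale both focal length and f-number by the
    crop factor to describe field of view and depth of field in full-frame terms.
    Nothing about the lens itself has changed."""
    return focal_length_mm * crop_factor, f_number * crop_factor

# Long end of the FZ200 zoom (values assumed for illustration):
print(full_frame_equivalents(108, 2.8, 5.6))  # ~(605mm, f/15.7) in 'equivalent' terms
```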

Interrelated technology

All of these factors are more or less subject to an issue to which I snuck in an unguarded reference earlier – reciprocity. Traditionally, this is the term used to refer to the fact that – for instance – closing down one f-stop reduces the light by half and requires twice the exposure. Or, alternatively, that halving film sensitivity requires opening up one stop. These things are well known, but there are other factors in optical, camera and sensor design that have at least approximately inverse relationships to one another. Make a lens of longer focal length, for instance, and you can use a bigger sensor for the same field of view. That bigger sensor will require a larger projected image for complete coverage, which means the lens must be physically larger to achieve a given f-number. The larger sensor is more sensitive on a per-pixel basis, however, so you might be happy with a higher f-number and a slower, smaller lens. A higher resolution sensor of the same sensitivity might exhibit more noise, but then the smaller pixels might make the noise less objectionable. We can go on like this forever.
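The exposure side of that reciprocity is easy to put into numbers: one stop is a factor of two in light, so closing the aperture down a stop means doubling the shutter time (or doubling the sensitivity) to compensate. A minimal sketch:

```python
import math

def stops_between(f_number_a: float, f_number_b: float) -> float:
    """Exposure difference in stops between two f-numbers; the light gathered
    scales with the area of the aperture, i.e. with the square of its diameter."""
    return 2 * math.log2(f_number_b / f_number_a)

def compensated_shutter(shutter_s: float, stops_closed: float) -> float:
    """Shutter time needed to keep the same exposure after closing down by 'stops_closed'."""
    return shutter_s * (2 ** stops_closed)

stops = stops_between(2.8, 4.0)             # ~1 stop less light
print(compensated_shutter(1 / 100, stops))  # ~1/50 s: half the light needs twice the time
```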

And ultimately, this is why it's so easy to get confused and so easy to poke holes in discussions of this subject. Everything is interconnected, and that's before we've even considered the very real changes in absolute sensor performance over the past few years, which are aimed at giving us the holy grail: resolution, sensitivity, low noise, and high dynamic range, while simultaneously reducing all the associated tradeoffs in a way that skews any attempt at mathematical calculations. Improvements in glass have the same effect, by giving us lenses which are simultaneously faster, sharper, smaller, and cheaper.

Boiling it down

So maths doesn't work, if your goal is to produce some sort of grand unified camera quality index. There is a natural human tendency to want to know objectively which system is best, as if “best” were easy to define. But there are so many confounding factors that objectivity is more or less impossible, especially if we take into account the varying requirements of different jobs. There is no combined theory of ideal camera design. If there were, manufacturers would use it at the design stage. All we can do is run tests before we take a particular outfit out on a particular production, stop worrying about it so much, and concentrate on the things which make a much, much bigger difference than a 1.6x crop factor when most people are mainly using zooms anyway – things like production design, composition, and lighting.

Tags: Technology
