
All about sensors, lenses and depth of field: a major two-part article explains all. Part 2

4 minute read

All about Sensors, Lenses and Depth of Field

Last time, we discussed some of the technical concerns of lens selection and the issues of compatibility with sensors of various configurations and sizes. In this part, we'll look at the effects sensor size has on photography and the engineering compromises behind lenses and sensors.


Part one of this two-part series on Lenses and Sensors is available here


Depth of field

Perhaps the best-known maxim about depth of field is that larger sensors give you less of it, and in general this is true, though there are many exceptions. For the sake of this discussion, let's consider depth of field for an electronic imaging format to be the region in which subjects are focussed sharply enough to satisfy the practical resolution limits of the sensor, taking into account any low-pass filtering, and overlooking factors such as magnification on display.
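To put numbers on that definition, here's a minimal sketch of the standard thin-lens depth of field approximation, in Python. The circle-of-confusion value is an assumed stand-in for the sensor's practical resolution limit, and the example figures are illustrative rather than drawn from any particular camera.

```python
# Minimal sketch of the standard thin-lens depth of field approximation.
# The circle of confusion (coc_mm) stands in for the sensor's practical
# resolution limit; 0.025 mm is a common Super 35 assumption.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.025):
    """Return (near_mm, far_mm): the limits of acceptably sharp focus."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")  # everything out to infinity is acceptably sharp
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# A 50mm lens at f/2.8, subject at 3m: roughly 2.77m to 3.27m stays sharp.
near, far = depth_of_field(50, 2.8, 3000)
print(near / 1000, far / 1000)
```

Note where the circle of confusion sits in the arithmetic: a finer-pitched sensor implies a smaller acceptable blur, which is exactly why depth of field depends on the sensor as well as the lens.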

Perhaps most importantly for cinematography, things other than sensor size have to change to make the big-sensor-means-short-depth-of-field maxim correct. As we saw previously, swapping a large sensor for a smaller one, with resolution, lens focal length, f stop and all other factors remaining constant, will simply reduce the field of view; it'll “zoom in”. Due to the finer pitch of the pixels on the smaller sensor, this will actually reduce depth of field compared to the larger sensor. To maintain the same framing, we would have to either alter the focal length of the lens, or move the subject or camera, depending on the scene in question; the practicalities of depth of field also involve the specifics of the scene in front of the camera.

But in practical terms, when we consider larger sensors as reducing depth of field, we're not really talking about changing just the sensor, because we don't often change sensor size without changing other things. Of course it's not impossible to do so: someone shooting at various frame rates on a high-speed camera, or engaging ETC mode on a Panasonic GH-series camera, using the same lens, then viewing the results at a fixed resolution, might change effective sensor size while all else remains equal. More usually, though, we're talking about a differently-sized sensor producing a shot with the same framing, which implies changing the lens too. A larger sensor will usually, assuming the same physical layout of camera and subject, have a longer lens on it to create the same shot.
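For illustration, here's the field-of-view arithmetic sketched in Python; the sensor widths are nominal assumed values, not specifications from any particular camera.

```python
import math

# Horizontal angle of view from sensor width and focal length, plus the
# focal length needed to match framing on a differently-sized sensor.
# Sensor widths below are nominal assumptions for illustration.

def angle_of_view_deg(sensor_width_mm, focal_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

SUPER35_MM = 24.9  # assumed Super 35 width
MFT_MM = 17.3      # assumed Micro Four Thirds width

print(angle_of_view_deg(SUPER35_MM, 50))   # ~28.0 degrees
print(angle_of_view_deg(MFT_MM, 50))       # ~19.6 degrees: it "zooms in"

# To restore the original framing on the smaller sensor, scale the focal
# length by the ratio of sensor widths (the "crop factor"):
matched = 50 * MFT_MM / SUPER35_MM         # ~34.7mm
print(angle_of_view_deg(MFT_MM, matched))  # ~28.0 degrees again
```

That matched focal length is what makes the larger format's lens "longer", and that's where the depth of field difference comes from.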

In this situation, the image has, as we expect, more limited depth of field. The key is magnification: a longer lens produces an image with less size differentiation between foreground and background objects, a depth-compression effect beloved of people shooting a train approaching a damsel tied to the tracks. That enlargement of background objects, which will tend to be the ones most out of focus, increases the rate at which objects appear to soften as they depart from the plane of focus. Relative to the size of the object, the size of the blur due to defocusing is the same for any lens at a given aperture, but the size of the object relative to the sensor is larger with the longer lens, so the effect is more pronounced.
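To see this in numbers, we can compare the defocus blur of a background point across two formats framed identically at the same f number. This is a rough sketch using the thin-lens blur formula; the distances reuse the assumed sensor widths and matched focal length from the sketch above.

```python
# Defocus blur of a point at distance x when the lens is focused at s
# (thin-lens approximation):
#   b = (f^2 / N) * |x - s| / (x * (s - f))
# Expressed as a fraction of sensor width so the two formats compare
# fairly. All distances in mm; sensor widths are assumed nominal values.

def blur_fraction(focal_mm, f_number, subject_mm, background_mm, width_mm):
    b = (focal_mm ** 2 / f_number) * abs(background_mm - subject_mm) / (
        background_mm * (subject_mm - focal_mm))
    return b / width_mm

# Same framing, both at f/2.8, subject at 3m, background at 10m:
print(blur_fraction(50.0, 2.8, 3000, 10000, 24.9))  # Super 35: ~0.0085
print(blur_fraction(34.7, 2.8, 3000, 10000, 17.3))  # MFT:      ~0.0059
```

The larger format's background blur comes out bigger by roughly the crop factor, which is the magnification effect described above.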

Still with me? Anyone paying close attention will have realised that in many ways we're beginning to describe a rather self-referential situation, and the most accurate way to describe the relationship between sensor size and depth of field is to say that larger sensors provoke reduced depth of field given the same framing, aperture, and layout of the scene. And that's before we even start to consider hyperfocal photography, or the strange things that happen when the subject distance is close to the focal length.

Pixel size and dynamic range

Top up your coffee, because yet more circularities abound in the world of pixel sizes and dynamic range. A large sensor has, clearly, bigger pixels than a smaller one of the same resolution. Big pixels are more sensitive simply because a larger sensitive area will receive more light from a given projected image. Further, a bigger pixel might have what's referred to as a larger full well capacity, increasing the number of photons it can count before it becomes full. Both of these increase dynamic range: greater sensitivity extends the dark end of the scale, allowing for more shadow detail, while greater capacity extends the bright end of the scale, allowing for more highlight detail before the image clips to white. And wide dynamic range is a very large technical part of cinematic-looking images, based on the historical response of photochemical film.
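As a toy model of those two effects, dynamic range can be approximated as the ratio of full well capacity to the noise floor, expressed in stops. The electron counts below are assumptions for illustration, not measurements from any real sensor.

```python
import math

# Toy model: dynamic range in stops as the ratio of full well capacity
# (charge a pixel holds before clipping) to the read-noise floor.
# All figures are illustrative assumptions.

def dynamic_range_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

print(dynamic_range_stops(30000, 5))  # smaller pixel:       ~12.6 stops
print(dynamic_range_stops(90000, 5))  # 3x bigger full well: ~14.1 stops
```

Tripling the capacity buys about a stop and a half of highlight headroom in this model; quieter shadows extend the other end of the scale, which is where the next point comes in.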

As we've mentioned before on RedShark, the absolute dynamic range of a camera is somewhat subject to interpretation: the bright white clipping point, at which the pixels are at full capacity, is fairly well defined, but the point at which dark shadow detail becomes unusable due to noise is to some extent a matter of opinion. Certainly it's true that the very best cameras – currently the F65, Alexa, and the like – boast extremely quiet shadows, due either to careful engineering or active noise reduction (which is a technique external to the sensor). Sensitivity, the ability to work in low light, is therefore also defined by how quiet we want the shadows to be, and this is ultimately a decision for the cinematographer as well as the design engineer.

The circularity comes from the relationship of a given sensor to a given lens. The brightness of the image projected by a lens depends on its f number, which is the focal length divided by (broadly) the diameter of the hole through the iris as viewed through the front of the lens: the higher the f number, the dimmer the image, with brightness falling as the square of that ratio. An f number is a ratio, which is why it doesn't have any units (it will be the same whether you measure the focal length and iris diameter in inches, millimetres, or very tiny fractions of a light year). But from this we can see that the brightness of the image is very much dependent on physical sizes, which is why fast lenses, with low minimum f numbers, tend to become physically bulky, especially if they have a long focal length.
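A trivial sketch makes the point about units, and about how quickly stopping down costs light; the figures are arbitrary.

```python
# An f number is a ratio, so the units cancel: measure focal length and
# iris diameter in millimetres or in inches and the result is identical.
f_mm, iris_mm = 50.0, 17.86             # ~f/2.8 worth of iris on a 50mm
f_in, iris_in = f_mm / 25.4, iris_mm / 25.4

print(f_mm / iris_mm)  # ~2.8
print(f_in / iris_in)  # ~2.8: the same, unitless, number

# Image brightness falls with the square of the f number, so stopping
# down from f/2.8 to f/4 roughly halves the light reaching the sensor.
print((2.8 / 4.0) ** 2)  # ~0.49
```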

So, larger sensors require longer focal lengths which, all else being equal, will provide dimmer images – but the larger sensor is itself more sensitive. A smaller sensor, with smaller pixels, has reduced sensitivity. The small size requires a lens of shorter focal length which intrinsically has a lower minimum f number than a longer lens of otherwise identical dimensions, and provides a brighter image, potentially offsetting the loss of sensitivity.
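One rough way to see the offset, under the usual equivalence assumptions: keep the framing and the physical iris diameter constant across formats, and the shorter lens needed by the smaller sensor comes out with a lower f number. The figures below reuse the assumed sensor widths and matched focal length from the earlier sketches.

```python
# Same framing, same physical iris diameter, two formats. The shorter
# focal length over the same iris gives a lower f number, i.e. a
# brighter image to offset the smaller pixels' lower sensitivity.
iris_mm = 50.0 / 2.8        # ~17.9mm iris behind a 50mm at f/2.8
matched_focal_mm = 34.7     # focal length matching the framing on MFT

print(matched_focal_mm / iris_mm)  # ~1.9: the smaller format's f number
```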

This is a world of somewhat circular engineering compromises. Back in reality, these engineering concerns are greatly obscured by clever lens design and issues of ergonomics (such as in an ENG lens with low f number and enormously variable focal length) and optical quality (such as the technical and creative decision by a cinematographer that a lens performs most appropriately at, say, f/4).

I'm grateful to anyone who's stuck with this article through so much theoretical musing on such a dry subject, but ideally, a greater understanding of the underlying engineering issues should make us all better able to specify the most appropriate equipment for a job. Mainly, though, we should all be grateful that it's possible to rent cleverly-designed glass that makes many of these considerations almost moot in practice.


Part one of this two-part series on Lenses and Sensors is available here


Tags: Technology
