RedShark News

21 Dec

Making sense of sensors: Part 1 of 2

Written by Phil Rhodes

Image: Canon/RedShark

Film always used to be the main thing that determined how a photograph looked. Now, it's the sensor. So camera manufacturers have taken on a huge extra responsibility!

With Red's announcement of their new 6K-wide sensor and the Sony F55 with its interesting globally-shuttered arrangement, there's been a lot to talk about in sensor technology recently. We've remarked as well that the move away from film has handed camera manufacturers responsibility for the light-sensitive components of the design. With this in mind, it might be beneficial to investigate some of the principles that govern how sensors work, and how that relates to what appears in the camera brochures.

A few practical techniques

Humanity currently only has access to a few practical techniques for recording moving images, and just three of those (photochemical film, vidicon-type cathode ray imaging tubes, and silicon photovoltaic devices) have ever been widely used for film and TV imaging. With tube cameras long since unknown outside niche applications and film now carefully re-reading the writing on the wall, we're down to one. Both CMOS and CCD sensors use the same underlying physics, and while the science of the photovoltaic effect is outside the scope of this article, we can safely think of either type of sensor as a large grid of very tiny solar panels, the charge on each of which can be measured and used to form an image.
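If the grid-of-solar-panels picture seems abstract, a toy model may help (a sketch only: numpy is assumed, and every figure in it is invented). Light falls on a grid of charge wells for the length of the exposure; reading the sensor then amounts to measuring the accumulated charge and scaling it to digital values.

```python
import numpy as np

# Toy model of a sensor as a grid of charge wells (illustration only).
rng = np.random.default_rng(seed=1)

height, width = 480, 640                        # hypothetical photosite grid
scene = rng.uniform(0.0, 1.0, (height, width))  # light falling on each site
exposure_time = 1 / 50                          # a 1/50s exposure

# Photons arrive at random, so the collected charge is Poisson-distributed
# ("shot noise") around the ideal value.
ideal_electrons = scene * exposure_time * 50_000  # made-up full-well scaling
charge = rng.poisson(ideal_electrons).astype(float)

# "Reading out" the sensor: scale the measured charge to 12-bit values.
full_well = ideal_electrons.max()
image = np.clip(charge / full_well * 4095.0, 0, 4095).astype(np.uint16)
```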

CCDs

Properly, a CCD is the mechanism by which information is read out of a sensor; the actual photosensitive elements are separate, although in a practical device this distinction is slight. Electronically, a CCD is a shift register, with values rippling through it stage by stage (pixel by pixel, in an image sensor). This behaviour is used to transfer the charge on the photosites of each column, line by line, to output electronics which amplify the tiny signals gathered by the photosites.
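As a sketch of that behaviour (not a description of any real device's internals; the function name and gain figure are invented), each readout step shuffles every row down one place, with the bottom row rippling out through the single output amplifier:

```python
import numpy as np

# Sketch of CCD-style readout. The charge image shuffles downwards one row
# per step; the bottom row drops into a horizontal readout register, which
# ripples its values out one pixel at a time through a single amplifier.

def ccd_readout(charge: np.ndarray, gain: float = 2.0) -> np.ndarray:
    rows, cols = charge.shape
    wells = charge.astype(float).copy()   # charge still sitting on the sensor
    output = np.zeros((rows, cols))
    for step in range(rows):
        register = wells[-1].copy()       # bottom row enters the register
        wells[1:] = wells[:-1]            # every other row shifts down one
        wells[0] = 0.0                    # the top row is now empty
        for col in range(cols):           # one remote amplifier for everything
            output[rows - 1 - step, col] = gain * register[col]
    return output
```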

CMOS

In a common CMOS sensor (or more properly, an active pixel sensor manufactured using the CMOS process) each photosite has its own amplifier, which greatly reduces noise because the tiny photosite charges of each column need not be routed through a single, remote amplifier. The CMOS manufacturing process itself makes it possible to include other electronics on the same physical device: effectively all of them have analog-to-digital conversion onboard, which further reduces noise, and areas of the sensor can be read out individually, spreading the load of high-frame-rate photography and making tricks like selectable sensor windowing practical.
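A comparable sketch of the CMOS case (again purely illustrative, with invented names) shows why windowing comes almost for free: rows are addressed directly rather than shifted out in sequence, so reading a region is just a matter of addressing fewer rows.

```python
import numpy as np

# Sketch of active pixel (CMOS-style) readout. Each photosite is amplified
# in place and digitised by on-chip ADCs, so a window of the sensor can be
# read by addressing only the rows and columns it covers.

def cmos_window_readout(charge, top, left, height, width, gain=2.0, bits=12):
    window = charge[top:top + height, left:left + width]
    amplified = gain * window             # per-photosite amplification
    levels = 2 ** bits - 1
    full_scale = amplified.max() if amplified.max() > 0 else 1.0
    return np.round(amplified / full_scale * levels).astype(np.uint16)
```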

There are subtleties. The simple row-by-row readout of most CMOS sensors is what gives us the undesirable rolling-shutter effect that affects even very high-end cameras, if only very slightly. A mechanical shutter can completely obviate the problem by physically masking the sensor during readout, a technique that was absolutely necessary for some varieties of CCD. Anyone who's seen a Viper start up will recognise the vertical smearing as the mechanical shutter gets into synchronisation: this is simply a matter of the CCD being read out, shuffling the rows of pixels down line by line, while light is still falling on it, which creates an effect both technically analogous and visually similar to a 35mm film camera with a poorly-synchronised shutter.
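To see where the lean comes from, a minimal simulation is enough (illustrative values throughout): each successive row is sampled a little later, so a bar moving horizontally is recorded a little further across in every row.

```python
import numpy as np

# Minimal rolling-shutter simulation. Each row is sampled line_time seconds
# after the one above it, so a horizontally moving bar lands progressively
# further across the frame, producing the characteristic skew.

def rolling_shutter_frame(rows=240, cols=320, line_time=1e-4, speed=2000.0):
    frame = np.zeros((rows, cols))
    bar_width = 20
    for row in range(rows):
        t = row * line_time                      # this row is sampled later
        left = int(50 + speed * t) % (cols - bar_width)
        frame[row, left:left + bar_width] = 1.0  # the bar has moved on
    return frame
```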

Global shutter

Applying a global electronic shutter adds significantly to the complexity of the design: one extra transistor per photosite, which is up to 12 million transistors that all need to work properly on a modern sensor. Even then, since all silicon semiconductors are light-sensitive, as we've seen, these shuttering transistors leak a little light themselves, and may achieve only a 2000:1 blanking effectiveness. While that may sound like a lot, bear in mind that a sensor intended to produce 12 bits of noise-free output needs a 4096:1 signal-to-noise ratio, and many modern cameras advertise even more bits than that.
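The arithmetic behind those ratios is worth a moment (a back-of-envelope check, nothing camera-specific):

```python
import math

code_values = 2 ** 12          # a 12-bit signal has 4096 levels
leakage = code_values / 2000   # light leaking past a 2000:1 shutter

print(code_values)             # 4096
print(leakage)                 # ~2.05 code values' worth of leakage
print(math.log2(4096 / 2000))  # ~1.03 stops short of "12-bit clean"
```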

Ultimately, though, CMOS has made things much easier, and the effect of all this technology is to effectively remove, or at least massively expand, previous limits on things like resolution and frame rate, as well as increasing dynamic range and sensitivity by reducing noise. Semiconductor manufacturing process improvements are generally targeted at reducing feature size and improving reliability in any case, so the ability to make high frame rate 6000-pixel-wide sensors with an acceptable number of flaws is not the feat it would have been even a short time ago.

Part 2 will follow shortly


Comments

  • You are erroneously mixing up the SNR, the bit depth and the dynamic range. The dynamic range and the SNR are not directly related; the blanking effectiveness only determines the maximum dynamic range. The SNR determines the quality of the signal, and its meaning differs slightly between the analog and digital domains (and even between fixed- and floating-point data). An ideal 12-bit ADC has a theoretical SNR of around 74 dB (approximately 25118864:1). In a sensor, image quality is determined in the first place by four factors: light sensitivity, dynamic range, SNR and the noise floor, and their relationship is quite complex; adding the ADC makes it more complex still.

  • I think we might be talking about different things - the purpose of this article was to discuss the relationship between the noise floor of a sensor and its usable dynamic range. If I've confused that with a discussion of SNR and dynamic range in digital signal processing in general, that's my fault.

    The theoretical dynamic range of a digital signal and its SNR are, as you say, not directly related. However, the practical usable dynamic range of an actual imaging sensor and the noise it produces are very much related, since the black point is simply a matter of how much noise you're willing to tolerate, which is a very subjective thing.

    The discrepancy between the figures comes down to whether you're considering a power or an amplitude quantity: 10log(25118864) ≈ 20log(2^12) ≈ 74dB, plus or minus half a bit or so. I considered this in voltage terms because the concept of a 12-bit signal offering 2^12 quantisation steps is a lot more familiar to photographic people, who are used to thinking in terms of each stop being twice as much light. I appreciate this is not how ADCs are usually characterised when specifying an electronic component.
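    For anyone who wants to check the figures, it's a couple of lines of Python (just arithmetic, nothing sensor-specific):

```python
import math

# Power ratios use 10*log10; amplitude ratios use 20*log10.
print(10 * math.log10(25_118_864))   # ~74.0 dB, the power-ratio figure
print(20 * math.log10(2 ** 12))      # ~72.2 dB, the 4096:1 amplitude figure
# The ~1.8 dB gap sits comfortably inside "half a bit" (6 dB per bit).
```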

  • Actually I had made an error, confusing amplitude and power dB.

    If you're on the technical side, this article about SNR and CCD sensors may interest you: http://www.dspguide.com/ch25/3.htm

Phil Rhodes

Phil Rhodes is a Cinematographer, Technologist, Writer and above all Communicator. Never afraid to speak his mind, and always worth listening to, he's a frequent contributor to RedShark.
