
The Sony A7R II has a back-side illuminated sensor. How does this work and why is it better?


Image: Sony a7R II with 42MP back-side illuminated sensor. Credit: Sony / RedShark News

Here's the run-down on back-side illuminated sensors (like those in the recently announced Sony cameras) including how they differ from more conventional designs and why we're still waiting for an ideal solution.

It's a surprising fact, but true nonetheless, that most of the imaging sensors built to date have been fundamentally backwards. Now, though, new technology is starting to flip things around to a more intuitively correct orientation.

Despite the fact that the new sensors are actually built in the manner that would seem most obvious, with the light-sensitive elements at the front, they're referred to as 'back-side illuminated' because the arrangement is the opposite of what can be found in more conventional designs. To date, most sensors, especially the physically large ones used in digital cinematography cameras, have been built with the light-sensitive parts partially buried under the electrically conductive traces that connect the pixels to the output electronics. It's part of the convenience of a CMOS imaging sensor (as opposed to a CCD) that these electronics can be built directly into the sensor, giving camera designers direct access to digital information without any requirement to handle tiny, fragile analogue signals.

A problem of convenience

The reason the electronics end up in front of the light-sensitive pixels is a convenience of manufacturing. Semiconductors such as imaging sensors are made on wafers of silicon – the round, shiny discs often seen in photographs and documentary footage of the fabrication facilities where such things are made. To produce a useful device, successive layers of conductive metals, separating insulators and other materials are deposited onto the wafer and selectively etched away using photolithography (a light-sensitive etch resist and a projected pattern of light control which areas are removed and which remain). The processes involved in doing this at the precision required by modern electronics are not trivial and there are limitations on what can be done. In particular, silicon is the only material that responds to light strongly enough, at practical cost, to serve as a mass-produced imaging sensor, so the photodiodes must be formed in the wafer itself, while everything else is deposited on top of it. By default, then, the electronics end up overlaid on top of the light-sensitive pixels, between the incoming light and the silicon that detects it.

Needless to say, this is not ideal. Light which would have struck the pixel may strike the electronics instead, reducing sensitivity or increasing noise. The pixel itself may also end up at the bottom of an effective shaft created by the stacked layers of electronics, and it's at least partly this phenomenon that creates the off-axis problems of modern imaging sensors. If a lens projects with image-space telecentricity – that is, the beams of light leaving the back of the lens travel parallel to the optical axis and strike the sensor head-on – all is well. Many lenses, however, particularly old favourites from the days of purely analogue photography, don't behave this way, because film never required it. The behaviour may also vary with wavelength, so lenses which are not image-space telecentric can vignette visibly, with or without a colour tint towards the corners of the frame.

Finally, the obscuring electronics reduce fill factor (the proportion of the sensor's surface that is actually light-sensitive, as opposed to the gaps between pixels). Poor fill factor, characterised by big gaps between pixels, exacerbates aliasing, requiring a more aggressive optical low-pass filter and potentially affecting perceived sharpness. Some of these problems can be mitigated by placing a microlens array over the sensor, so that light falling on the entire pixel area is gathered and directed onto the photodiode, although this can further exacerbate off-axis problems.
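To make the 'shaft' problem concrete, here's a deliberately crude, one-dimensional sketch in Python. The photodiode width and interconnect-stack depth are illustrative assumptions, not figures from any real device; the model simply computes how much of the pixel aperture remains illuminated as the chief ray angle grows. A back-illuminated pixel, with nothing stacked above it, corresponds to a stack depth of zero.

```python
# Toy 1-D model of off-axis light loss in a front-side illuminated (FSI)
# pixel, where the photodiode sits at the bottom of a "shaft" formed by
# the metal interconnect layers. All dimensions are illustrative
# assumptions, not real device parameters.
import math

PIXEL_WIDTH_UM = 4.5   # assumed photodiode width
STACK_DEPTH_UM = 3.0   # assumed depth of the interconnect stack (FSI)

def capture_fraction(angle_deg: float, stack_depth_um: float) -> float:
    """Fraction of the pixel aperture still illuminated when light
    arrives at the given chief ray angle. The beam footprint shifts
    sideways by depth * tan(angle) before reaching the photodiode."""
    shift = stack_depth_um * math.tan(math.radians(angle_deg))
    return max(0.0, PIXEL_WIDTH_UM - shift) / PIXEL_WIDTH_UM

for angle in (0, 10, 20, 30):
    fsi = capture_fraction(angle, STACK_DEPTH_UM)
    bsi = capture_fraction(angle, 0.0)  # BSI: no stack above the pixel
    print(f"{angle:2d} deg  FSI {fsi:4.0%}  BSI {bsi:4.0%}")
```

Even at modest angles, a few microns of stack depth cost a meaningful fraction of the light in this toy model, which is exactly the corner shading described above; the back-illuminated case is unaffected.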

Approaching an ideal solution

Placing the pixels at the front of the sensor helps to avoid all of these issues, moving toward an ideal sensor with one hundred per cent fill factor and no off-axis rejection. Various approaches to achieving this exist, but current releases, principally by Sony, involve thinning the silicon wafer at the back of the sensor to the point where light may pass through and illuminate the pixels. This raises challenges of its own: silicon is not very transparent, so the layer must be extremely thin if absorption is not to lose more light than the technique would otherwise gain. Ordinarily, at least one layer of a semiconductor device must be left thick enough for the die to be handled and mounted practically. Making back-side illuminated sensors practical has therefore required significant new work, particularly in the area of through-silicon vias, which allow layers of silicon to be electrically connected together.
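To get a feel for how opaque silicon really is, here's a back-of-the-envelope Beer–Lambert calculation, T = exp(-αd), of the fraction of light surviving a given thickness of silicon before it reaches the photodiodes. The absorption coefficients below are rough, order-of-magnitude figures for silicon at visible wavelengths, included purely for illustration.

```python
# Beer-Lambert sketch of how much light survives passage through a layer
# of silicon of a given thickness: T = exp(-alpha * d). The absorption
# coefficients are rough, order-of-magnitude values for silicon at room
# temperature, quoted here purely for illustration.
import math

# wavelength (nm) -> approximate absorption coefficient (1/cm)
ALPHA_SI = {450: 2.5e4, 550: 7e3, 650: 3e3}

def transmission(alpha_per_cm: float, thickness_um: float) -> float:
    """Fraction of light remaining after the given silicon thickness."""
    return math.exp(-alpha_per_cm * thickness_um * 1e-4)  # um -> cm

for thickness in (0.5, 2.0, 10.0):  # microns of residual silicon
    row = "  ".join(f"{wl}nm {transmission(a, thickness):6.1%}"
                    for wl, a in sorted(ALPHA_SI.items()))
    print(f"{thickness:4.1f} um: {row}")
```

On these rough figures, even half a micron of residual silicon soaks up most of the blue light, which is why the thinning has to be so aggressive, and so precisely controlled.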

The technique has parallels in nature. Human eyes, strangely enough, are front-illuminated in the same way as conventional sensors, with blood vessels in front of the light-sensitive cells. Conversely, the eyes of Cephalopoda (octopus and the like), which must work in the underwater gloom these creatures often inhabit, are built the other way around, like a back-illuminated camera sensor. Needless to say, there are downsides to the technique. Mechanical handling is one of them: the extremely thin sensor is more at risk of being damaged than a thicker one would be, though this is mainly a yield-reducing manufacturing issue, since the solution is an exercise in mechanical engineering.

The approach also bears some comparison to the wafer-stacking techniques we've discussed previously, where the light-sensitive pixels can be manufactured using materials and processes solely dedicated to the creation of good photodiodes, then connected to entirely separate layers of electronics. The materials that work well for the pixels don't make for good CMOS electronics, so the ability to separate out the manufacturing of the two makes for better sensors overall. Existing through-silicon vias can't yet be made at a small enough scale for every single pixel to have its own, though, and the more specialised interconnection techniques that would make this separate-layer design fully workable are still in development. The same technology has applications in so-called three-dimensional memory arrays, too, so there's enough incentive for research that the future looks good.

So far, the advantages of back-side illuminated sensors have been most apparent in examples where the resolution is high compared to the physical size, where the extra sensitivity is really valuable and where physical handling is easiest. This has meant application in cellphones, although Sony has announced at least one full-frame 35mm-sized sensor using the technology. Perhaps we can expect to see another jump in overall performance soon.
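For a sense of scale, here's a quick sketch of the arithmetic: pixel pitch is just sensor width divided by horizontal pixel count. The sensor widths and pixel counts below are nominal figures assumed for illustration, not quoted from specification sheets.

```python
# Rough pixel-pitch arithmetic: why high resolution relative to sensor
# size makes back-side illumination attractive. Sensor dimensions and
# pixel counts are nominal figures, assumed for illustration.
SENSORS = {
    "Full-frame 42MP (a7R II class)": (35.9, 7952),  # width mm, horiz. pixels
    "1/2.3-inch 16MP cellphone":      (6.2,  4608),
}

for name, (width_mm, h_pixels) in SENSORS.items():
    pitch_um = width_mm * 1000 / h_pixels
    print(f"{name}: ~{pitch_um:.2f} um pixel pitch")
```

At roughly a third of the pitch, each cellphone pixel gathers something like a tenth of the light of its full-frame counterpart, which is why back-side illumination arrived in phones first: that's where every photon counts.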
