02 Mar 2017

From CMOS to capturing single photons: the future of the image sensor

Eric Fossum: Counting photons and now counting awards. Image: Thayer School of Engineering


While practical use is likely decades off, a multinational team of engineers has just taken home the Queen Elizabeth Prize for Engineering, and one of its members is now at work on a groundbreaking single-photon camera.

There isn't a single global event like the Oscars for people who've made useful widgets, although there probably should be. The engineering world doesn't usually make nearly as much fuss about its achievements as, say, film and TV people. One thing that does happen every year, however, is the Queen Elizabeth Prize for Engineering, a million-pound award which "celebrates a ground-breaking innovation in engineering." Despite its firmly British origins, the 2017 prize has just been awarded to a multinational team in recognition of its work on imaging sensors over the last several decades.

Meet the team

One of the four winners of the 2017 QEPrize is Eric Fossum, a professor at the Thayer School of Engineering at Dartmouth College. He's interesting for several reasons, including some new thinking which we'll talk about in detail below. Fossum had already enjoyed considerable success as the principal inventor of the modern CMOS imaging sensor, specifically developing a way to transfer the accumulated charge away from each pixel – a considerable advance on the then state of the art, the charge-coupled device, which had been formalised in the late 1960s.

The second awardee is George Smith, co-inventor (with the late Willard Boyle) of that charge-coupled device, which replaced vacuum-tube image sensors and was itself effectively superseded by Fossum's CMOS technology. Smith received a quarter share of the 2009 Nobel Prize in Physics for his work on imaging sensors and holds an honorary fellowship of the Royal Photographic Society, among many other honours.

The third, Nobukazu Teranishi, is a professor at the University of Hyogo and Shizuoka University. His most significant work resulted in the pinned (not to be confused with PIN) photodiode architecture which forms the actual light-sensitive component in modern sensors. Teranishi previously worked for NEC and Panasonic and has been honoured by both the Royal Photographic Society and the Photographic Society of America, as well as the IEEE.

Finally, Michael Tompsett built the first-ever camera with a solid-state colour sensor for the English Electric Valve Company in the early 1970s. Tompsett has been a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) since 1987. It is, to put it mildly, an exceptionally good group, and the 2017 Prize will only add to its already glowing collection of accolades.

Counting photons

Fossum's most recent work is intended to improve the capture rate of imaging sensors – the proportion of photons they actually detect, as opposed to those which pass through unnoticed. This will improve sensitivity, noise floor and dynamic range. One of Fossum's papers, published last summer, talks about a quanta image sensor (QIS) involving a huge number of sensitive sites, each of which is intended to detect a single photon and to do so at very high rates. This would not, on its own, produce a useful image, since light is quantised (hence quantum theory) into chunks of energy – that is, photons – which are either present or absent and can't be subdivided.
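To make the "present or absent" point concrete, here's a minimal Python sketch of what a single readout of a field of single-photon detectors might look like, assuming Poisson-distributed photon arrivals (the standard model behind photon shot noise). The array size and exposure value are purely illustrative, not figures from Fossum's paper.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Mean photons arriving at each detector per readout.
    # The value is illustrative only.
    H = 0.5

    # Photon arrivals per detector, assuming Poisson statistics.
    photons = rng.poisson(H, size=(1024, 1024))

    # Each detector reports one bit: did at least one photon arrive?
    bits = (photons >= 1).astype(np.uint8)

    # With Poisson arrivals, P(bit = 1) = 1 - exp(-H), about 0.39 here.
    print("fraction of detectors set:", bits.mean())

Each readout is a plane of ones and zeroes – closer to a statistical sample of the scene than to a picture of it.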

An imaging sensor with the capacity to detect only a single photon would produce an image containing only single-bit pixels of full black or full white. The idea of the QIS is to use a very high-resolution sensor, then average large areas of individual photon detectors, which Fossum calls jots, over a controllable time interval to create conventional pixels. Fossum's paper talks about sensors with a billion individual photon detectors which would be read at rates of up to a thousand samples (not really frames) per second. Since both the spatial and temporal resolutions would be greatly divided down in averaging to produce a viewable image, such large numbers are unavoidable, although they imply a 125-gigabyte-per-second data stream if stored unadulterated. The paper does mention the need for schemes to reduce this to something more manageable.
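The headline figure is easy to verify: a billion jots, each producing one bit a thousand times a second, comes to 10^12 bits per second. A few lines of Python, using only the numbers quoted above, confirm the 125 GB/s claim.

    jots = 1_000_000_000      # one billion single-photon detectors
    samples_per_second = 1_000
    bits_per_sample = 1       # each jot reads out a single bit

    bits_per_second = jots * samples_per_second * bits_per_sample  # 1e12
    gigabytes_per_second = bits_per_second / 8 / 1e9               # 125.0

    print(f"raw jot stream: {gigabytes_per_second:.0f} GB/s")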

There's also discussion of various approaches to averaging the binary photon data down into conventional pixels. A big part of the idea, in general, is to record the single-photon data at the time of shooting and to handle it later with intelligent processing. This could do all sorts of things, such as reducing aliasing in high-contrast areas (by averaging overlapping regions) while minimising noise in low-contrast ones (by averaging larger regions). Because the raw photon data is retained, we could take our time in arriving at an image that's as attractive to human eyes as that data allows. To some extent, this is already done on productions, such as music videos, which might shoot high-frame-rate 4K and deliver conventional-frame-rate HD, using the extra data to finesse motion and framing choices in postproduction, safe in the knowledge that scaling down only improves image quality.
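As a rough illustration of the simplest possible approach, the Python sketch below counts photons in fixed cubes of jot bits over space and time to form conventional pixel values. The function name and kernel sizes are invented for the example; the adaptive, content-aware averaging the paper discusses would go well beyond fixed cubes like these.

    import numpy as np

    def bin_jots(bit_planes, spatial=16, temporal=8):
        """Collapse a (time, height, width) stack of single-bit jot
        readouts into pixels by counting photons in cubes of
        spatial x spatial jots over temporal consecutive readouts."""
        t, h, w = bit_planes.shape
        cube = bit_planes[: (t // temporal) * temporal]  # trim ragged frames
        cube = cube.reshape(t // temporal, temporal,
                            h // spatial, spatial,
                            w // spatial, spatial)
        # Each output pixel is the photon count inside its cube; larger
        # cubes mean lower noise but lower spatial/temporal resolution.
        return cube.sum(axis=(1, 3, 5))

    # Example: 80 readouts of a 1024x1024 jot field collapse to
    # ten 64x64-pixel frames.
    rng = np.random.default_rng(1)
    planes = (rng.poisson(0.5, size=(80, 1024, 1024)) >= 1).astype(np.uint8)
    print(bin_jots(planes).shape)  # (10, 64, 64)

The trade-off is all in the two kernel sizes: shrink them and you keep detail at the cost of noise; grow them and the image gets cleaner but softer, which is exactly the choice the intelligent post-processing would make region by region.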





Phil Rhodes

Phil Rhodes is a Cinematographer, Technologist, Writer and above all Communicator. Never afraid to speak his mind, and always worth listening to, he's a frequent contributor to RedShark.
