
Noise: why it's the ultimate limiting factor in the quality of an image


RedShark Replay: RedShark Technical Editor Phil Rhodes delivers a dose of sobering news: some noise in imaging systems is impossible to eliminate. Blame physics.

We're used to thinking of the light that hits a photosensor in a camera as an analogue phenomenon – something that's continuously variable between the blackness of no light and some maximum which we think of as white. Digital sampling theory, and experience, tells us that most current cameras need ten, twelve or sixteen bits of resolution in order to capture all the variability of light that the sensor can see without visible steps between levels of brightness. Audio devices differ in that they often use 24-bit sampling, because audio waveforms span a much larger range of values (within their own physics) than the image data of current cameras, but ultimately the same techniques are at work. We take an analogue, real-world, continuously-variable value, sample it using a sensor, and record that value as the nearest-available digital number.
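As a purely illustrative sketch of that last step (nothing camera-specific is assumed; the 0.18 level and the bit depths are arbitrary), the "nearest-available digital number" idea looks like this:

```python
# A minimal, purely illustrative sketch: recording a continuously-variable
# level as the nearest available digital code at a given bit depth.

def quantise(level, bits):
    """Map a level between 0.0 (black) and 1.0 (peak white) to the nearest code."""
    codes = 2 ** bits                     # 256 codes for 8-bit, 65,536 for 16-bit
    return round(level * (codes - 1))     # the nearest-available digital number

# The same analogue level lands on a different nearest code at each depth.
for bits in (8, 10, 12, 16):
    print(f"{bits:>2}-bit: code {quantise(0.18, bits)}")   # 0.18 is an arbitrary grey level
```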

Only...well, not quite. Audio levels are, to every practical purpose, really continuously variable, being formed by waves of pressure. Light, on the other hand, is made of photons, or at least photons are the fundamental particles in the description of light that makes a lot of complicated physics make sense. As such, a camera is a photon counter, and there are only a certain number of brightness levels which are physically possible. If we consider blackness to be zero photons detected within a certain period of time, the lowest possible value above that is one photon. There is no such thing as half a photon, or half an electron once a piece of silicon has converted the photon into an electron via the photoelectric effect.

Technical difficulties (or, rather, the difficulties of better tech)

Until recently, all practical imaging sensors were imperfect enough that these effects were largely buried in the fizz and snow of other noise sources, and of interest only in a physics laboratory. Scientific instruments such as photomultiplier tubes, of the same broad type as those used in flying-spot telecines, are capable of detecting single photons, but each photosite on a camera's sensor could not reliably do so. However, modern cameras are starting to offer performance that actually approaches the levels needed to make this a problem, being able to detect a few tens of photons at a time.

Consider these figures in the light of cameras such as Sony's mighty Alpha 7S, with its prodigious low light capability and wide dynamic range. The increased dynamic range of modern cameras doesn't (or doesn't much) come from adding to the white end of the range. It mainly comes from adding to the dark end of the range, usually by improving noise performance so that previously too-noisy shadow detail becomes usable picture information. The 7S itself has been criticised, if one could call it a criticism, for having too few stops of dynamic range above its midtones, and comparatively too many below. This is based on an assumption of what "midtone" means that's intrinsic to the camera's inbuilt picture profiles and which may not be valid in the general case, but for the purposes of our discussion, the situation is clear: modern cameras are being improved, among other things, by reducing noise.

Noise by chance

So, because noise is cumulative, reducing conventional sources of noise, such as thermal noise or electromagnetic or radio-frequency interference, is starting to reveal other sources of noise which are hard or impossible to fix. Because of the quantisation of light into photons, there can be a random variation in the image caused by the statistical likelihood of a photon actually striking an object in the scene and then the sensor during one particular exposure. If we photograph a very dark scene with a very sensitive camera, such that our shutter time remains short despite the lack of light, the actual photon counts will be low.

Consider a surface that is 50% grey, so that any given photon striking it has a 50% chance of being reflected and a 50% chance of being absorbed. If we image that surface using a million photons, very nearly 500,000 of them (the exact number varying slightly by chance) will bounce off and be available for imaging, so the grey will be fairly accurately recorded. However, if we image that same surface using just one photon, the surface has a 50% chance of being recorded as entirely black, because whether that one particular photon bounced off or was absorbed is purely a matter of chance.
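To make the role of chance concrete, here is a minimal simulation sketch of that thought experiment (the 50% reflectance is the only assumption):

```python
# Illustrative sketch of the thought experiment above: each incident photon
# independently has a 50% chance of bouncing off the 50%-grey surface.
import random

def reflected(photons, reflectance=0.5):
    """Count how many of the incident photons happen to be reflected."""
    return sum(1 for _ in range(photons) if random.random() < reflectance)

# A million photons: the count lands very close to 500,000 every time.
print(reflected(1_000_000))                    # e.g. 500312 or 499874

# A single photon: the result is either 0 (black) or 1, purely by chance.
print([reflected(1) for _ in range(10)])       # e.g. [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
```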

In a more realistic scenario, consider a 16-bit linear light image, which can record 65,536 brightness levels. If the camera is configured such that peak white requires fewer than that many photons to strike the sensor, we're already recording the image with a greater luminance resolution than the underlying physics can supply, and we will see at least one code value of noise. Or, if we image a scene with an 8-bit camera using only 256 photons to illuminate a 50%-reflective subject, on average 128 photons will be reflected. We can record that at code value 128 in our (linear) digital file, but successive exposures might yield 127 reflected photons, or 129, or other figures, at random, depending on how many photons actually bounced off the subject during that particular exposure.
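As an illustrative sketch rather than anything drawn from a real camera pipeline, repeating that 256-photon exposure many times shows both the average code value and how far it wanders from exposure to exposure:

```python
# Illustrative sketch of the 8-bit case: 256 photons per exposure aimed at a
# 50%-reflective subject, with the reflected count recorded as a linear code value.
import random
import statistics

def exposure(photons=256, reflectance=0.5):
    """Photons that happen to reflect during one exposure: the recorded code value."""
    return sum(1 for _ in range(photons) if random.random() < reflectance)

codes = [exposure() for _ in range(10_000)]
print(codes[:8])                   # e.g. [128, 131, 124, 127, 130, 128, 125, 129]
print(statistics.mean(codes))      # very close to 128
print(statistics.stdev(codes))     # about 8 codes of purely statistical spread
```

The spread of roughly eight code values comes entirely from which photons happened to bounce; no electronic noise source is involved at all.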

Shot noise?

The result is called shot noise, and it arises from the variable chance of a photon actually striking an object in the scene and then being projected onto the sensor during that particular exposure. It's an issue familiar to people using electron microscopes, which use electrons rather than light but suffer from the same problem. Very sensitive detectors in such microscopes are able to detect single electrons bouncing off the subject, making a quickly-refreshed, moving-image view of the subject possible. But when the electron beam moves quickly enough to form a video image of the microscopic subject, very few electrons are actually reflected from any given point, and the image is noisy due to the variable number of electrons reflected from that point in a given time period. The high-resolution, low-noise electron microscope images we see published are scanned very slowly – that is, they use a long pseudo-exposure, of many tens of seconds per frame – to achieve high brightness resolution and low noise in the output image.
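To put a rough number on why that long pseudo-exposure helps, here is a back-of-envelope sketch assuming the usual Poisson model of shot noise, in which the relative spread of a count falls as one over the square root of the number of particles collected:

```python
# Back-of-envelope sketch, assuming the usual Poisson model of shot noise:
# the relative spread in a photon (or electron) count falls as 1/sqrt(N),
# so collecting more particles per point reduces noise but never removes it.
import math

for count in (100, 10_000, 1_000_000, 100_000_000):
    relative_noise = 1 / math.sqrt(count)      # standard deviation / mean
    print(f"{count:>11,} particles -> ~{relative_noise:.3%} shot noise")
```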

Longer exposures alleviate shot noise in photography too, as do higher light levels. However, if we continue to develop cameras with extremely low-noise sensors which become capable of counting small numbers of photons, and if we continue to record those numbers in high-resolution digital files, we will quickly hit a point where a certain minimum level of noise cannot be avoided. This may not be objectionable noise, but it may make the least-significant bits of the digital file less useful; in some cases, it probably already does. No matter how hard we work, though, and no matter how well we suppress other sources of noise, in a given optical setup, shot noise cannot be eliminated, as a matter of fundamental physics.
