Some Theory About Noise

February 14, 2010

The first thing to understand about picture noise (aka grain, speckles) is that it’s already present in the optical image brought into focus on the sensor.

Even when you photograph something featureless and uniform like a blank sky, the light falling onto the sensor isn’t creamy and smooth, like mayonnaise. At microscopic scales, it’s lumpy & gritty.

This is because light consists of individual photons. They sprinkle across the sensor at somewhat random timings and spacings. And eventually you get down to the scale where one tiny area might receive no photons at all, even as its neighbor receives many.

Perhaps one quarter of the photons striking the sensor release an electron—which is stored, then counted by the camera after the exposure. This creates the brightness value recorded for each pixel. (There is also a bit of circuit noise sneaking in, mostly affecting the darkest parts of the image.)

But no matter how carefully a camera is constructed, it is subject to photon noise—sometimes called “shot noise.” You might also hear some murmurs about Poisson statistics, the math describing how this noise is distributed.
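
If you want to see the square-root behavior for yourself, here is a minimal sketch (in Python with NumPy, my own choice of tools, nothing a camera actually runs) that simulates photon counts at a few made-up brightness levels:

```python
# A sketch of photon (shot) noise on an idealized sensor where each pixel
# simply counts photons. The mean counts are illustrative, not measurements.
import numpy as np

rng = np.random.default_rng(0)

for mean_photons in (25, 100, 400, 1600):
    # Poisson statistics: each pixel's count fluctuates around the mean
    counts = rng.poisson(mean_photons, size=100_000)
    noise = counts.std()             # ~ sqrt(mean_photons)
    snr = counts.mean() / noise      # signal-to-noise ratio
    print(f"mean={mean_photons:5d}  noise={noise:6.1f}  SNR={snr:5.1f}")

# The noise grows like the square root of the signal, so the *relative*
# noise shrinks as more photons are collected.
```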

When you start from a focused image that’s tiny (as happens with point-and-shoot cameras), then magnify it by dozens of times, this inherent noise becomes far more noticeable:

[Image: compact-camera noise sample. Caption: Small Sensors Struggle With Noise]

In fact, the only way to reduce the speckles is to average them out over some larger area. However, the best method for doing this requires some consideration.

The most obvious solution is this: You decide the minimum spatial resolution your uses actually need (i.e., the pixel count), then simply make each pixel as large as that count permits. Bigger pixels = more photon averaging.

Easy.

Let’s recall that quite a nice 8 x 10″ print can be made from a 5 Mp image. The largest inkjet printers bought by ordinary citizens print at 13 inches wide; 9 Mp suffices for this, at least at arm’s length. And any viewing on a computer screen requires far fewer pixels still.

The corollary is that when a photographer does require more pixels (and a few do), the answer is to increase the sensor size, not shrink the pixels. For a given illumination level (a particular scene brightness and f/stop) the larger sensor will simply collect more photons in total, allowing better averaging of the photon noise.
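
To put rough numbers on that, here is a back-of-the-envelope sketch. The sensor dimensions are typical published figures; the photon flux per square millimeter is a placeholder I invented, since only the relative comparison matters:

```python
# How total photon collection scales with sensor area at a fixed scene
# brightness and f/stop. PHOTONS_PER_MM2 is a made-up illustrative value.
PHOTONS_PER_MM2 = 1.0e6   # assumed photons reaching each mm^2 during the exposure

sensors = {
    "1/2.5-inch compact (5.8 x 4.3 mm)": 5.8 * 4.3,
    "APS-C (23.6 x 15.7 mm)": 23.6 * 15.7,
    "Full frame (36 x 24 mm)": 36.0 * 24.0,
}

for name, area_mm2 in sensors.items():
    total_photons = PHOTONS_PER_MM2 * area_mm2
    relative_noise = 1.0 / total_photons ** 0.5   # shot noise averages out as 1/sqrt(N)
    print(f"{name:35s} area={area_mm2:6.1f} mm^2  "
          f"photons={total_photons:.2e}  relative noise={relative_noise:.1e}")
```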

But say we take our original sensor area, then subdivide it into many smaller, but noisier, pixels. Their photon counts are bobbling all over the place now! The hope here is that later down the road, we can blend them in some useful way that reduces noise.
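
Here is a quick sketch of that blending idea, again with simulated Poisson counts only: one large pixel versus the same area carved into sixteen small pixels that get summed back together afterwards:

```python
# Subdivide one big pixel into sixteen small ones, then bin them back
# together. All counts are simulated; no real sensor data is involved.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000
big_pixel_mean = 1600                          # photons caught by one large pixel

big = rng.poisson(big_pixel_mean, size=n_trials)
small = rng.poisson(big_pixel_mean / 16, size=(n_trials, 16))  # 1/16 of the area each
binned = small.sum(axis=1)                     # add the small pixels back up

print("big pixel    SNR:", round(big.mean() / big.std(), 1))
print("small pixel  SNR:", round(small.mean() / small.std(), 1))    # much noisier alone
print("binned group SNR:", round(binned.mean() / binned.std(), 1))  # matches the big pixel
```

In this idealized simulation the re-binned small pixels match the big pixel exactly. A real sensor adds a little read-out noise to every small pixel, so the recovery is never quite free.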

One brute-force method is just applying a small-radius blur to the more pixel-dense image. However, this will certainly destroy detail too. It’s not clear what advantage this offers compared to starting from a crisp, lower-megapixel image (for one thing, the file size will be larger).
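
As a crude illustration, here is what a small-radius Gaussian blur does to a noisy synthetic edge; the scene, noise level, and blur radius are arbitrary choices for the demo (using SciPy, again my own tooling, not any camera’s firmware):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# Synthetic "scene": a flat dark field with one sharp, bright vertical edge,
# expressed as mean photon counts per pixel
scene = np.full((256, 256), 100.0)
scene[:, 128:] = 400.0

noisy = rng.poisson(scene).astype(float)       # add shot noise
blurred = gaussian_filter(noisy, sigma=1.5)    # the brute-force fix

# Noise does drop in the flat region...
print("flat-region std, noisy  :", round(noisy[:, :100].std(), 1))
print("flat-region std, blurred:", round(blurred[:, :100].std(), 1))
# ...but the once-sharp edge now ramps across several columns
print("edge profile, noisy  :", noisy.mean(axis=0)[125:132].round(0))
print("edge profile, blurred:", blurred.mean(axis=0)[125:132].round(0))
```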

Today, the approach actually taken is to start with the noisier high-megapixel image, then run sophisticated image-processing routines on it. Theoretically, smart algorithms can enhance true detail, like edges, while smoothing shot noise in the areas deemed featureless.

This is done on every current digital camera. Yet it must be done much more aggressively when using tiny sensors, like those in compact models or phone-cams.
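
To make the idea tangible, here is a toy version of that edge-versus-noise guesswork: blur only where the local contrast looks low, and leave apparent edges alone. This is my own simplified sketch with arbitrary thresholds, not the algorithm any actual camera uses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def naive_edge_aware_denoise(img, sigma=1.5, edge_threshold=30.0):
    """Blur regions that look flat; leave pixels near strong local contrast alone."""
    img = img.astype(float)
    # Crude "is there real detail here?" test: deviation from a local mean
    local_mean = uniform_filter(img, size=5)
    contrast = gaussian_filter(np.abs(img - local_mean), sigma=2.0)
    looks_flat = contrast < edge_threshold

    blurred = gaussian_filter(img, sigma=sigma)
    return np.where(looks_flat, blurred, img)

# Demo on a noisy synthetic edge, like the one in the blur example above
rng = np.random.default_rng(3)
scene = np.full((256, 256), 100.0)
scene[:, 128:] = 400.0
noisy = rng.poisson(scene).astype(float)

cleaned = naive_edge_aware_denoise(noisy)
print("flat-region std before:", round(noisy[:, :100].std(), 1))
print("flat-region std after :", round(cleaned[:, :100].std(), 1))
print("edge step preserved   :", round(cleaned.mean(axis=0)[130] - cleaned.mean(axis=0)[125], 1))
```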

One argument is that, by leaning on in-camera processing this way, we’ve simply turned the matter into a software problem. Throw in Moore’s law, plus ever-more-clever programming, and we may get a better result than the big-pixel solution. I associate the name Nathan Myhrvold with this form of techno-optimism (e.g. here). Assuming the files were saved in raw format, you might even go back and improve photos shot in the past.

But it’s important to note the limits of this image-processing, as they apply to real cameras today.

Most inexpensive cameras do not give the option of saving raw sensor files. So before we actually see the image, the camera’s processor chip puts it through a series of steps:

Bayer demosaicing → Denoising & Sharpening → JPEG compression

Remember that each pixel on the sensor has a different color filter over it. The true color image must be reconstructed using some interpolation.
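
For the curious, here is a minimal sketch of the kind of interpolation involved, assuming an RGGB Bayer layout and plain bilinear averaging; real cameras use far more elaborate (and proprietary) demosaicing:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """raw: 2-D array of photosite counts under an RGGB Bayer filter mosaic."""
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]

    # Masks marking which color filter sits over each photosite (RGGB tiling)
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Bilinear interpolation kernels: each missing sample becomes the average
    # of its nearest same-color neighbors
    rb_kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    g_kernel = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    rgb = np.zeros((h, w, 3))
    for channel, (mask, kernel) in enumerate(
            [(r_mask, rb_kernel), (g_mask, g_kernel), (b_mask, rb_kernel)]):
        sparse = np.where(mask, raw, 0.0)   # keep only this color's samples
        rgb[..., channel] = convolve(sparse, kernel, mode="mirror")
    return rgb
```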

The problem is that photon noise affects pixels randomly—without regard to their assigned color. If it happens (and statistically, it will) that several nearby “red” pixels end up too bright (because of random fluctuations), the camera can’t distinguish this from a true red detail within the subject. So, false rainbow blobs can propagate on scales much larger than individual pixels:

[Image: 100% color noise sample. Caption: 100% Crop of Color Noise]
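
As a toy illustration of where false color comes from, feed the bilinear demosaic sketch from above a uniformly gray, shot-noise-only patch; any spread between the three interpolated channels is false color, since the “scene” contains none:

```python
# Continues the bilinear_demosaic sketch above (same imports and function).
import numpy as np

rng = np.random.default_rng(4)
flat_gray = rng.poisson(200.0, size=(64, 64)).astype(float)  # no real color at all

rgb = bilinear_demosaic(flat_gray)
chroma_spread = rgb.max(axis=2) - rgb.min(axis=2)   # per-pixel max-min across R, G, B
print("mean false-color spread:", round(chroma_spread.mean(), 1))
```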

The next problem is that de-noising and sharpening actually tug in opposite directions. So the camera must make an educated guess about what is a true edge, sharpen that, then blur the rest.

This works pretty well when the processor finds a crisp, high-contrast outline. But low-contrast details (which are very important to our subjective sense of texture) can simply be smudged away.

The result can be a very unnatural “watercolors” effect: blobs of color that keep somewhat sharp edges, yet are nearly featureless inside those outlines.

Remember this?

[Image: noise-reduction “watercolors” sample. Caption: Impressionistic Noise Reduction]

Or, combined with a bit more aggressive sharpening, you might get this:

[Image: sharpened smudge sample. Caption: Pastel Crayons This Time?]

Clearly, the camera’s guesses about what is true detail can fail in many real-world situations.

There’s an excellent technical PDF from the DxO Labs researchers, discussing (and attempting to quantify) this degradation. Their research was originally oriented towards cell-phone cameras (where these issues are even more severe); but the principles apply to any small-sensor camera dependent on algorithmic signal recovery.

Remember that image processing done within the camera must trade off sophistication against speed and battery consumption. Otherwise, camera performance becomes unacceptable. And larger files tax the write-speed and picture capacity of memory cards; they also take longer to load and edit in our computers.

So there is still an argument for taking the conservative approach.

We can subdivide sensors into more numerous, smaller pixels. But we should stop at the pixel count that is sufficient for our purposes, in order to minimize reliance on this complex, possibly flawed software image wrangling.

And when aberrations & diffraction limit the pixel count which is actually useful, the argument becomes even stronger.
