February 14, 2010
The first thing to understand about picture noise (aka grain, speckles) is that it’s already present in the optical image brought into focus on the sensor.
Even when you photograph something featureless and uniform like a blank sky, the light falling onto the sensor isn’t creamy and smooth, like mayonnaise. At microscopic scales, it’s lumpy & gritty.
This is because light consists of individual photons. They sprinkle across the sensor at somewhat random timings and spacings. And eventually you get down to the scale where one tiny area might receive no photons at all, even as its neighbor receives many.
Perhaps one quarter of the photons striking the sensor release an electron—which is stored, then counted by the camera after the exposure. This creates the brightness value recorded for each pixel. (There is also a bit of circuit noise sneaking in, mostly affecting the darkest parts of the image.)
But no matter how carefully a camera is constructed, it is subject to photon noise—sometimes called “shot noise.” You might also hear some murmurs about Poisson statistics, the math describing how this noise is distributed.
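If you'd like to see Poisson statistics in action, here is a small simulation sketch (illustrative only: the photon counts are assumed round numbers, and real sensors add read noise and quantum-efficiency losses on top of this). The key property is that a pixel expecting N photons actually collects N ± √N, so signal-to-noise grows only as the square root of the photon count:

```python
# Sketch: shot noise follows Poisson statistics, so SNR ~ sqrt(N).
import numpy as np

rng = np.random.default_rng(42)

def pixel_snr(mean_photons, n_pixels=100_000):
    """Simulate a uniformly lit patch of pixels and measure signal/noise."""
    counts = rng.poisson(mean_photons, n_pixels)
    return counts.mean() / counts.std()

snr_dim = pixel_snr(100)        # dim exposure: SNR ~ sqrt(100) = 10
snr_bright = pixel_snr(10_000)  # 100x more photons: SNR ~ sqrt(10000) = 100

print(f"SNR at 100 photons:    {snr_dim:.1f}")
print(f"SNR at 10,000 photons: {snr_bright:.1f}")
```

Collecting 100 times more light only improves the signal-to-noise ratio tenfold, which is why noise is so stubborn.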
When you start from a focused image that's tiny (as happens with point & shoot cameras), then magnify it by dozens of times, this inherent noise becomes all the more noticeable:
In fact, the only way to reduce the speckles is to average them out over some larger area. However, the best method for doing this requires some consideration.
The most obvious solution is this: You decide what is the minimum spatial resolution needed for your uses (i.e., what pixel count), then simply make each pixel the largest area permissible. Bigger pixels = more photon averaging.
Let's recall that quite a nice 8 x 10″ print can be made from a 5 Mp image. The largest inkjet printers bought by ordinary citizens print at 13 inches wide; 9 Mp suffices for this, at least at arm's length. And any viewing on a computer screen requires far fewer pixels still.
The corollary is that when a photographer does require more pixels (and a few do), you increase the sensor size, not shrink the pixels. For a given illumination level (a particular scene brightness and f/stop) the larger sensor will simply collect more photons in total—allowing better averaging of the photon noise.
But say we take our original sensor area, then subdivide it into many smaller, but noisier, pixels. Their photon counts are bobbling all over the place now! The hope here is that later down the road, we can blend them in some useful way that reduces noise.
One brute-force method is just applying a small-radius blur to the more pixel-dense image. However this will certainly destroy detail too. It’s not clear what advantage this offers compared to starting from a crisp, lower-megapixel image (for one thing, the file size will be larger).
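Simple binning makes the trade-off explicit. In this sketch (assumed round photon counts again), summing a 2×2 block of small pixels recovers exactly the noise statistics of one big pixel covering the same area, while throwing away the extra resolution the small pixels were supposed to provide:

```python
import numpy as np

rng = np.random.default_rng(0)

# One "big" pixel collecting 400 photons vs. a 2x2 block of small pixels
# collecting 100 photons each.  Summing the block recovers the same SNR.
n = 100_000
big = rng.poisson(400, n)
small_blocks = rng.poisson(100, (n, 4)).sum(axis=1)  # 2x2 binning

print(f"big pixel    SNR: {big.mean() / big.std():.1f}")            # ~20
print(f"binned block SNR: {small_blocks.mean() / small_blocks.std():.1f}")
```

On a featureless patch, you end up exactly where the big-pixel sensor started, but only after storing and processing four times the data.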
Today, the approach actually taken is to start with the noisier high-megapixel image, then run sophisticated image-processing routines on it. Theoretically, smart algorithms can enhance true detail, like edges; while smoothing shot noise in the areas that are deemed featureless.
This is done on every current digital camera. Yet it must be done much more aggressively when using tiny sensors, like those in compact models or phone-cams.
One argument is that by doing this, we've simply turned the matter into a software problem. Throw in Moore's law, plus ever-more-clever programming, and we may get a better result than the big-pixel solution. I associate the name Nathan Myhrvold with this form of techno-optimism (e.g. here). Assuming the files were saved in raw format, you might even go back and improve photos shot in the past.
But it’s important to note the limits of this image-processing, as they apply to real cameras today.
Most inexpensive cameras do not give the option of saving raw sensor files. So before we actually see the image, the camera’s processor chip puts it through a series of steps:
Bayer demosaicing —> Denoising & Sharpening —> JPEG compression
The problem is that photon noise affects pixels randomly—without regard to their assigned color. If it happens (and statistically, it will) that several nearby “red” pixels end up too bright (because of random fluctuations), the camera can’t distinguish this from a true red detail within the subject. So, false rainbow blobs can propagate on scales much larger than individual pixels:
The next problem is that de-noising and sharpening actually tug in opposite directions. So the camera must make an educated guess about what is a true edge, sharpen that, then blur the rest.
This works pretty well when the processor finds a crisp, high-contrast outline. But low-contrast details (which are very important to our subjective sense of texture) can simply be smudged away.
The result can be a very unnatural, “watercolors” effect. Even when somewhat sharp-edged, blobs of color will be nearly featureless inside those outlines.
Or, combined with a bit more aggressive sharpening, you might get this:
Clearly, the camera’s guesses about what is true detail can fail in many real-world situations.
There’s an excellent technical PDF from the DxO Labs researchers, discussing (and attempting to quantify) this degradation. Their research was originally oriented towards cell-phone cameras (where these issues are even more severe); but the principles apply to any small-sensor camera dependent on algorithmic signal recovery.
Remember that image processing done within the camera must trade off sophistication against speed and battery consumption. Otherwise, camera performance becomes unacceptable. And larger files tax the write-speed and picture capacity of memory cards; they also take longer to load and edit in our computers.
So there is still an argument for taking the conservative approach.
We can subdivide sensors into more numerous, smaller pixels. But we should stop at the point which is sufficient for our purposes, in order to minimize reliance on this complex, possibly-flawed software image wrangling.
And when aberrations & diffraction limit the pixel count which is actually useful, the argument becomes even stronger.
February 5, 2010
It’s not the greatest camera in its class, and it’s not the worst. For a pocketable 8 megapixel model, the P60 is probably about average.
Its pixels are about 1.9 microns across; today, 1.5 or 1.4 microns has become the norm. In other words, the P60’s pixels have 60-80% more light-gathering area than the ones used in a typical 2010 compact.
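Since light-gathering area scales with the square of the pixel pitch, the "60-80% more area" claim is easy to check:

```python
# Area ratio of a 1.9-micron pixel vs. today's typical 1.5 / 1.4 micron pixels.
for pitch in (1.5, 1.4):
    ratio = (1.9 / pitch) ** 2
    print(f"1.9 um vs {pitch} um pixels: {ratio:.2f}x the area")
```

That works out to about 1.6x and 1.8x, i.e. 60-84% more area per pixel.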
So let’s take a quick look at how well it handles noise.
We’ll start with an image where the camera is set to ISO 80, the lowest available. And I’ve zoomed to a 135mm-equivalent focal length, for a close view of my (charming) model:
This is the high-quality version, to show us what the textures and details ought to look like. (Although notice that the P60, like any small-pixel camera, is struggling to keep the highlights from blowing out.)
Now we zoom out the lens, and look at some detail crops to see how well the image quality holds up. Here’s the same view of the subject, at ISO 80, ISO 200, and ISO 800 (these are now crops using about 40% of the frame width).
It’s not a huge surprise that ISO 800 looks very grainy:
The top of the camera no longer shows its original texture; any apparent detail is just the noise itself.
While at ISO 80 you could still read “München Germany” below the lens, that detail is gone now.
But let’s give Nikon some credit: Chroma noise is very well controlled here, so the speckles do not have distracting “rainbow confetti” colors. Aesthetically, this noise is fairly inoffensive.
However, what may be more worrying is that even at a moderate ISO 200, we still see some anomalies:
Instead of obvious noise, the issue is more subtle here: Rather than looking entirely photographic, the image almost begins to look painted. Noise-reduction processing has kicked in, even at a fairly low ISO—and it’s adding some of its own odd artifacts.
Remember—ideally, it’s supposed to look like this:
Of course, this “painterly” impression is much less noticeable at any reasonable viewing size. We’re pixel-peeping here.
Yet there’s something troubling about a camera that re-draws your photographs—even if it does so very tastefully.
January 29, 2010
My Petavoxel post about the “Great Megapixel Swindle,” from January 19th, has now been viewed more than 60,000 times. (Which I find kind of scary.)
And there were lots of noisy comments elsewhere, complaining that I had stacked the deck unfairly. I made a teensy crop from a cheap camera—and no surprise, it looked bad!
Fair enough. So let’s start over, stacking the deck as much as possible in favor of the photo looking good. Please look at a couple of new images:
(I am grateful to both gentlemen for posting these images under Creative Commons licensing. Paulo was also kind enough to email me his straight-from-camera JPEG; it’s the source of the crops below.)
I chose these because the camera used to take both is the Panasonic Lumix TZ5. This was among the final high-end point & shoot models to receive a full profile from DP Review; so you can have an independent opinion of its quality.
A 9 megapixel model from 2008, the TZ5 has now been replaced by a zoomier 12 Mp version. But the earlier TZ5 remains the second most popular Lumix on Flickr. Its pixels measure about 1.7 microns wide—that’s actually on the large side, compared to today’s typical compacts.
DP Review were rather enthusiastic about this camera, praising the lens in particular. It’s a Leica-branded 10x zoom, with 4 aspheric surfaces and 11 elements, including one of ED glass. This is some rather high-end stuff.
I’d be the first to admit our sample photos look crisp and vibrant at any normal viewing size. And both Paulo and Simon tell me they’re quite pleased with their TZ5s.
Note that these well-lit shots show off the TZ5 under the best possible circumstances. The sensitivity is at ISO 100 (the lowest setting), which gives the least noise. The shutter speeds are easily high enough to freeze any hand shake at the focal lengths used. But the apertures have been held to the widest possible, which minimizes diffraction.
Even with all these advantages, we begin to see some artifacts that any compact-sensor camera is prone to.
Within the same photo, a white surface in sunlight can be 500 times brighter than a dark surface in shade. That’s about 9 f/stops of difference.
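(Each f/stop is a doubling of brightness, so the stop count is just a base-2 logarithm:)

```python
import math

# How many f/stops is a 500:1 brightness ratio?
stops = math.log2(500)
print(f"A 500:1 brightness ratio is about {stops:.1f} f/stops")
```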
And smaller pixels inherently have lower dynamic range. So even in well-exposed photos like these two, the highlight areas can blow out to a textureless white:
Examining a highlight area in Photoshop, we see that the brightness levels in the selection have completely “hit the ceiling,” so that no detail can be recovered:
So small-sensor cameras will always struggle with high-contrast scenes (unless some tricky HDR processing is used). Yes, shooting in RAW could rescue a bit more from the highlights; but I don’t know of any camera under $350 which offers this option.
What about graininess in the image? (Digital-camera fans use the word “noise,” since we’re discussing electronic signals.)
Even under uniform illumination, photon counts can vary randomly between neighboring pixels. The larger the pixels, the more this averages out. Small pixels inherently have more random brightness variations—that is, higher noise.
So any compact camera must devote some fraction of its processor power to noise reduction, attempting to smooth out those speckles.
Selecting higher ISO sensitivity settings means the pixel noise is amplified even more; and so, noise-reduction must get even more aggressive. Ultimately, this causes a loss of detail, as DP Review’s TZ5 test clearly showed (scroll down to the hair sample, especially).
But our ISO 100 samples should be best-case scenario for noise. Yet even so, the NR processing is leaving some “false detail” in areas that should look smooth:
The effect is stronger in the darker parts of an image. And as you raise ISO, it will just become more obvious.
Might these simply be JPEG artifacts?
These samples are not highly compressed files—the JPEGs are about 4 MB each. Compression with JPEG can add visible “sparklies” around high-contrast edges; but flat areas of color should compress very cleanly. I say this texture comes from noise (or, noise reduction).
A camera of 9 megapixels would seem to promise rather high resolution. But this camera’s sensor is about 1/3rd the area of an aspirin tablet. So is there truly any detail for all those pixels to resolve?
Let's look at some 100% crops near the center of the photos, where lens performance ought to be at its best.
The high-contrast edges here show a lot of “snap.” But the camera’s own sharpening algorithm may take some of the credit for that.
Looking at the lower-contrast details in the shadows (like the hanging tags), the impression is noticeably softer. I feel that the optical resolution is starting to fall apart before we reach individual pixels.
The TZ5 lens gains a whole f/stop when zoomed to the wide end. At this setting, detail is a bit softer still—notice the grille in the archway. At a 4.7mm focal length, depth of field is enormous; so I don’t believe we’re seeing focus error here.
Note that the Airy disk at f/3.3 is about 4.4 microns (for green light). As I discussed before, diffraction means that no lens, no matter how flawless, can focus a pinpoint of light any smaller than this.
But a 4.4 micron-wide blur covers almost four of the TZ5’s pixels. And at any smaller aperture, the Airy disk expands even further. Thus, I remain skeptical that packing in pixels more densely than in the TZ5 would extract any more true optical detail than what we can see here.
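The Airy disk figure comes from the standard diffraction formula (first dark ring at d = 2.44 λN; 0.55 microns is a representative wavelength for green light):

```python
import math

# Airy disk diameter to the first dark ring: d = 2.44 * wavelength * f-number
wavelength_um = 0.55   # green light, in microns
f_number = 3.3
d = 2.44 * wavelength_um * f_number

pixel_pitch = 1.7      # TZ5 pixel pitch, in microns
print(f"Airy disk: {d:.1f} um, spanning {d / pixel_pitch:.1f} pixel widths")
```

The disk spans about 2.6 pixel widths, so even a perfect lens smears each point of light across a block of several TZ5 pixels.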
Now, let me be clear.
Paulo tells me, “I love my TZ5. It’s my everyday use camera (being much lighter and more compact than the 500D).”
There’s a need for cameras like this. Among pocketable models, it’s probably among the nicest available.
But even under these best-case circumstances, we start to see some limits imposed by its 1.7 micron pixels. And since the TZ5’s release in 2008, pixels in point & shoots have only shrunken more.
It’s not an accident that DP Review made the final winner of their “enthusiast” roundup Panasonic’s own LX3—a small camera, but with a larger sensor. Its pixel density of 24 Mp per sq. cm is half of today’s worst-case models. The result is that the LX3’s “high ISO performance puts most competitors to shame.”
I’m not against small cameras. I’m not even against high megapixels, if you have a genuine need for them (assuming the sensor is large enough).
But today, pixels have shrunk too far. It’s time to stop.
January 22, 2010
I frequently include links here to specs and test reports over at DPReview.com. Of all the photo websites around, they’re the ones who put digital cameras through the most exhaustive, in-house testing.
Unfortunately as the rising tide of interchangeable, me-too point & shoots became a tsunami, they elected to concentrate their efforts on the “enthusiast” side of the camera market. Now they only review compact cameras in an occasional group roundup.
One of the final times DPReview put a point & shoot model through an in-depth review was in February 2008. The subject was the Canon “Elph” SD1100 IS.
The little Elph was a very svelte, 8 Mp model. It is discontinued now—replaced by a higher-megapixel version, of course. Packing in 32 Mp per square centimeter, its pixels nonetheless had about 50% more area than today’s worst-case 14 Mp models.
So it’s worth taking a look down memory lane, and seeing what DPReview found when they tested its noise and sharpness at various ISO settings.
The key thing to notice here is that when the camera’s sensitivity is raised to higher ISO settings, it is “cranking up the volume” on a fainter signal. This inevitably would tend to add noisy speckles to the image; and so the camera’s processor must take desperate measures trying to smooth over them again.
You can see how badly blurred the image becomes at higher ISOs. Scroll down to the hair detail sample, and notice how much fine texture is obliterated, even at just ISO 400. (That speed is about the minimum you’d need to take photos indoors without flash.)
With this kind of blur, more megapixels are clearly not delivering more detail.
For comparison, let's see a similar test on a well-rated current DSLR, the Pentax K-x. Despite being a 12 Mp camera, this gives immensely cleaner and more detailed images. Even ISO 1600 looks quite decent here (to change the ISO selection, mouse over the numbers below the images).
The difference is that the pixels on the Pentax sensor are almost 10 times the light-gathering area of the Elph’s pixels.
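You can verify the "almost 10 times" figure from the published sensor dimensions (this back-of-envelope sketch divides total sensor area by pixel count, ignoring the gaps between photosites):

```python
# Rough pixel-area comparison from published sensor specs.
kx_area_mm2 = 23.6 * 15.8    # Pentax K-x, APS-C sensor
kx_pixels = 12.4e6
elph_area_mm2 = 5.76 * 4.29  # Canon SD1100 IS, 1/2.5" sensor
elph_pixels = 8.0e6

kx_pixel_um2 = kx_area_mm2 * 1e6 / kx_pixels      # mm^2 -> um^2
elph_pixel_um2 = elph_area_mm2 * 1e6 / elph_pixels

print(f"K-x:  {kx_pixel_um2:.1f} um^2 per pixel")
print(f"Elph: {elph_pixel_um2:.1f} um^2 per pixel")
print(f"Ratio: {kx_pixel_um2 / elph_pixel_um2:.1f}x")
```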
In DSLR terms, the K-x is nothing exotic—it’s considered an “upper entry-level” model. It costs about $550 with lens.
But in the world of point & shoots, I’d love to see how a typical current model would do, under DPReview’s standard test regimen. I think the results would be quite illuminating.
However, we can just use common sense. Looking at the Canon images, do you think it would be a good idea to make the pixels any smaller?