February 19, 2010
The stats show that my post, “The Great Megapixel Swindle,” continues to get quite a lot of traffic. That’s a little unnerving, given that it was written as a quick, off-the-cuff tantrum. If I’d known how many folks would read it, I would have said several things more precisely.
Around the internet, the “Swindle” spawned many, many discussion threads—I can’t keep track of them all. I did try to respond to many of the questions and misunderstandings that I was seeing, in a followup post here. And I’ve expanded on the same issues in many other posts as well.
Today I’ve noticed a thread over at Rangefinder Forum which raises the question, “isn’t it unfair to use a crop from the background? Naturally that looks bad, since it’s out of focus.”
First, the real point this example makes is this: Cameras with tiny pixels must use aggressive post-processing to reduce noise; and this can cause strange, unnatural-looking artifacts. (There’s more on the subject here.)
But on the question of depth of field, I should clarify a bit.
The EXIF data for this shot shows the Olympus FE-26 was set at the wide end of its zoom range—namely 6.3mm. Such a short focal length implies extreme depth of field. The f/stop used was f/4.6.
The H. Lee White is a 700-foot-long Great Lakes freighter. I’m not exactly sure how far the camera was from the subject; but it’s doubtful that it was closer than 15 feet.
Now, we can’t blindly apply standard depth-of-field tables here. The standard calculation (for example, the Olympus FE-26 entry here) uses a circle of confusion of 0.005 mm for this sensor format.
But at such an extreme enlargement, the assumptions behind that number break down (the conventional CoC is referenced to viewing an 8×10 print at a moderate distance).
Consider that this camera’s sensor measures only about 6 × 4.5 mm, so each individual pixel is roughly 1.53 microns wide. In other words, 0.00153 mm.
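As a quick sanity check on that figure, here’s a hedged sketch of the arithmetic. The exact column count and sensor width are my assumptions (a ~12-megapixel, 4:3 chip of this class), not numbers from a spec sheet:

```python
# Rough arithmetic check of the pixel pitch quoted above.
# Assumptions (not from the post): a ~12 MP, 4:3 sensor has about
# 4000 pixel columns, and this sensor class is about 6.1 mm wide.
sensor_width_mm = 6.1
columns = 4000
pitch_mm = sensor_width_mm / columns
print(pitch_mm * 1000)   # about 1.53 microns
```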
Clearly, the meaningful circle of confusion can’t be smaller than one pixel. Given the resolution loss that happens with Bayer interpolation, 0.002 mm seems a realistic CoC.
So the out-of-focus blur is actually negligible compared to the pixel size in this case.
You can use an alternate depth of field calculator which lets you input arbitrary values if you’d like to explore this further yourself.
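If you’d rather run the numbers yourself, here is a minimal sketch of the standard thin-lens depth-of-field formulas, plugged with the values from this shot. The 15-foot (~4572 mm) subject distance is my rough guess from the circumstances, not a measured value:

```python
# Hedged sketch: standard thin-lens depth-of-field formulas with an
# arbitrary circle of confusion (CoC). Treat the results as estimates.

def hyperfocal_mm(f_mm, f_stop, coc_mm):
    """Hyperfocal distance in mm."""
    return f_mm**2 / (f_stop * coc_mm) + f_mm

def dof_limits_mm(f_mm, f_stop, coc_mm, subject_mm):
    """Near and far limits of acceptable sharpness in mm (far may be inf)."""
    h = hyperfocal_mm(f_mm, f_stop, coc_mm)
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    if subject_mm >= h:
        far = float("inf")   # beyond the hyperfocal distance
    else:
        far = subject_mm * (h - f_mm) / (h - subject_mm)
    return near, far

# Olympus FE-26 example: 6.3 mm, f/4.6, focused at ~15 ft (~4572 mm),
# with the demanding one-pixel-scale CoC of 0.002 mm.
near, far = dof_limits_mm(6.3, 4.6, 0.002, 4572)
print(near / 1000, far)   # roughly 2.2 m out to infinity
```

Even with a CoC as strict as 0.002 mm, everything from roughly 2 meters to infinity falls within the depth of field, which is why defocus can’t explain the ugliness of that background crop.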
January 20, 2010
I woke this morning to discover that my post about megapixel madness was up to 344 comments on the Reddit “technology” page, as well as generating a lot of talk at The Consumerist. It seems to have struck a nerve.
There’s nothing like having 20,000 people suddenly read your words to make you panic, “could I have explained that a little better?” Let me say a couple of things more clearly:
- Yes, some people DO need more megapixels
Anyone who makes unusually large prints, or who routinely crops out small areas of the frame, does benefit from higher megapixel counts. However, those pixels are only useful if they can add sharp, noise-free information.
The typical point & shoot CCD would fit on top of a pencil eraser. There are fundamental limits on how much detail you can wring out of a chip that small. So, the giant-print-making, ruthlessly-cropping photographer really needs to shop for an “enthusiast” camera model—one with a larger sensor chip.
- Diffraction sets theoretical limits on image detail
Many more people viewed the “Swindle” post than read my explanation of diffraction. The key point is that even if you have a lens that is well-designed and flawless, light waves will not focus to a perfect point. The small, blurred “Airy disks” set a theoretical limit on how much actual detail a lens can resolve.
Up to a point, “oversampling” a blurry image with denser pixel spacing can be useful. But today’s point & shoots have clearly crossed the line where the pixels are MUCH smaller than the Airy disk, and squeezing in more pixels accomplishes nothing.
Plus, making pixels tinier actually worsens image quality in other respects. Marketing compact cameras by boasting higher megapixel counts is simply dishonest.
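To put a number on this, the first-minimum diameter of the Airy disk is 2.44 × λ × N. A sketch, with the assumption (mine, not from the post) of green light at 0.55 µm:

```python
# Hedged sketch: diffraction-limited Airy disk diameter vs. pixel pitch.
# The 0.55 micron (green light) wavelength is an assumption.

def airy_disk_um(f_stop, wavelength_um=0.55):
    """First-minimum Airy disk diameter in microns: 2.44 * wavelength * N."""
    return 2.44 * wavelength_um * f_stop

d = airy_disk_um(4.6)        # the f-stop from the FE-26 example shot
pixel_pitch_um = 1.53        # pixel width quoted earlier in the post
print(d, d / pixel_pitch_um) # the disk spans several pixels
```

At f/4.6 the Airy disk comes out to roughly 6 microns, about four of those 1.53-micron pixels wide, so the extra pixels are sampling blur, not detail.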
- Higher-quality lenses can’t fix this
Better lenses are preferable to bad ones; but diffraction puts a ceiling on what even the best lens can do (yes, even one with a German brand-name on it).
To get greater true image detail, the entire camera must scale up in size. This makes the Airy disks a smaller fraction of the sensor dimensions.
- Tiny pixels are low-quality pixels
A pixel that intercepts less light is handicapped from the start. Its weaker signal sits closer to the noise floor of the read-out electronics. There are more random brightness variations between adjacent pixels. And each pixel reaches saturation more quickly—blowing out the highlights to a featureless white.
I’m aware of the theory that higher-resolving but noisier pixels are okay, because in any real-world output, several pixels get blended together. But I’ve seen enough photos with weird “grit” in what ought to be clean blue skies to be suspicious of this.
First, random pixel fluctuations interact in strange ways with the color demosaicing algorithm. Distracting color speckles and rainbow artifacts appear at scales much larger than the individual pixels.
Second, the camera’s noise-reduction algorithm can add its own unnatural artifacts—obscuring true detail with weird daubs of waxy color. (This was the problem highlighted in my example photo.) It’s better to have less noise from the start.
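The shot-noise half of this argument can be sketched with a toy model (the photon counts below are illustrative assumptions, not measurements): photon arrival is random, so a pixel’s signal-to-noise ratio is the square root of the photons it collects, and that photon count scales with pixel area:

```python
# Hedged toy model: photon shot noise only (ignores read noise, etc.).
# SNR = N / sqrt(N) = sqrt(N), where N is photons collected per pixel.
# The photon counts below are made-up round numbers for illustration.
import math

def shot_noise_snr(photons):
    return photons / math.sqrt(photons)   # equals sqrt(photons)

# Halving the pixel width quarters its area, hence a quarter the photons:
big = shot_noise_snr(10000)    # larger pixel: SNR = 100.0
small = shot_noise_snr(2500)   # pixel half as wide: SNR = 50.0
print(big / small)             # SNR is cut in half: 2.0
```

So shrinking pixels doesn’t just divide the light among more of them; it makes each one noisier in proportion.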
- Many compacts perform much better than this one
That’s true. But isn’t reading an exaggerated polemic much more fun?
Let me be clear that my complaint is about TINY CHIP point & shoots. The new Micro Four Thirds cameras (which I am following closely) were created specifically to address the shortcomings of small-sensor cameras, while remaining pocketable. But they cost a lot, at least so far.
Mainly, my complaint is about honesty. Camera makers are slapping big “14 megapixel” stickers onto cameras with tiny chips.
I just want people to understand that—as The Consumerist headlined it—these are “Marketing Lies.”