No Way to Cheat the Laws of Physics

January 13, 2010

Today I noticed that Pop Photo magazine had added its confirmation to my own informal impressions: The Panasonic GF1, which has otherwise set the digital-photography world aflame, is just not that impressive at high ISOs.

Their review called the GF1’s ISO 800 “barely acceptable.” This jibes with my impressions, from looking hard at a few GF1 high-ISO JPEGs. In the shadows I see the defect I’ve come to call “ink blotching”—hard-edged areas of pure black. I’m very sensitized to this defect, and find it quite distracting. The GF1’s images also seem to give a lot of maze-like texture in darker areas that ought to be smooth.

Panasonic GF1

Since the entire point of “small camera large sensor” bodies is improved low-light performance, this is incredibly frustrating. As I’ve said before, having usable ISO-800 JPEGs is non-negotiable for me. And I’m reluctantly concluding that Micro Four-Thirds, with its inherently smaller pixels, might always be borderline for this.

In fairness, some µ4/3 bodies seem to do a better job. I came across a seemingly dramatic comparison between the GF1 and the older, more video-oriented GH1. (Unfortunately I can find zero information about the test conditions.) And, if you believe the rumors, a new and improved batch of µ4/3 cameras may be a mere few months away. Could some breakthrough technical fix improve the high-ISO noise performance?

I just came across an exceptionally detailed look at how digital sensors work. And, unfortunately, for fundamental physical reasons, small pixels will always suck. (Take a moment to look through that article, because it’s deeply useful in understanding the issues.)

I had always assumed that circuit noise, or pixel-to-pixel sensitivity variations, were the major contributors to digital-sensor noise. This would be an optimistic viewpoint, because if so, some newer technology might reduce the problems. But that’s not correct.

The basic source of sensor noise is that photons arrive at random, sprinkled unevenly across the pixel grid, so neighboring pixel wells end up holding different amounts of charge even under perfectly uniform light. This is photon shot noise, and its relative size shrinks only as the square root of the number of photons collected. Simply put, the only way to lessen the random variability between neighboring pixels is to make each photosite larger, so its greater area intercepts more photons.
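
To make that concrete, here is a minimal Python sketch of shot noise. The photon flux is an invented, purely illustrative number; the point is only that the signal-to-noise ratio tracks the square root of the photons each photosite collects, so the larger photosite wins automatically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative photon flux: photons per square micron reaching the sensor
# during the exposure. The absolute value is made up; only the scaling matters.
photons_per_um2 = 40

for pitch_um in (2.0, 6.0):  # a small point-and-shoot pixel vs. a 6-micron pixel
    mean_photons = photons_per_um2 * pitch_um ** 2
    # Photon arrival is a Poisson process: simulate a million pixels
    # all looking at the same perfectly uniform gray patch.
    samples = rng.poisson(mean_photons, size=1_000_000)
    snr = samples.mean() / samples.std()
    print(f"{pitch_um:.0f} um pixels: mean {samples.mean():7.1f} photons, "
          f"SNR {snr:5.1f} (sqrt(N) = {np.sqrt(mean_photons):5.1f})")
```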

On top of that, a photon must travel a certain average distance into the silicon substrate before it is absorbed and kicks out an electron. For the yellow-green light our eyes are most sensitive to, that distance is about 3.3 microns. So what happens when you jam a bunch of 1.5-micron-wide pixels together, as in today's latest point-n-shoots? It's likely that the freed electron ends up in the wrong pixel bin, so the claimed higher resolution is simply bogus from the start.
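
A rough back-of-envelope illustrates the mismatch. Assuming simple exponential (Beer-Lambert) absorption with the 3.3-micron figure quoted above as the absorption length, only about a third of green photons are absorbed within the first 1.5 microns of silicon; the rest penetrate deeper, where the freed charge has that much more opportunity to wander sideways. The numbers are illustrative only and ignore microlenses, diffusion details, and the spread of wavelengths.

```python
import math

# Rough absorption length for yellow-green (~550 nm) light in silicon,
# taken from the article linked above.
absorption_length_um = 3.3

# Beer-Lambert: the fraction of photons absorbed within depth z is 1 - exp(-z / L).
for depth_um in (1.5, 3.3, 6.0):
    absorbed = 1 - math.exp(-depth_um / absorption_length_um)
    print(f"within the first {depth_um:3.1f} um of silicon: {absorbed:5.1%} absorbed")
```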

Also note that the smaller the pixels, the wider the f/stop at which the camera becomes diffraction-limited. Sometimes that limit is wider than the lens's own maximum aperture, meaning that at every setting the lens offers, diffraction already blurs detail to something coarser than the pixel pitch. Which means, again: the claimed megapixel resolution is bogus.
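
As a rough illustration, here is a small Python sketch using the common (and debatable) rule of thumb that resolution starts to suffer once the Airy disk diameter, about 2.44 times the wavelength times the f-number, exceeds roughly two pixel pitches. The exact threshold can be argued over, but the trend cannot: with 1.5-micron pixels the limit lands near f/2.2, wider than many compact zoom lenses can open.

```python
# Airy disk diameter is roughly 2.44 * wavelength * f-number. Solving for the
# f-number at which that diameter equals two pixel pitches gives a rough
# "diffraction-limited" aperture for each pixel size.
wavelength_um = 0.55  # yellow-green light

for pitch_um in (8.5, 6.0, 4.3, 1.5):
    f_number = 2 * pitch_um / (2.44 * wavelength_um)
    print(f"{pitch_um:3.1f} um pixels: diffraction-limited by about f/{f_number:.1f}")
```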

Now admittedly, we do not all need the astounding low-light capabilities of the Nikon D3S, with its 8.5-micron pixels. But it's clear that pixels smaller than about 6 microns are always going to be somewhat crippled: in high-ISO noise, in dynamic range, and in usable f/stop selections.

So what does that mean for camera design? For different sensor sizes, 6-micron pixels equate to the following megapixel counts:

• Medium format 33.1 x 44.2mm (5,517 x 7,367) — 40.6 Mp
• Full-frame 24 x 36mm (4,000 x 6,000) — 24.0 Mp
• APS-C 1.5x crop (2,633 x 3,933) — 10.4 Mp
• Micro Four-Thirds (2,250 x 3,000) — 6.8 Mp
• Typical 1/2.3″ point-n-shoot (770 x 1,028) — 0.8 Mp
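
For anyone who wants to check the arithmetic, here is a short Python sketch reproducing those figures. The medium-format and full-frame dimensions are the ones given above; the dimensions for the smaller formats are my own approximations, back-calculated from the listed pixel counts, so treat them as ballpark values.

```python
# Reproduce the figures above: pixel counts at a 6-micron pitch for common sensor sizes.
# Dimensions are in millimetres; only the first two come straight from the list, the
# smaller formats are approximate.
sensors = {
    "Medium format (33.1 x 44.2 mm)":        (33.1, 44.2),
    "Full-frame (24 x 36 mm)":               (24.0, 36.0),
    "APS-C 1.5x crop (~15.8 x 23.6 mm)":     (15.8, 23.6),
    "Micro Four-Thirds (~13.5 x 18 mm)":     (13.5, 18.0),
    'Typical 1/2.3" compact (~4.6 x 6.2 mm)': (4.6, 6.2),
}

pitch_mm = 0.006  # 6-micron pixels

for name, (height_mm, width_mm) in sensors.items():
    rows = round(height_mm / pitch_mm)
    cols = round(width_mm / pitch_mm)
    print(f"{name}: {rows} x {cols} = {rows * cols / 1e6:.1f} Mp")
```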

It’s interesting that in the camera marketplace today, most actual medium format offerings are more conservative than 40 Mp. And current full-frame models have only reached the 24 Mp level quite recently. Not coincidentally, the market for both categories is largely professionals—who are swayed less by marketing, and more by actual photographic results.

But the marketplace for APS-C “cropped format” DSLRs has already overshot the megapixel count which, ultimately, would have been desirable. However, the move from 10 to 12 Mp has been modest enough that any drop in performance (on theoretical grounds) has so far been balanced by improvements in sensor design, and in the processor power available for noise reduction.

However, the picture for Micro Four-Thirds is not so rosy. This format could have been a winner if camera makers had been brave enough to stay under 10 megapixels. But no doubt they feared a marketing disaster if they tried to sell a $900+ camera without the “expected” specs for that price.

About point-and-shoots, we can simply say: They’re junk.


3 Responses to “No Way to Cheat the Laws of Physics”

  1. eric Says:

    I remember printing 6MP images back in the day, and you have no problem up to 11×14 or so. I’d be head over heels with an m4/3 camera with giant-sized pixels that delivered a clean, noise-free 6MP. It’s hard these days for people to imagine paying $800 for 6MP, but think back only a few short years, when the Canon D60 was almost $4k.


  2. […] Second, it uses a Kodak-built 40-megapixel sensor (it’s assumed to be the KAF-40000). Its 6-micron pixels are promised to offer high dynamic range and low noise. And coincidentally, those specs match a “rule of thumb” that I suggested two months ago. […]


  3. […] the site Petavoxel devotes a good deal of attention to the megapixel myth (admittedly a bit technical), but it comes down to […]


Comments are closed.
