Can Technology Save Us?
February 11, 2010
During the past decade, the world of digital cameras has obviously gone through numerous changes.
Now, the aspect I’ve written about most here is the endless (and problematic) escalation of pixel counts. But we should remember that many other facets of camera evolution have been going on in parallel.
Today we can only shake our heads at the postage-stamp LCD screens which were once the norm on digital cameras. And autofocus technology continues to improve (although cameras can still frustrate us, making us miss shots of moving subjects).
Moore’s Law has raced onwards. The result is that the proprietary image-processing chips used in each camera get increased “horsepower” with each new generation.
Besides keeping up with the growing image-file sizes, this allows more elaborate sharpening and noise-reduction methods to be applied to each photo. (Whether this noise suppression creates weird and unnatural artifacts is still a question.)
And there are other changes which have helped offset the impact of megapixel escalation. Chip design has improved, reducing the surface area lost to wiring connections. Sensors today are also usually topped with a grid of microlenses, helping funnel most of the light striking the chip onto the active photodetectors.
At the beginning of the digital-camera revolution, CMOS sensors were a bit less developed than CCDs (which had been used in scientific applications for some time). But eventually the new challenges of CMOS technology got ironed out. Today, the DSLRs which lead their market segments all use CMOS sensors.
Not every camera maker is on the same footing, technologically. Companies control different patent portfolios. And many lack an in-house chip fab, an asset that can help move innovations to market faster.
So within a given class of cameras (e.g. a particular pixel size), you can still discover performance differences.
But the sum total of all this technology change has been that the better-designed cameras have been able to maintain and even improve image quality, even as pixel pitch continued to shrink.
Can technology keep saving us? Will progress continue forever?
I dispute that it’s even desirable to decrease pixel size further still. But one question is whether there is still some headroom left in sensor technology—allowing sensitivity per unit area to keep increasing. That could compensate for the shrinking area of each pixel.
Well, there are some important things to remember.
The first is that every pixel in a camera sensor is covered by a filter in one of three colors (the Bayer array). And by design, each of these filters blocks roughly two-thirds of the visible light spectrum.
(There was a Kodak proposal from 2007 for sensors including some “clear” pixels, which would avoid this. But that creates other problems, and I’m not aware of any shipping product based on it.)
The other issue is that there’s a hard theoretical ceiling on how sensitive any photoelectric element can be, no matter its technology. How close a chip approaches that limit is called its quantum efficiency. And out of a theoretically perfect 100%, real sensors today get surprisingly close.
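To make that definition concrete: quantum efficiency is simply the fraction of incident photons that a sensor converts into detected electrons. A minimal sketch, with illustrative numbers rather than measurements of any real chip:

```python
def quantum_efficiency(photons_in: float, electrons_out: float) -> float:
    """QE = detected electrons / incident photons (ranges from 0.0 to 1.0)."""
    return electrons_out / photons_in

# Illustrative example: 1000 photons strike a pixel and
# 600 photoelectrons are collected -> 60% QE.
print(quantum_efficiency(1000, 600))  # 0.6
```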
Considering monochrome scientific chips (i.e., no Bayer array), the best conventional microlensed models can average roughly 60% QE in the visible spectrum.
Astrophotographers worry quite a lot about QE. And a well-known one based in France, Christian Buil, actually tested and graphed the QE of various Canon DSLRs. Note, for example, that the Canon 5D did improve its green-channel sensitivity quite a bit in the Mk II version.
In Buil’s bottom graphs notice how much the Bayer color filters limit Canon’s QE compared to one top-of-the-line Kodak monochrome sensor. (The KAF-3200ME has microlenses and 6.8 micron pixels.)
So, seemingly, one area of possible improvement could be improving the color filters used in the Bayer array.
But tri-color filters are a mature technology—having had numerous uses in photography in the many decades before digital. To ensure accurate color response, you must design a dye which attenuates little within the desired band but blocks very effectively outside it. Dyes must also remain colorfast as they age or are exposed to light. Basically, it's a chemistry problem—and a surprisingly difficult one.
Considering all this, the ability to reach 35% QE (on a pixel basis) in a color-filtered chip is a pretty decent showing already.
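As a rough back-of-the-envelope check (my own arithmetic on the round figures above, not manufacturer data): if a good monochrome sensor averages about 60% QE and a color-filtered chip manages about 35% per pixel, the Bayer dyes are passing only around 58% of the light even within each filter's own band.

```python
mono_qe = 0.60    # approx. average QE of a good microlensed mono sensor
color_qe = 0.35   # approx. per-pixel QE of a color-filtered chip

# Implied in-band transmission of the Bayer filter dyes
filter_transmission = color_qe / mono_qe
print(f"Implied in-band filter transmission: {filter_transmission:.0%}")
```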
Now for years, scientific imagers have used a special trick of back-illuminating a CCD. This can push QE up to 90% in the photographic spectrum (roughly 400-700 nm) on an unfiltered chip.
And suddenly, camera makers have “invented” the same idea for photography applications. Sony is talking about tiny, point & shoot pixels here, which lose significant area to their opaque structures. So a “nearly twofold” efficiency boost might be feasible in that case.
But we saw that when pixels are larger, back illumination can only improve QE from about 60% to 90% (before filtering).
And it’s much more expensive to fabricate a chip right-side up, flip it over, and carefully thin away the substrate to the exact dimension required. Yields are lower; so when you try to scale it up to larger chips, costs are high. It’s not clear whether this will really be an economical option for DSLR-sized sensors.
But wouldn’t it be a massive breakthrough to add 50% more light-gathering ability?
Actually, less than you might think. A 1.5× gain in light is only about half an f/stop. You get a bigger improvement just from the area increase when switching from a Four Thirds sensor to an APS-C model.
So back-illumination is an improvement worth pursuing—especially for cameras using the teeniest chips, which are the most handicapped by undersized pixels.
But beyond that, we start to hit serious limits.
Pure sensor size remains the most important factor in digital-photo image quality. And no geeky wizardry is likely to change that soon.