Follow the Sensors…

March 15, 2010

After the release of the Pentax 645D, Kodak’s “PluggedIn” blog confirmed what many had suspected: Their KAF-40000 sensor was the one Pentax used to create the new camera.

Inside Kodak's Sensor Fab

Kodak has something of a tradition of announcing when their sensors are being used in high-profile cameras, such as Leica’s M8, M9, and S2—even giving the Kodak catalog numbers.

But their openness on this subject is a bit unusual.

As you may know, creating a modern semiconductor chip fab is staggeringly expensive—up to several billion dollars. So it’s understandable that behind the scenes, sensor chips are mainly manufactured by a few electronics giants.

And selling those chips has become a cut-throat, commodity business; so camera makers sometimes obtain sensors from surprising sources.

But it’s hard to trumpet your own brand superiority while admitting your camera’s vital guts were built by some competitor. So many of these relationships are not public knowledge.

But if we pay careful attention… we might be able to make some interesting deductions!

Panasonic Image Sensors

Some of the big names in CMOS image sensors (“CIS” in industry jargon) are familiar brands like Sony, Samsung, and Canon. But cell-phone modules and other utilitarian applications lead the overall sales numbers; and in this market, the leader is a company called Aptina.

No, I didn’t know the name either. But that’s not surprising, since they were only recently spun off from Micron Technology.

Yes, that’s the same Micron who makes computer memory. As it turns out, many of the fab techniques used to produce DRAM apply directly to CMOS sensor manufacture.

Another of the world’s powerhouses in semiconductor manufacturing is Samsung. And it’s widely known that Samsung made the imaging chips used in many Pentax DSLRs. (It would have been hard to keep their relationship a secret: Samsung’s own DSLRs were identical twins of Pentax’s.)

Samsung currently builds a 14.6 megapixel APS-C chip, called the S5K1N1F. Not only was this used in Pentax’s K-7, but also in Samsung’s GX-20. And it’s assumed that Samsung’s new mirrorless NX10 uses essentially the same sensor.

Panasonic’s semiconductor division does offer some smaller CCD sensors for sale, up to 12 megapixels. But when it comes to MOS sensors, only their partner Olympus gets to share the Four Thirds format, 12 Mp “Live MOS” chip used in the Lumix G series.

Meanwhile, it remains mystifying to me that the apparently significant refinements in the GH1 sensor don’t seem to have benefited any other Four Thirds camera yet. (Why?)

As I discussed last week, Sony’s APS-C chips apparently make their way into several Nikon models, as well as the Pentax K-x, the Ricoh GXR A12 modules, and probably the Leica X1.

But Sony has also brought out a new chip for “serious” compact cameras—intended to offer better low-light sensitivity despite its 2 micron pixels. It’s the 1/1.7 format, 10 Mp model ICX685CQZ. You can download a PDF with its specs here.

On the second page of that PDF, note the “number of recommended recording pixels” is 3648 × 2736.

Isn’t it rather remarkable that the top resolution of the Canon G11, and also their S90, matches this to the exact pixel? And how about the Ricoh GRD III, too?

Crypto-Sony?

And even Samsung’s newly-announced TL500—it’s the same 3648 × 2736!
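To make the comparison concrete, here’s a minimal sketch (in Python) of that “follow the pixels” deduction: take the recommended recording resolution from the Sony data sheet and check it against the top still-image resolution each camera advertises. The resolutions are the ones quoted above; of course, an exact match is suggestive, not proof of a shared sensor.

    # Compare the ICX685CQZ's "recommended recording pixels" (from the Sony
    # data sheet) against the top still-image resolution of each camera.
    # An exact match hints at a shared sensor, though it doesn't prove it.

    SENSOR_RES = (3648, 2736)  # Sony ICX685CQZ, 1/1.7 format, 10 Mp

    cameras = {
        "Canon G11":     (3648, 2736),
        "Canon S90":     (3648, 2736),
        "Ricoh GRD III": (3648, 2736),
        "Samsung TL500": (3648, 2736),
    }

    for name, res in cameras.items():
        megapixels = res[0] * res[1] / 1e6
        verdict = "matches" if res == SENSOR_RES else "differs from"
        print(f"{name}: {res[0]} x {res[1]} ({megapixels:.1f} Mp) {verdict} the sensor spec")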

None of these cameras provide 720p video—an omission that many reviewers have gnashed their teeth about. However, you’ll note in the Sony specs that 720p at 30 frames/sec is not supported by that sensor.

Sony has also made a splash with a smaller 10 Mp sensor, using backside-illumination to compensate for the reduced pixel area. This is the IMX050CQK (with a specs PDF here).

In Table 3 of that PDF, again notice the 3648 × 2736 “recommended” number of pixels. And sure enough, this matches the first Sony cameras released using the chip, the WX1 and TX1.

But if you start poking around, you’ll find the Ricoh CX3 is another recent camera boasting a backside-illuminated 10 Mp sensor with the same top resolution.

So are the FujiFilm HS10 & HS11. And the Nikon P100.

Now, it seems startling that even Canon and Samsung (who surely can manufacture their own chips) might go to Sony as an outside supplier.

But when you compare CMOS sensors to older CCD technology, their economics are slightly different. CMOS requires much more investment up front, for designing more complex on-chip circuitry and creating all the layer masks. After that, though, the price to produce each part is lower.

After creating a good CMOS chip, there is a strong incentive to sell the heck out of it, to ramp up volumes. Even to a competitor. So we may see more of this backroom horse-trading in sensor chips as time goes on.
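As a rough illustration of that incentive (the dollar figures below are made-up placeholders, not anything from a real supplier), here’s how the per-chip cost falls as the up-front design and mask costs get spread over more units:

    # Toy cost model: per-chip cost = (up-front design & mask cost / volume)
    # + per-unit production cost. All numbers are hypothetical.

    def per_chip_cost(upfront, per_unit, volume):
        return upfront / volume + per_unit

    UPFRONT = 50_000_000   # hypothetical CMOS design + mask-set cost, in dollars
    PER_UNIT = 15          # hypothetical cost to produce each finished chip

    for volume in (100_000, 1_000_000, 10_000_000):
        print(f"{volume:>10,} chips: ${per_chip_cost(UPFRONT, PER_UNIT, volume):,.2f} each")

Selling the same design to a rival multiplies the volume, and pushes the unit cost down toward the bare production cost.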

In fact, Kodak’s point & shoots introduced in the summer of 2009 didn’t use Kodak sensor chips.

But that’s something they didn’t blog about.

Can Technology Save Us?

February 11, 2010

During the past decade, the world of digital cameras has obviously gone through numerous changes.

Now, the aspect I’ve written about most here is the endless (and problematic) escalation of pixel counts. But we should remember that many other facets of camera evolution have been going on in parallel.

Today we can only shake our heads at the postage-stamp LCD screens which were once the norm on digital cameras. And autofocus technology continues to improve (although cameras can still frustrate us, making us miss shots of moving subjects).

Olympus E-1: in 2003, here's the LCD that $2000 bought you

Moore’s Law has raced onwards. The result is that the proprietary image-processing chips used in each camera get increased “horsepower” with each new generation.

Besides keeping up with the growing image-file sizes, this allows more elaborate sharpening and noise-reduction methods to be applied to each photo. (Whether this noise suppression creates weird and unnatural artifacts is still a question.)

And there are other changes which have helped offset the impact of megapixel escalation. Chip design has improved, reducing the surface area lost to wiring connections. Sensors today are also usually topped with a grid of microlenses, helping funnel most of the light striking the chip onto the active photodetectors.

At the beginning of the digital-camera revolution, CMOS sensors were a bit less developed than CCDs (which had been used in scientific applications for some time). But eventually the new challenges of CMOS technology got ironed out. Today, the DSLRs which lead their market segments all use CMOS sensors.

Not every camera maker is on the same footing, technologically. Companies control different patent portfolios. Many also lack their own in-house chip fabs, and owning one can help move innovations to market faster.

So within a given class of cameras (e.g. a particular pixel size), you can still discover performance differences.

But the sum total of all this technology change has been that the better-designed cameras have been able to maintain and even improve image quality, even as pixel pitch continued to shrink.

Can technology keep saving us? Will progress continue forever?

I dispute that it’s even desirable to decrease pixel size further still. But one question is whether there is still some headroom left in sensor technology—allowing sensitivity per unit area to keep increasing. That could compensate for the shrinking area of each pixel.

Well, there are some important things to remember.

The first is that every pixel in a camera sensor is covered by a filter in one of three colors (the Bayer array). And each of those filters does its job by blocking roughly two-thirds of the visible light spectrum.

(There was a Kodak proposal from 2007 for sensors including some “clear” pixels, which would avoid this. But that creates other problems, and I’m not aware of any shipping product based on it.)
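A back-of-the-envelope way to see that two-thirds figure (this is my own toy model, not anything from Kodak or a sensor data sheet): divide the visible spectrum into three equal bands, and count how much of a white-light signal each kind of pixel can pass.

    # Toy model of Bayer-filter losses: treat visible light as three equal
    # bands (R, G, B). A color-filtered pixel admits one band; a "clear"
    # panchromatic pixel, as in the Kodak proposal, admits all three.
    # Real filter curves overlap, so this is only an approximation.

    BANDS = 3.0

    def admitted_fraction(bands_passed):
        return bands_passed / BANDS

    print(f"Bayer pixel admits : {admitted_fraction(1):.0%} of white light")
    print(f"Clear pixel admits : {admitted_fraction(3):.0%}")
    print(f"Blocked by a Bayer filter: ~{1 - admitted_fraction(1):.0%}")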

The other issue is that there’s a hard theoretical ceiling on how sensitive any photoelectric element can be, no matter its technology. How closely a chip approaches that limit is measured by its quantum efficiency. And out of a theoretically perfect 100%, real sensors today get surprisingly close.

Considering monochrome scientific chips (i.e., no Bayer array), the best conventional microlensed models can average roughly 60% QE in the visible spectrum.

Astrophotographers worry quite a lot about QE. And a well-known one based in France, Christian Buil, actually tested and graphed the QE of various Canon DSLRs. Note, for example, that the Canon 5D did improve its green-channel sensitivity quite a bit in the Mk II version.

In Buil’s bottom graphs notice how much the Bayer color filters limit Canon’s QE compared to one top-of-the-line Kodak monochrome sensor. (The KAF-3200ME has microlenses and 6.8 micron pixels.)

So, seemingly, one possible avenue of improvement is the color filters used in the Bayer array.

But tri-color filters are a mature technology—having had numerous uses in photography in the many decades before digital. To ensure accurate color response, you must design a dye which attenuates little of the desired band, but blocks very effectively outside it. Dyes must also remain colorfast as they age or are exposed to light. Basically, it’s a chemistry problem—and a surprisingly difficult one.

Considering all this, the ability to reach 35% QE (on a pixel basis) in a color-filtered chip is a pretty decent showing already.
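To put those percentages in photographic terms, here’s a quick calculation (my own arithmetic, using the QE figures quoted above) of how much headroom is left before the 100% ceiling, expressed in f/stops, where each stop is a doubling of the light captured. Note the color figure is a generous upper bound, since the Bayer filters themselves prevent a filtered pixel from ever reaching 100%.

    from math import log2

    # Headroom to the theoretical 100% QE ceiling, in f/stops (doublings).
    # QE figures are the ones quoted above; the color-chip bound is generous,
    # because the Bayer filters themselves throw light away.

    QE_MONO_TODAY  = 0.60   # good microlensed monochrome sensor
    QE_COLOR_TODAY = 0.35   # good Bayer-filtered sensor
    QE_CEILING     = 1.00

    print(f"Monochrome headroom: {log2(QE_CEILING / QE_MONO_TODAY):.2f} stops")   # ~0.74
    print(f"Color headroom:      {log2(QE_CEILING / QE_COLOR_TODAY):.2f} stops")  # ~1.51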

Now, for years, scientific imagers have used a special trick of back-illuminating a CCD. This can push QE up to 90% in the photographic spectrum (roughly 400-700 nm) on an unfiltered chip.

And suddenly, camera makers have “invented” the same idea for photography applications. Sony is talking about tiny point & shoot pixels here, which lose significant area to their opaque structures. So a “nearly twofold” efficiency boost might be feasible in that case.

But we saw that when pixels are larger, back illumination can only improve QE from about 60% to 90% (before filtering).

And it’s much more expensive to fabricate a chip right-side up, flip it over, and carefully thin away the substrate to the exact dimension required. Yields are lower; so when you try to scale it up to larger chips, costs are high. It’s not clear whether this will really be an economical option for DSLR-sized sensors.

Apogee Alta U47: back-illuminated astro camera, 1.0 Mp, $11,000

But wouldn’t it be a massive breakthrough to add 50% more light-gathering ability?

Actually, it helps less than you might think. Remember, that’s only half an f/stop. You get a bigger improvement just from the area increase of switching from a Four Thirds sensor to an APS-C model, for example.
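Those two comparisons are easy to check with a little arithmetic (the sensor dimensions below are typical figures I’m assuming; exact sizes vary by manufacturer):

    from math import log2

    # Back-illumination on large pixels: unfiltered QE from ~60% to ~90%.
    print(f"Back-illumination gain: {log2(0.90 / 0.60):.2f} stops")  # ~0.58 stop

    # Pure area gain from Four Thirds to a typical APS-C sensor.
    FOUR_THIRDS_MM2 = 17.3 * 13.0   # ~225 mm^2
    APS_C_MM2       = 23.6 * 15.7   # ~370 mm^2 (assumed typical APS-C size)
    print(f"Four Thirds -> APS-C:   {log2(APS_C_MM2 / FOUR_THIRDS_MM2):.2f} stops")  # ~0.72 stop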

So back-illumination is an improvement worth pursuing—especially for cameras using the teeniest chips, which are the most handicapped by undersized pixels.

But beyond that, we start to hit serious limits.

Pure sensor size remains the most important factor in digital-photo image quality. And no geeky wizardry is likely to change that soon.