If you put any faith in DxO Mark scores (and I do understand their limitations), today’s best $500 DSLRs make better images than all of the sub-$4000 cameras sold until 2008. Largely that’s thanks to some good recent APS-C sensors from Sony.

So I’m not sympathetic towards certain Pentax users, who bleat that they can’t make adequate photographs—not until the company delivers a DSLR with a 24×36 mm sensor. (With pre-Photokina Pentax rumors swirling, the issue is once again a hot topic for discussion.)

Full-Frame Pentax?

Yes, many long-time Pentaxians do have investments in full-frame “FA” lenses, ones that could cover the format.

But keep in mind, over the history of photography there have been many camera mounts which died out entirely—obsoleting numerous fine lenses. How about Contax/Yashica mount? Minolta MD, or Canon FD? (Or even earlier systems, such as Exakta or the superb Zeiss Contarex?)

By contrast, any Pentax lens made since 1983 can at least be used on a modern DSLR—metering automatically, and with the benefit of built-in shake reduction. So the handwringing that some great lenses have been “orphaned” by Pentax can get a bit exaggerated.

Some point out hopefully that not long ago, Pentax introduced new telephoto lenses bearing the FA designation. Ergo, full-frame is coming!

But truthfully, no decent lens design even breaks a sweat in covering an image circle much smaller than its focal length. With a telephoto, full-frame coverage is basically “free”—so why not go ahead and use the FA labeling? It allowed Pentax to finesse the format question for a while longer; and maybe even sold a couple of extra lenses to Pentax film-body users.

I do agree with the complaint that the viewfinders of most consumer DSLRs are puny and unsatisfying. The view through the eyepiece of a full-frame body is noticeably more generous.

However, the electronic viewfinder of an Olympus E-P2 neatly solves this problem too—at a price savings of about $1400 over today’s cheapest full-frame DSLR. And the EVIL wave is only gathering momentum.

The perpetual cry from full-frame enthusiasts is that Moore’s Law will eventually slash 24x36mm sensor pricing. To me, this seems like a wishful misunderstanding of the facts.

The shrinking of circuit sizes which permits faster processors and denser memory chips is irrelevant to sensor design—the overall chip dimensions are fixed, and the circuitry features are already as small as they need to be.

Also, figuring the costs of CMOS chip production is not entirely straightforward. It costs money to research and develop significant new improvements over prior sensor generations; it’s expensive to create the photolithography masks for all the circuit layers. Then, remember that all these overhead costs must be recouped over only two, perhaps three years of sales. After that, your sensor will be eclipsed by later and even whizzier designs.

Thus, there is more to a sensor price than just the production-line cost; it also depends on the chip quantities sold. And full-frame DSLRs have never been huge sellers.

If APS-C sensors prove entirely satisfactory for 95% of typical photographs (counting both enthusiasts and professionals), a vicious circle results. With no mass-market camera using a full-frame sensor, volumes stay low, prices high. But with sensor prices high, it’s hard to create an appealing camera at a mass-market price point.

Furthermore, let’s consider the few users who would actually benefit from greater sensor area.

For the few who find 12 megapixels inadequate, nudging the number up to 24 Mp is not the dramatic difference you might imagine. To double your linear resolution, you must quadruple the number of pixels. Resolution-hounds would do better to look to medium format cameras, with sensors of 40 Mp and up—which, conveniently, seem to be dropping towards the $10,000 mark with the release of Pentax’s 645D.
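The arithmetic behind that claim is worth spelling out: linear resolution scales with the square root of the pixel count. A minimal sketch (the megapixel figures are just the examples from this discussion):

```python
import math

def linear_gain(mp_old, mp_new):
    """Gain in linear resolution from a change in megapixel count."""
    return math.sqrt(mp_new / mp_old)

# Doubling 12 Mp to 24 Mp buys only ~41% more linear detail...
gain = linear_gain(12, 24)        # ~1.41x
# ...while truly doubling linear resolution demands 4x the pixels:
doubled = linear_gain(12, 48)     # exactly 2.0x
```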

The last full-frame holdouts are those who need extreme high-ISO potential. There’s no doubt that the 12Mp full-frame Nikon D3s makes an astounding showing here, with high-ISO performance that’s a solid 2 f/stops better than any APS-C camera. This is a legitimate advantage of class-leading 24×36 mm sensors.

Yet aside from bragging rights, we have to ask how many great photographs truly require ISO 6,400 and above. ISO 1600 already lets us shoot f/1.7 at 1/30th of a second under some pretty gloomy lighting—like the illumination a computer screen casts in a darkened room.
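To put a rough number on that "gloomy lighting" claim, the standard exposure-value formula (normalized to ISO 100) places the example above around EV 2.4, which is indeed dim-interior territory. A sketch:

```python
import math

def ev100(f_number, shutter_s, iso):
    """Exposure value normalized to ISO 100: log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# f/1.7 at 1/30 s, ISO 1600 -- the combination mentioned above
dim_scene = ev100(1.7, 1 / 30, 1600)   # ~2.4 EV: screen-glow-in-a-dark-room
```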

A day may come when sensor technology has fully matured, and every possible sensitivity tweak has been found. At that point, a particular sensor generation might hang around for a decade or more. So might those long production runs permit much lower unit costs for a full-frame sensor? There will still be a full-frame surcharge simply from the greater surface area and lower chip yields.

But who knows, perhaps it’s possible? By then we may have 40 Mp, 24×36 mm chips that are our new “medium format.”

BSI: No Panacea

May 21, 2010

In a few earlier posts I have mentioned the new generation of Sony sensors boasting “back-side illumination,” and marketed as Exmor-R (as distinct from Sony’s conventional sensors, just branded Exmor).

Back-side illumination (BSI in the industry jargon) is a tricky and costly chip-fabrication technique, where after depositing all the wiring traces on a silicon wafer, the substrate is flipped over and almost entirely thinned away. This leaves the wiring on the underside of the light-sensitive photodiodes (as Sony describes here), so these unobstructed pixels will theoretically collect more light.

BSI is promoted as one of the technological breakthroughs which might help save image quality, even as manufacturers race to cram more megapixels into tiny sensor areas. In fact, the IMX050CQK actually scaled back its pixel count to 10 Mp, compared to the 12 and 14 Mp that have become increasingly common in the point & shoot market.

Sony BSI Sensor

A Whizzy Small Sensor is Still A Small Sensor

Sony introduced the chip in its own first models in the fall of 2009, for example in the WX1. But clearly Sony found it advantageous to spread the sensor development costs over a larger production run, and apparently they’ve aggressively marketed the chip to other camera makers as well. Pretty much any 10 Mp camera sold this year advertising a backside-illuminated sensor uses it. It seems particularly popular in today’s nutty “Ultra Zoom” market segment.

So I was interested to read the review just posted by Jeff Keller of Nikon’s P100 ultrazoom camera, which uses this chip. See his conclusions here.

As reviews of these new BSI-based cameras filter out, the word seems to be that they do offer decent image quality—but hardly anything revolutionary. If their high-ISO images look smooth, it seems to be partly thanks to noise reduction processing, which can destroy detail and add unnatural, crayon-like artifacts.


DxO Labs have released their sensor test results for Olympus’s “econo” Pen model, the E-PL1.

Their tests show it having slightly worse high-ISO performance than its competitors in the “compact EVIL” segment.

DxO Mark for Compact 4/3 Bodies

The entire selling point of Micro Four Thirds is that the larger sensor offers improved picture quality, relative to typical compact cameras. DxO Mark’s “Low-Light ISO” score is expressed in ISO sensitivity numbers; and here they report that noise becomes objectionable at around ISO 500.

But the overall “DxO Mark Sensor” score falls much closer to that of a modern compact camera than to that of a good recent DSLR.

This is a disappointment, given the extra time Olympus had for developing the E-PL1; and also compared to the dramatically-better performance of its Micro Four Thirds cousin, the Panasonic GH1.

As always, note that DxO Labs tests are entirely “numbers oriented” and only analyze the raw sensor data. Handling, price, the quality of in-camera JPEG processing, etc. are not considered.

Making Peace With APS-C

March 19, 2010

It’s shocking to realize that it was only six years ago that the first digital SLRs costing under $1000 arrived.

The original Canon Digital Rebel (300D) and Nikon D70 were the first to smash that psychologically-important price barrier. These 6-megapixel pioneers effectively launched the amateur DSLR market we know today.

Sub-$1000 DSLRs

New for 2004: DSLRs for Regular People

But one argument that raged at the time—and which has never completely gone away—was about the “crop format” APS-C sensor size. (The name derives from a doomed film system; but now just means a sensor of about 23 x 15 mm.)

Was APS-C just a temporary, transitional phase? Would DSLRs eventually “grow up,” and return to the 24 x 36 mm dimensions of 135 format?

This was a significant question, at a time when virtually all available lenses were holdovers from manufacturers’ 35mm film-camera systems.

The smaller APS-C sensor meant that every lens got a (frequently-unwanted) increase in its effective focal length. Furthermore, an APS-C sensor only uses about 65% of the film-lens image circle. Why pay extra for larger, heavier, costlier—and possibly dimmer—lenses than you need?

Also, after switching lens focal lengths to get an equivalent angle of view, the APS-C camera will give deeper depth of field at a given distance and aperture. (The difference is a bit over one f/stop’s worth.)
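That "bit over one f/stop" falls out of the crop factor directly. Assuming a 1.5x crop (an assumption; Canon's APS-C is 1.6x), here is a sketch of the equivalence, with an illustrative 33mm f/2.8 example of my own:

```python
import math

CROP = 1.5  # typical APS-C crop factor (assumption; Canon uses 1.6)

def ff_equivalent(focal_mm, f_number, crop=CROP):
    """Full-frame focal length and f-number giving the same angle of
    view and the same depth of field as the APS-C combination."""
    return focal_mm * crop, f_number * crop

# A hypothetical 33mm f/2.8 on APS-C frames and renders like ~50mm f/4.2
ff_focal, ff_aperture = ff_equivalent(33, 2.8)

# The depth-of-field difference expressed in stops:
stops = 2 * math.log2(CROP)   # ~1.17: "a bit over one f/stop"
```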

But a more basic question was simply: Do APS-C sensors compromise image quality?

Well, sensor technology (and camera design) have gone through a few generations since 2004.

Microlenses and “fill factors” improved, so that even higher-pixel-count sensors improved sensitivity. (For one illustration, notice how Kodak practically doubled the quantum efficiency of their medium-format sensors between a 2007 sensor and the current one selected for the Pentax 645D.)

Today’s class-leading APS-C models can shoot with decent quality even at ISO 1600. And the typical APS-C 12 megapixel resolution is quite sufficient for any reasonable print size; exceeding what most 35mm film shooters ever achieved.

So it’s clear that APS-C has indeed reached the point of sufficiency for the great majority of amateur photographers.

Still, Canon and Nikon did eventually introduce DSLRs with “full frame” 24 x 36 mm sensors. They were followed by Sony, then Leica. In part, this reflects the needs of professional shooters, whose investments in full-frame lenses can be enormous.

And for a few rare users, files of over 20 megapixels are sometimes needed. In those cases, a full-frame sensor maintains the large pixel size essential for good performance in noise and dynamic range.

But these are not inexpensive cameras.

So the question has never completely gone away: Is APS-C “the answer” for consumer DSLRs? Or will full-frame sensors eventually trickle down to the affordable end of the market?

Canon Full-Frame Sensor

I have mentioned before an interesting Canon PDF Whitepaper which discusses the economics of manufacturing full-frame sensors (start reading at pg. 11).

It’s not simply the 2.5x larger area of the sensor to consider; it’s that unavoidable chip flaws drastically reduce yields as areas get larger. Canon’s discussion concludes,

[...] a full-frame sensor costs not three or four times, but ten, twenty or more times as much as an APS-C sensor.

However, since that 2006 document was written, I have been curious whether the situation has changed.

More recently I found a blog post from 2008 which sheds more light on the subject. (Chipworks is an interesting company, whose business is tearing down and minutely analyzing semiconductor products.)

Note that one cost challenge for full-frame sensors is the rarity of chip fabs that can produce masks of the needed size in one piece. “Stitching” together smaller masks adds to the complexity and cost of producing full-frame sensors—Chipworks was doubtful that the yield of usable full-frame parts was even 50%.

Thus, Chipworks’ best estimate was that a single full-frame sensor costs a camera manufacturer $300 to $400! (This compares to $70-80 for an APS-C sensor.)
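Those figures are consistent with the textbook exponential (Poisson) yield model, in which the cost of a good die grows much faster than its area. The defect densities below are my own illustrative assumptions, not Chipworks numbers:

```python
import math

FF_CM2, APSC_CM2 = 8.64, 3.47   # 24x36 mm vs ~23x15 mm sensor areas in cm^2

def cost_per_good_die(area_cm2, defects_per_cm2):
    """Relative cost under the simple Poisson yield model:
    yield = exp(-D * A), and cost scales as area / yield."""
    return area_cm2 / math.exp(-defects_per_cm2 * area_cm2)

def full_frame_penalty(defects_per_cm2):
    return (cost_per_good_die(FF_CM2, defects_per_cm2)
            / cost_per_good_die(APSC_CM2, defects_per_cm2))

modest = full_frame_penalty(0.2)   # ~7x at a low defect density
severe = full_frame_penalty(0.5)   # ~33x once defects are more common
```

The point of the sketch: the full-frame surcharge is not the 2.5x area ratio, but an exponentially worsening multiple of it as defect density rises.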

And that’s the wholesale cost. What full-frame adds to the price the consumer pays must be higher still.

Thus, it seems unlikely the price of a full-frame DSLR will ever drop under $1000 (that magic number again)—at least, not anytime soon.

And actually, APS-C is pretty darned decent.

So let’s embrace APS-C. That means it’s time to reject overweight, overpriced lenses that were designed for the older, larger format. We need to demand sensible, native lens options.

It would have been nice if APS-C had somehow acquired a snappier, less geeky name—maybe that’s still possible?

But it seems time to declare: It’s the standard.

Follow the Sensors…

March 15, 2010

After the release of the Pentax 645D, Kodak’s “PluggedIn” blog confirmed what many had suspected: Their KAF-40000 sensor was the one Pentax used to create the new camera.

Inside Kodak's Sensor Fab

Kodak has something of a tradition of announcing when their sensors are being used in high-profile cameras, such as Leica’s M8, M9, and S2—even giving the Kodak catalog numbers.

But their openness on this subject is a bit unusual.

As you may know, creating a modern semiconductor chip fab is staggeringly expensive—up to several billion dollars. So it’s understandable that behind the scenes, sensor chips are mainly manufactured by a few electronics giants.

And selling those chips has become a cut-throat, commodity business; so camera makers sometimes obtain sensors from surprising sources.

But it’s hard to trumpet your own brand superiority, while admitting your camera’s vital guts were built by some competitor. So many of these relationships are not public knowledge.

But if we pay careful attention… We might be able to make some interesting deductions!

Panasonic Image Sensors

Some of the big names in CMOS image sensors (“CIS” in industry jargon) are familiar brands like Sony, Samsung, and Canon. But cell-phone modules and other utilitarian applications lead the overall sales numbers; and in this market, the leader is a company called Aptina.

No, I didn’t know the name either. But that’s not surprising, since they were only recently spun off from Micron Technology.

Yes, that’s the same Micron who makes computer memory. As it turns out, many of the fab techniques used to produce DRAM apply directly to CMOS sensor manufacture.

Another of the world’s powerhouses in semiconductor manufacturing is Samsung. And it’s widely known that Samsung made the imaging chips used in many Pentax DSLRs. (It would have been hard to keep their relationship a secret: Samsung’s own DSLRs were identical twins of Pentax’s.)

Samsung currently builds a 14.6 megapixel APS-C chip, called the S5K1N1F. Not only was this used in Pentax’s K-7, but also in Samsung’s GX-20. And it’s assumed that Samsung’s new mirrorless NX10 uses essentially the same sensor.

Panasonic’s semiconductor division does offer some smaller CCD sensors for sale, up to 12 megapixels. But with MOS sensors, it is only their partner Olympus who gets to share the Four-Thirds format, 12 Mp “Live MOS” chip used in the Lumix G series.

Meanwhile, it remains mystifying to me that the apparently significant refinements in the GH1 sensor don’t seem to have benefited any other Four Thirds camera yet. (Why?)

As I discussed last week, Sony’s APS-C chips apparently make their way into several Nikon models, as well as the Pentax K-x, the Ricoh GXR A12 modules, and probably the Leica X1.

But Sony has also brought out a new chip for “serious” compact cameras—intended to offer better low-light sensitivity despite its 2 micron pixels. It’s the 1/1.7 format, 10 Mp model ICX685CQZ. You can download a PDF with its specs here.

On the second page of that PDF, note the “number of recommended recording pixels” is 3648 × 2736.

Isn’t it rather remarkable that the top resolution of the Canon G11, and also their S90, matches this to the exact pixel? And how about the Ricoh GRD III, too?

Crypto Sony

And even Samsung’s newly-announced TL500—it’s the same 3648 x 2736!

None of these cameras provide 720p video—an omission that many reviewers have gnashed their teeth about. However, you’ll note in the Sony specs that 720p at 30 frames/sec is not supported by that sensor.

Sony has also made a splash with a smaller 10 Mp sensor, using backside-illumination to compensate for the reduced pixel area. This is the IMX050CQK (with a specs PDF here).

In Table 3 of that PDF, again notice the 3648 x 2736 “recommended” number of pixels. And sure enough, this matches the first Sony cameras released using the chip, the WX1 and TX1.

But if you start poking around, the Ricoh CX3 is another recent camera boasting a backside-illuminated, 10 Mp sensor, which has the same top resolution.

So are the FujiFilm HS10 & HS11. And the Nikon P100.

Now, it seems startling that even Canon and Samsung (who surely can manufacture their own chips) might go to Sony as an outside supplier.

But when you compare CMOS sensors to older CCD technology, their economics are slightly different. CMOS requires much more investment up front, for designing more complex on-chip circuitry, and creating all the layer masks. After that though, the price to produce each part is lower.

After creating a good CMOS chip, there is a strong incentive to sell the heck out of it, to ramp up volumes. Even to a competitor. So we may see more of this backroom horse-trading in sensor chips as time goes on.

In fact, Kodak’s point & shoots introduced in the summer of 2009 actually didn’t use Kodak sensor chips.

But that’s something they didn’t blog about.

Something About Sharpness

February 19, 2010

Once upon a time, the way photo-geeks evaluated lens quality was in terms of pure resolution. What was the finest line spacing which a lens could make detectable at all?

Excellent lenses are able to resolve spacings in excess of 100 lines per millimeter at the image plane.

But unfortunately, this measure didn’t correlate very well with our perception of how crisp or “snappy” a lens looked in real photos.

The problem is that our eyes themselves are a flawed optical system. We can do tests to determine the minimum spacing between details that it’s possible for our eyes to discern. But as those details become more and more finely spaced, they become less clear, less obvious—even when theoretically detectable.

The aspects of sharpness which are subjectively most apparent actually happen at a slightly larger scale than you’d expect, given the eye’s pure resolution limit.

This is the reason why most lens testing has turned to a more relevant—but unfortunately much less intuitive—way to quantify sharpness, namely MTF at specified frequencies.

MTF Chart

Thinner Curves: Finer Details

An MTF graph for a typical lens shows contrast on the vertical axis, and distance from the center of the frame on the horizontal one. The black curves represent the lens with its aperture wide open. Color means the lens has been stopped down to minimize aberrations, usually to f/8 or so. (I’ll leave it to Luminous Landscape to explain the dashed/solid line distinction.)

For the moment, all I want to point out is that there’s a thicker set of curves and a thinner set.

The thinner curves show the amount of contrast the lens retains at very fine subject line spacings. The thicker ones represent the contrast at a somewhat coarser line spacing. (That’s mnemonically helpful, at least.)

The thick curves correspond well to our subjective sense of the “snap,” or overall contrast that a lens gives. Good lenses can retain most of the original subject contrast right across the frame. Here, this lens is managing almost 80% contrast over a large fraction of the field, even wide open. Very respectable.

The thin curves correspond to a much finer scale—i.e. in your photo subject, can you read tiny lettering, or detect subtle textures?

You can see that preserving contrast at this scale becomes more challenging for an optical design. Wide open, this lens is giving only 50 or 60% of the original subject contrast. After stopping down (thin blue curves), the contrast improves significantly.

When lenses are designed for the full 35mm frame (as this one was) it’s typical to use a spacing of 30 line-pairs per millimeter to draw this “detail” MTF curve.

And having the industry choose this convention wasn’t entirely arbitrary. It’s the scale of fine resolution that seems most visually significant to our eyes.

So if that’s true… let’s consider this number, 30 lp/mm, and see where it takes us.

A full-frame sensor (or 35mm film frame) is 24mm high. So, a 30 lp/mm level of detail corresponds to 720 lines over the entire frame height.

The number “720” might jog some HDTV associations here. Remember the dispute about whether people can see a difference between 720 and 1080 TV resolutions, when they’re at a sensible viewing distance? (“Jude’s Law,” that we’re comfortable viewing from a distance twice the image diagonal, might be a plausible assumption for photographic prints as well.)

But keep in mind that 30 line pairs/mm (or cycles/mm in some references) means that you have a black stripe and a white stripe per pair. So if a digital camera sensor is going to resolve those 720 lines, it must have a minimum of 1440 pixels in height (at the Nyquist limit).

In practice, Bayer demosaicing degrades the sensor resolution a bit from the Nyquist limit. (You can see that clearly in my test-chart post, where “40” is at Nyquist.)

So we would probably need an extra 1/3 more pixels to get clean resolution: 1920 pixels high, then.

In a 3:2 format, 1920 pixels of height implies a width of 2880 pixels. Do you see where this is going?

Multiply those two numbers and you get roughly 5.5 megapixels.
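For the record, here is the whole chain of arithmetic collected in one place, using the assumptions stated above:

```python
LP_PER_MM = 30                   # the "fine detail" MTF frequency discussed above
FRAME_H_MM = 24                  # height of a full-frame / 35mm frame
ASPECT = 3 / 2                   # 3:2 format

cycles = LP_PER_MM * FRAME_H_MM        # 720 line pairs over the frame height
nyquist_px = 2 * cycles                # 1440 pixels: the bare Nyquist minimum
height_px = round(nyquist_px * 4 / 3)  # ~1/3 extra for Bayer demosaicing: 1920
width_px = round(height_px * ASPECT)   # 2880
megapixels = height_px * width_px / 1e6   # roughly 5.5
```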

Now, please understand: I am not saying there is NO useful or perceivable detail beyond this scale. I am saying that 5 or 6 Mp captures a substantial fraction of the visually relevant detail.

There are certainly subjects, and styles of photography, where finer detail than this is essential to convey the artistic intention. Anselesque landscapes are one obvious example. You might actually press your nose against an exhibition-sized print in that case.

But if you want to make a substantial resolution improvement—for example, capturing what a lens can resolve at the 60 lp/mm level—remember that you must quadruple, not double the pixel count.

And that tends to cost a bit of money.

Some Theory About Noise

February 14, 2010

The first thing to understand about picture noise (aka grain, speckles) is that it’s already present in the optical image brought into focus on the sensor.

Even when you photograph something featureless and uniform like a blank sky, the light falling onto the sensor isn’t creamy and smooth, like mayonnaise. At microscopic scales, it’s lumpy & gritty.

This is because light consists of individual photons. They sprinkle across the sensor at somewhat random timings and spacings. And eventually you get down to the scale where one tiny area might receive no photons at all, even as its neighbor receives many.

Perhaps one quarter of the photons striking the sensor release an electron—which is stored, then counted by the camera after the exposure. This creates the brightness value recorded for each pixel. (There is also a bit of circuit noise sneaking in, mostly affecting the darkest parts of the image.)

But no matter how carefully a camera is constructed, it is subject to photon noise—sometimes called “shot noise.” You might also hear some murmurs about Poisson statistics, the math describing how this noise is distributed.
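A small, stdlib-only simulation illustrates the key property of shot noise: the signal-to-noise ratio grows only as the square root of the photon count. (The photon means below are arbitrary illustrative values.)

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for drawing a Poisson-distributed photon count.
    Fine for the modest means used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        p *= rng.random()
        k += 1
    return k - 1

def pixel_snr(mean_photons, trials=4000, seed=42):
    """Estimate SNR (mean / standard deviation) of a pixel whose
    photon arrivals follow Poisson statistics."""
    rng = random.Random(seed)
    counts = [poisson_sample(mean_photons, rng) for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return mean / math.sqrt(var)

small_pixel = pixel_snr(25)    # ~5  (the square root of 25)
big_pixel = pixel_snr(400)     # ~20 (16x the photons, but only 4x the SNR)
```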

When you start from a focused image that’s tiny (as happens with point & shoot cameras), then magnify it by dozens of times, this inherent noise becomes much more noticeable:

Compact Camera Noise Sample

Small Sensors Struggle With Noise

In fact, the only way to reduce the speckles is to average them out over some larger area. However, the best method for doing this requires some consideration.

The most obvious solution is this: You decide what is the minimum spatial resolution needed for your uses (i.e., what pixel count), then simply make each pixel the largest area permissible. Bigger pixels = more photon averaging.


Let’s recall that quite a nice 8 x 10″ print can be made from a 5 Mp image. The largest inkjet printers bought by ordinary citizens print at 13 inches wide; 9 Mp suffices for this, at least at arm’s length. And any viewing on a computer screen requires far fewer pixels still.

The corollary is that when a photographer does require more pixels (and a few do), you increase the sensor size, not shrink the pixels. For a given illumination level (a particular scene brightness and f/stop) the larger sensor will simply collect more photons in total—allowing better averaging of the photon noise.

But say we take our original sensor area, then subdivide it into many smaller, but noisier, pixels. Their photon counts are bobbling all over the place now! The hope here is that later down the road, we can blend them in some useful way that reduces noise.

One brute-force method is just applying a small-radius blur to the more pixel-dense image. However this will certainly destroy detail too. It’s not clear what advantage this offers compared to starting from a crisp, lower-megapixel image (for one thing, the file size will be larger).

Today, the approach actually taken is to start with the noisier high-megapixel image, then run sophisticated image-processing routines on it. Theoretically, smart algorithms can enhance true detail, like edges; while smoothing shot noise in the areas that are deemed featureless.

This is done on every current digital camera. Yet it must be done much more aggressively when using tiny sensors, like those in compact models or phone-cams.

One argument is that by doing this, we’ve simply turned the matter into a software problem. Throw in Moore’s law, plus ever-more-clever programming, and we may get a better result than the big-pixel solution. I associate the name Nathan Myhrvold with this form of techno-optimism (e.g. here). Assuming the files were saved in raw format, you might even go back and improve photos shot in the past.

But it’s important to note the limits of this image-processing, as they apply to real cameras today.

Most inexpensive cameras do not give the option of saving raw sensor files. So before we actually see the image, the camera’s processor chip puts it through a series of steps:

Bayer demosaicing —> Denoising & Sharpening —> JPEG compression

Remember that each pixel on the sensor has a different color filter over it. The true color image must be reconstructed using some interpolation.

The problem is that photon noise affects pixels randomly—without regard to their assigned color. If it happens (and statistically, it will) that several nearby “red” pixels end up too bright (because of random fluctuations), the camera can’t distinguish this from a true red detail within the subject. So, false rainbow blobs can propagate on scales much larger than individual pixels:

100% Color Noise Sample

100% Crop of Color Noise

The next problem is that de-noising and sharpening actually tug in opposite directions. So the camera must make an educated guess about what is a true edge, sharpen that, then blur the rest.

This works pretty well when the processor finds a crisp, high-contrast outline. But low-contrast details (which are very important to our subjective sense of texture) can simply be smudged away.

The result can be a very unnatural, “watercolors” effect. Even when somewhat sharp-edged, blobs of color will be nearly featureless inside those outlines.

Remember this?

Noise Reduction Watercolors

Impressionistic Noise Reduction

Or, combined with a bit more aggressive sharpening, you might get this,

Sharp Smudge Sample

Pastel Crayons This Time?

Clearly, the camera’s guesses about what is true detail can fail in many real-world situations.

There’s an excellent technical PDF from the DxO Labs researchers, discussing (and attempting to quantify) this degradation. Their research was originally oriented towards cell-phone cameras (where these issues are even more severe); but the principles apply to any small-sensor camera dependent on algorithmic signal recovery.

Remember that image processing done within the camera must trade off sophistication against speed and battery consumption. Otherwise, camera performance becomes unacceptable. And larger files tax the write-speed and picture capacity of memory cards; they also take longer to load and edit in our computers.

So there is still an argument for taking the conservative approach.

We can subdivide sensors into more numerous, smaller pixels. But we should stop at the point which is sufficient for our purposes, in order to minimize reliance on this complex, possibly-flawed software image wrangling.

And when aberrations & diffraction limit the pixel count which is actually useful, the argument becomes even stronger.

Can Technology Save Us?

February 11, 2010

During the past decade, the world of digital cameras has obviously gone through numerous changes.

Now, the aspect I’ve written about most here is the endless (and problematic) escalation of pixel counts. But we should remember that many other facets of camera evolution have been going on in parallel.

Today we can only shake our heads at the postage-stamp LCD screens which were once the norm on digital cameras. And autofocus technology continues to improve (although cameras can still frustrate us, making us miss shots of moving subjects).

Olympus E-1 LCD

In 2003, here's the LCD that $2000 bought you

Moore’s Law has raced onwards. The result is that the proprietary image-processing chips used in each camera get increased “horsepower” with each new generation.

Besides keeping up with the growing image-file sizes, this allows more elaborate sharpening and noise-reduction methods to be applied to each photo. (Whether this noise suppression creates weird and unnatural artifacts is still a question.)

And there are other changes which have helped offset the impact of megapixel escalation. Chip design has improved, reducing the surface area lost to wiring connections. Sensors today are also usually topped with a grid of microlenses, helping funnel most of the light striking the chip onto the active photodetectors.

At the beginning of the digital-camera revolution, CMOS sensors were a bit less developed than CCDs (which had been used in scientific applications for some time). But eventually the new challenges of CMOS technology got ironed out. Today, the DSLRs which lead their market segments all use CMOS sensors.

Not every camera maker is on the same footing, technologically. Companies control different patent portfolios. Many lack their own in-house chip fabs, which can help move innovations to market faster.

So within a given class of cameras (e.g. a particular pixel size), you can still discover performance differences.

But the sum total of all this technology change has been that the better-designed cameras have been able to maintain and even improve image quality, even as pixel pitch continued to shrink.

Can technology keep saving us? Will progress continue forever?

I dispute that it’s even desirable to decrease pixel size further still. But one question is whether there is still some headroom left in sensor technology—allowing sensitivity per unit area to keep increasing. That could compensate for the shrinking area of each pixel.

Well, there are some important things to remember.

The first is that every pixel in a camera sensor is covered by a filter in one of three colors (the Bayer array). And each of these filters, by design, blocks roughly two-thirds of the visible light spectrum.

(There was a Kodak proposal from 2007 for sensors including some “clear” pixels, which would avoid this. But that creates other problems, and I’m not aware of any shipping product based on it.)

The other issue is that there’s a hard theoretical ceiling on how sensitive any photoelectric element can be, no matter its technology. How closely a chip approaches that limit is called its quantum efficiency. And out of a theoretically perfect 100%, real sensors today get surprisingly close.

Considering monochrome scientific chips (i.e., no Bayer array), the best conventional microlensed models can average roughly 60% QE in the visible spectrum.

Astrophotographers worry quite a lot about QE. One well-known French astrophotographer, Christian Buil, has actually tested and graphed the QE of various Canon DSLRs. Note, for example, how much the Canon 5D improved its green-channel sensitivity in the Mk II version.

In Buil’s bottom graphs notice how much the Bayer color filters limit Canon’s QE compared to one top-of-the-line Kodak monochrome sensor. (The KAF-3200ME has microlenses and 6.8 micron pixels.)

So, seemingly, one avenue of possible improvement would be the color filters used in the Bayer array.

But tri-color filters are a mature technology—having had numerous uses in photography in the many decades before digital. To ensure accurate color response, you must design a dye which attenuates little of the desired band, but blocks very effectively outside it. Dyes must also remain colorfast as they age or are exposed to light. Basically, it’s a chemistry problem—and a surprisingly difficult one.

Considering all this, the ability to reach 35% QE (on a pixel basis) in a color-filtered chip is a pretty decent showing already.

Now for years, scientific imagers have used a special trick: back-illuminating a CCD. This can push QE up to 90% in the photographic spectrum (roughly 400-700 nm) on an unfiltered chip.

And suddenly, camera makers have “invented” the same idea for photography applications. Sony is talking about tiny, point & shoot pixels here, which lose significant area to their opaque structures. So a “nearly twofold” efficiency boost might be feasible in that case.

But we saw that when pixels are larger, back illumination can only improve QE from about 60% to 90% (before filtering).

And it’s much more expensive to fabricate a chip right-side up, flip it over, and carefully thin away the substrate to the exact dimension required. Yields are lower; so when you try to scale it up to larger chips, costs are high. It’s not clear whether this will really be an economical option for DSLR-sized sensors.

Apogee Alta U47

Back-illuminated Astro Camera: 1.0 Mp, $11,000

But wouldn’t it be a massive breakthrough to add 50% more light-gathering ability?

Actually, less than you might think. Remember, that’s only half an f/stop. You get a bigger improvement just from the area increase in switching from a Four Thirds sensor to an APS-C model.
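To put numbers on that, here’s a quick back-of-envelope check, using the QE figures from above and approximate sensor dimensions (17.3 × 13 mm for Four Thirds, 23.6 × 15.7 mm for a typical APS-C chip):

```python
import math

# QE figures discussed above: a good front-illuminated chip at ~60%,
# versus ~90% after back-thinning (both before color filters).
front_qe = 0.60
back_qe = 0.90

# Express the gain in f-stops: stops = log2(ratio of light captured)
gain_stops = math.log2(back_qe / front_qe)
print(f"Back-illumination gain: {gain_stops:.2f} stops")  # ~0.58 stops

# Compare with simply moving to a bigger sensor (approximate dimensions):
ft_area = 17.3 * 13.0     # Four Thirds, mm^2
apsc_area = 23.6 * 15.7   # APS-C, mm^2
area_stops = math.log2(apsc_area / ft_area)
print(f"Four Thirds -> APS-C gain: {area_stops:.2f} stops")  # ~0.7 stops
```

So even a perfectly executed back-illumination process buys you less than the half-step up in format size.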

So back-illumination is an improvement worth pursuing—especially for cameras using the teeniest chips, which are the most handicapped by undersized pixels.

But beyond that, we start to hit serious limits.

Pure sensor size remains the most important factor in digital-photo image quality. And no geeky wizardry is likely to change that soon.

Sensor Size, Part II

February 7, 2010

As I noted in an earlier post, camera makers quote sensor sizes in mystifying “fractional inch” designations. They’re much less forthcoming in giving us the actual, active dimensions of the chip.

Is this because they’re embarrassed? Even a throwaway Kodak Fun Saver uses the generous dimensions of 35mm film; while today’s $300 digital compacts might use a chip with only 3% of that area.

The common 1/2.33″ or 1/2.5″ sensors used in current point & shoots measure roughly 6mm across. That’s, you know… not big:

US Penny Versus Tiny Sensor

A 6 x 4.5 mm Sensor vs. Honest Abe

Now, even when you don’t have any “official” specs about the chip used in a camera, it’s usually possible to work out the sensor dimensions indirectly.

All you need is the actual focal length(s) of the camera lens; plus the manufacturer’s stated “35mm equivalents.”

Here’s a camera marked with its true, optical focal lengths. (When the smaller number is under 10mm, you’re seeing true, not “equivalent” focal lengths.)

Compact Camera Lens Markings

Focal Lengths Marked as 5.4 to 10.8mm

The first thing we need to know is that “equivalency” is usually based on the diagonal angle of view of the lens. The next point is that (true) focal lengths scale directly in proportion to the dimensions of the image format.

A frame of 35mm film has these dimensions:

35mm Film Dimensions

A Film Frame Has a Diagonal of 43.3mm

Notice that film’s 43.3mm diagonal is a smaller number than the 70mm “equivalent” f.l. that was quoted for the long end of the zoom range. Telephoto focal lengths will always be longer than the image diagonal.

So, the digital sensor’s diagonal must also be smaller than the lens’s true focal length when zoomed in: 10.8mm.

Divide 43.3 by 70 and you get 0.62; multiply 10.8mm by that and you get 6.7mm as the diagonal of the sensor chip.

Likewise: 43.3 divided by 35 = 1.24; multiply 5.4mm by that and you also get 6.7mm for the diagonal.

But wait, that’s not so useful—didn’t we want to know the chip’s width and height?

Well, compact cameras almost always use 4:3 image proportions (the old “television” aspect ratio). And so, conveniently, the diagonal has a nice easy-to-remember relationship to the sides.

3:4:5 Ratio in Sensor Dimensions

Remember the 3-4-5 Triangle?

In other words, the chip is 60% as tall as the diagonal; and it’s 80% as wide.

So for the sensor we’re talking about, a 6.7mm diagonal means it’s about 5.3mm wide and 4.0mm tall. This is what the industry calls a 1/2.7″ chip size.
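The whole method can be condensed into a few lines of code. This is just a sketch of the arithmetic walked through above, using the 5.4–10.8mm lens (marketed as a 35–70mm equivalent) as the worked example:

```python
# Recover a compact camera's sensor dimensions from its true and
# "35mm equivalent" focal lengths.
FILM_DIAGONAL = 43.3  # mm: the diagonal of a 24 x 36 mm film frame

def sensor_size(true_fl_mm, equiv_fl_mm, aspect=(4, 3)):
    """Return (width, height, diagonal) of the sensor, in mm."""
    # True focal lengths scale in proportion to format size, so the
    # ratio of true to "equivalent" focal length gives the diagonal.
    diagonal = true_fl_mm * FILM_DIAGONAL / equiv_fl_mm
    w, h = aspect
    hyp = (w**2 + h**2) ** 0.5   # equals 5 for a 4:3 chip (3-4-5 triangle)
    return diagonal * w / hyp, diagonal * h / hyp, diagonal

# The 5.4-10.8 mm lens marked as "35-70mm equivalent":
width, height, diag = sensor_size(10.8, 70)
print(f"{width:.1f} x {height:.1f} mm, diagonal {diag:.1f} mm")
# -> 5.3 x 4.0 mm, diagonal 6.7 mm
```

Feeding in the wide end instead (`sensor_size(5.4, 35)`) gives the same answer, which is a nice sanity check on the manufacturer’s numbers.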

And that’s a lot smaller than Lincoln’s head.

Megapixel Recap

January 20, 2010

I woke this morning to discover that my post about megapixel madness was up to 344 comments on the Reddit “technology” page, as well as generating a lot of talk at The Consumerist. It seems to have struck a nerve.

There’s nothing like having 20,000 people suddenly read your words to make you panic, “could I have explained that a little better?” Let me say a couple of things more clearly:

  • Yes, some people DO need more megapixels

Anyone who makes unusually large prints, or who routinely crops out small areas of the frame, does benefit from higher megapixel counts. However, those pixels are only useful if they can add sharp, noise-free information.

The typical point & shoot CCD would fit on top of a pencil eraser. There are fundamental limits on how much detail you can wring out of a chip that small. So, the giant-print-making, ruthlessly-cropping photographer really needs to shop for an “enthusiast” camera model—one with a larger sensor chip.

  • Diffraction sets theoretical limits on image detail

Many more people viewed the “Swindle” post than read my explanation of diffraction. The key point is that even if you have a lens that is well-designed and flawless, light waves will not focus to a perfect point. The small, blurred “Airy disks” set a theoretical limit on how much actual detail a lens can resolve.

Up to a point, “oversampling” a blurry image with denser pixel spacing can be useful. But today’s point & shoots have clearly crossed the line where the pixels are MUCH smaller than the Airy disk, and squeezing in more pixels accomplishes nothing.

Plus, making pixels tinier actually worsens image quality in other respects. Marketing compact cameras by boasting higher megapixel counts is simply dishonest.

  • Higher-quality lenses can’t fix this

Better lenses are preferable to bad ones; but diffraction puts a ceiling on what even the best lens can do (yes, even one with a German brand-name on it).

To get greater true image detail, the entire camera must scale up in size. This makes the Airy disks a smaller fraction of the sensor dimensions.

  • Tiny pixels are low-quality pixels

A pixel that intercepts less light is handicapped from the start. Its weaker signal is closer to the noise floor of the read-out electronics. There are more random brightness variations between adjacent pixels. And each pixel reaches saturation more quickly—blowing out the highlights to a featureless white.

I’m aware of the theory that higher-resolving but noisier pixels are okay, because in any real-world output, several pixels get blended together. But I’ve seen enough photos with weird “grit” in what ought to be clean blue skies to be suspicious of this.

First, random pixel fluctuations interact in strange ways with the color demosaicing algorithm. Distracting color speckles and rainbowing appear at scales much larger than the individual pixels.

Second, the camera’s noise-reduction algorithm can add its own unnatural artifacts—obscuring true detail with weird daubs of waxy color. (This was the problem highlighted in my example photo.) It’s better to have less noise from the start.
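A toy model shows why the weaker signal matters so much. Photon arrival obeys Poisson statistics (shot noise grows as the square root of the signal), and the read-out electronics add a fixed noise floor on top. All the numbers here are illustrative assumptions, not measurements of any real sensor:

```python
import math

def snr(pixel_pitch_um, photons_per_um2=100, read_noise_e=5):
    """Signal-to-noise ratio for one pixel, under a simple noise model."""
    signal = photons_per_um2 * pixel_pitch_um**2   # electrons collected
    noise = math.sqrt(signal + read_noise_e**2)    # shot noise + read noise
    return signal / noise

# Compare a DSLR-sized pixel with two compact-camera pitches:
for pitch in (6.0, 2.0, 1.4):
    print(f"{pitch} um pixel: SNR about {snr(pitch):.0f}")
```

At the same exposure, the big pixel’s SNR comes out several times higher than the small ones’—which is exactly the “grit” difference you see in those blue skies.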

  • Many compacts perform much better than this one

That’s true. But isn’t reading an exaggerated polemic much more fun?

Let me be clear that my complaint is about TINY CHIP point & shoots. The new micro Four Thirds cameras (which I am following closely) were created specifically to address the shortcomings of small-sensor cameras, while remaining pocketable. But they cost a lot, at least so far.

Mainly, my complaint is about honesty. Camera makers are slapping big “14 megapixel” stickers onto cameras with tiny chips.

I just want people to understand that—as The Consumerist headlined it—these are “Marketing Lies.”

