If you put any faith in DxO Mark scores (and I do understand their limitations), today’s best $500 DSLRs make better images than all of the sub-$4000 cameras sold until 2008. Largely that’s thanks to some good recent APS-C sensors from Sony.

So I’m not sympathetic towards certain Pentax users, who bleat that they can’t make adequate photographs—not until the company delivers a DSLR with a 24×36 mm sensor. (With pre-Photokina Pentax rumors swirling, the issue is once again a hot topic for discussion.)

Full-Frame Pentax?

Yes, many long-time Pentaxians do have investments in full-frame “FA” lenses, ones that could cover the format.

But keep in mind, over the history of photography there have been many camera mounts which died out entirely—obsoleting numerous fine lenses. How about Contax/Yashica mount? Minolta MD, or Canon FD? (Or even earlier systems, such as Exakta or the superb Zeiss Contarex?)

By contrast, any Pentax lens made since 1983 can at least be used on a modern DSLR—metering automatically, and with the benefit of built-in shake reduction. So the handwringing that some great lenses have been “orphaned” by Pentax can get a bit exaggerated.

Some point out hopefully that not long ago, Pentax introduced new telephoto lenses bearing the FA designation. Ergo, full-frame is coming!

But truthfully, no decent lens design even breaks a sweat covering an image circle whose diameter is much smaller than its focal length. With a telephoto, full-frame coverage is basically “free”—so why not go ahead and use the FA labeling? It allowed Pentax to finesse the format question for a while longer; and maybe even sold a couple of extra lenses to Pentax film-body users.

I do agree with the complaint that the viewfinders of most consumer DSLRs are puny and unsatisfying. The view through the eyepiece of a full-frame body is noticeably more generous.

However, the electronic viewfinder of an Olympus E-P2 neatly solves this problem too—at a price savings of about $1400 over today’s cheapest full-frame DSLR. And the EVIL wave is only gathering momentum.

The perpetual cry from full-frame enthusiasts is that Moore’s Law will eventually slash 24x36mm sensor pricing. To me, this seems like a wishful misunderstanding of the facts.

The shrinking of circuit sizes which permits faster processors and denser memory chips is irrelevant to sensor design—the overall chip dimensions are fixed, and the circuitry features are already as small as they need to be.

Also, figuring the costs of CMOS chip production is not entirely straightforward. It costs money to research and develop significant new improvements over prior sensor generations; it’s expensive to create the photolithography masks for all the circuit layers. Then, remember that all these overhead costs must be recouped over only two, perhaps three years of sales. After that, your sensor will be eclipsed by later and even whizzier designs.

Thus, there is more to a sensor price than just the production-line cost; it also depends on the chip quantities sold. And full-frame DSLRs have never been huge sellers.

If APS-C sensors prove entirely satisfactory for 95% of typical photographs (counting both enthusiasts and professionals), a vicious circle results. With no mass-market camera using a full-frame sensor, volumes stay low, prices high. But with sensor prices high, it’s hard to create an appealing camera at a mass-market price point.

Furthermore, let’s consider the few users who would actually benefit from greater sensor area.

For the few who find 12 megapixels inadequate, nudging the number up to 24 Mp is not the dramatic difference you might imagine. To double your linear resolution, you must quadruple the number of pixels. Resolution-hounds would do better to look to medium format cameras, with sensors of 40 Mp and up—which, conveniently, seem to be dropping towards the $10,000 mark with the release of Pentax’s 645D.
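To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python:

    import math

    # Linear resolution grows only with the square root of the pixel count,
    # so a jump from 12 to 24 Mp buys surprisingly little extra detail.
    for megapixels in (12, 24, 48):
        gain = math.sqrt(megapixels / 12)
        print(f"{megapixels} Mp: {gain:.2f}x the linear resolution of 12 Mp")

Only at 48 Mp would you actually double the detail a 12 Mp sensor records.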

The last full-frame holdouts are those who need extreme high-ISO potential. There’s no doubt that the 12Mp full-frame Nikon D3s makes an astounding showing here, with high-ISO performance that’s a solid 2 f/stops better than any APS-C camera. This is a legitimate advantage of class-leading 24×36 mm sensors.

Yet aside from bragging rights, we have to ask how many great photographs truly require ISO 6,400 and above. ISO 1600 already lets us shoot f/1.7 at 1/30th of a second under some pretty gloomy lighting—like the illumination a computer screen casts in a darkened room.
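As a rough sanity check on that claim, we can turn the exposure into a light value at ISO 100 using the standard EV formula:

    import math

    # What scene brightness does ISO 1600, f/1.7, 1/30 s correspond to,
    # expressed as EV at ISO 100? (Back-of-the-envelope only.)
    f_number, shutter_s, iso = 1.7, 1/30, 1600
    ev100 = math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)
    print(f"EV100 = {ev100:.1f}")   # roughly 2.4

Typical exposure tables put ordinary home interiors around EV 5-7, so EV 2-3 really is gloomy territory.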

A day may come when sensor technology has fully matured, and every possible sensitivity tweak has been found. At that point, a particular sensor generation might hang around for a decade or more. So might those long production runs permit much lower unit costs for a full-frame sensor? There will still be a full-frame surcharge simply from the greater surface area and lower chip yields.

But who knows, perhaps it’s possible? By then we may have 40 Mp, 24×36 mm chips that are our new “medium format.”

BSI: No Panacea

May 21, 2010

In a few earlier posts I have mentioned the new generation of Sony sensors boasting “back-side illumination,” and marketed as Exmor-R (as distinct from Sony’s conventional sensors, just branded Exmor).

Back-side illumination (BSI in the industry jargon) is a tricky and costly chip-fabrication technique, where after depositing all the wiring traces on a silicon wafer, the substrate is flipped over and almost entirely thinned away. This leaves the wiring on the underside of the light-sensitive photodiodes (as Sony describes here), so these unobstructed pixels will theoretically collect more light.

BSI is promoted as one of the technological breakthroughs which might help save image quality, even as manufacturers race to cram more megapixels into tiny sensor areas. In fact, Sony’s IMX050CQK sensor scaled back the pixel count to 10 Mp, compared to the 12 and 14 Mp that have become increasingly common in the point & shoot market.

Sony BSI Sensor

A Whizzy Small Sensor is Still A Small Sensor

Sony introduced the chip first in its own models, in the fall of 2009, for example in the WX1. But clearly Sony found it advantageous to spread the sensor development costs over a larger production run, and apparently they’ve aggressively marketed the chip to other camera makers as well. Pretty much any 10 Mp camera sold this year advertising a backside-illuminated sensor uses it. It seems particularly popular in today’s nutty “Ultra Zoom” market segment.

So I was interested to read the review just posted by Jeff Keller of Nikon’s P100 ultrazoom camera, which uses this chip. See his conclusions here.

As reviews of these new BSI-based cameras filter out, the word seems to be that they do offer decent image quality—but hardly anything revolutionary. If their high-ISO images look smooth, it seems to be partly thanks to noise reduction processing, which can destroy detail and add unnatural, crayon-like artifacts.

DxO Labs have released their sensor test results for Olympus’s “econo” Pen model, the E-PL1.

Their tests show it has slightly worse high-ISO performance than its competitors in the “compact EVIL” segment.

DxO Mark for Compact 4/3 Bodies

The entire selling point of Micro Four Thirds is that the larger sensor offers improved picture quality, relative to typical compact cameras. DxO Mark’s “Low-Light ISO” score is expressed in ISO sensitivity numbers; and here they report that noise becomes objectionable at around ISO 500.

But the overall “DxO Mark Sensor” score falls much closer to that of a modern compact camera than to that of a good recent DSLR.

This is a disappointment, given the extra time Olympus had for developing the E-PL1; and also compared to the dramatically-better performance of its Micro Four Thirds cousin, the Panasonic GH1.

As always, note that DxO Labs tests are entirely “numbers oriented” and only analyze the raw sensor data. Handling, price, the quality of in-camera JPEG processing, etc. are not considered.

Making Peace With APS-C

March 19, 2010

It’s shocking to realize it was only six years ago that the first digital SLRs costing under $1000 arrived.

The original Canon Digital Rebel (300D) and Nikon D70 were the first to smash that psychologically-important price barrier. These 6-megapixel pioneers effectively launched the amateur DSLR market we know today.

Sub-$1000 DSLRs

New for 2004: DSLRs for Regular People

But one argument that raged at the time—and which has never completely gone away—was about the “crop format” APS-C sensor size. (The name derives from a doomed film system; but now just means a sensor of about 23 x 15 mm.)

Was APS-C just a temporary, transitional phase? Would DSLRs eventually “grow up,” and return to the 24 x 36 mm dimensions of 135 format?

This was a significant question, at a time when virtually all available lenses were holdovers from manufacturers’ 35mm film-camera systems.

The smaller APS-C sensor meant that every lens got a (frequently-unwanted) increase in its effective focal length. Furthermore, an APS-C sensor only uses about 65% of the film-lens image circle. Why pay extra for larger, heavier, costlier—and possibly dimmer—lenses than you need?

Also, after switching lens focal lengths to get an equivalent angle of view, the APS-C camera will give deeper depth of field at a given distance and aperture. (The difference is a bit over one f/stop’s worth.)
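Both of those figures fall straight out of the sensor geometry. A quick sketch (the exact APS-C dimensions vary a little by manufacturer, so treat them as approximate):

    import math

    ff_w, ff_h = 36.0, 24.0       # "full frame" dimensions, mm
    aps_w, aps_h = 23.6, 15.7     # a typical APS-C sensor, mm

    ff_diag = math.hypot(ff_w, ff_h)      # ~43.3 mm
    aps_diag = math.hypot(aps_w, aps_h)   # ~28.3 mm

    crop_factor = ff_diag / aps_diag          # ~1.5x focal-length "multiplier"
    circle_used = aps_diag / ff_diag          # ~65% of the film image circle, by diameter
    dof_stops = 2 * math.log2(crop_factor)    # ~1.2 stops deeper depth of field

    print(f"crop factor: {crop_factor:.2f}")
    print(f"image circle used: {circle_used:.0%} (by diameter)")
    print(f"depth-of-field difference: about {dof_stops:.1f} stops")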

But a more basic question was simply: Do APS-C sensors compromise image quality?

Well, sensor technology (and camera design) have gone through a few generations since 2004.

Microlenses and “fill factors” improved, so that sensitivity got better even as pixel counts climbed. (For one illustration, notice how Kodak practically doubled the quantum efficiency of their medium-format sensors between a 2007 sensor and the current one selected for the Pentax 645D.)

Today’s class-leading APS-C models can shoot with decent quality even at ISO 1600. And the typical APS-C 12 megapixel resolution is quite sufficient for any reasonable print size; exceeding what most 35mm film shooters ever achieved.

So it’s clear that APS-C has indeed reached the point of sufficiency for the great majority of amateur photographers.

Still, Canon and Nikon did eventually introduce DSLRs with “full frame” 24 x 36 mm sensors. They were followed by Sony, then Leica. In part, this reflects the needs of professional shooters, whose investments in full-frame lenses can be enormous.

And for a few rare users, files of over 20 megapixels are sometimes needed. In those cases, a full-frame sensor maintains the large pixel size essential for good performance in noise and dynamic range.

But these are not inexpensive cameras.

So the question has never completely gone away: Is APS-C “the answer” for consumer DSLRs? Or will full-frame sensors eventually trickle down to the affordable end of the market?

Canon Full-Frame Sensor

I have mentioned before an interesting Canon PDF Whitepaper which discusses the economics of manufacturing full-frame sensors (start reading at pg. 11).

It’s not simply the 2.5x larger area of the sensor to consider; it’s that unavoidable chip flaws drastically reduce yields as areas get larger. Canon’s discussion concludes,

[...] a full-frame sensor costs not three or four times, but ten, twenty or more times as much as an APS-C sensor.
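To get a feel for why the cost scales so steeply, here is a back-of-the-envelope sketch using a simple exponential defect-yield model. The defect density below is an assumption chosen purely for illustration, not a figure from Canon or any foundry:

    import math

    defects_per_mm2 = 0.004        # assumed random-defect density (illustrative only)
    aps_c_mm2 = 23.6 * 15.7        # ~370 mm^2
    full_frame_mm2 = 36.0 * 24.0   # 864 mm^2

    def relative_cost(area_mm2):
        # Poisson yield model: the fraction of flawless dies falls off
        # exponentially with area, so the cost per *good* die rises much
        # faster than the silicon area alone would suggest.
        yield_fraction = math.exp(-defects_per_mm2 * area_mm2)
        return area_mm2 / yield_fraction

    ratio = relative_cost(full_frame_mm2) / relative_cost(aps_c_mm2)
    print(f"cost per good full-frame die: roughly {ratio:.0f}x an APS-C die")

With that assumed defect density the ratio comes out around 17x, at least in the neighborhood Canon describes; and that is before the extra lithography steps and low production volumes discussed below.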

However, since that 2006 document was written, I have been curious whether the situation has changed.

More recently I found a 2008 blog post from Chipworks which sheds more light on the subject. (Chipworks is an interesting company, whose business is tearing down and minutely analyzing semiconductor products.)

Note that one cost challenge for full-frame sensors is the rarity of chip fabs that can produce masks of the needed size in one piece. “Stitching” together smaller masks adds to the complexity and cost of producing full-frame sensors—Chipworks was doubtful that the yield of usable full-frame parts was even 50%.

Thus, Chipworks’ best estimate was that a single full-frame sensor costs a camera manufacturer $300 to $400! (This compares to $70-80 for an APS-C sensor.)

And that’s the wholesale cost. What full-frame adds to the price the consumer pays must be higher still.

Thus, it seems unlikely the price of a full-frame DSLR will ever drop under $1000 (that magic number again)—at least, not anytime soon.

And actually, APS-C is pretty darned decent.

So let’s embrace APS-C. That means it’s time to reject overweight, overpriced lenses that were designed for the older, larger format. We need to demand sensible, native lens options.

It would have been nice if APS-C had somehow acquired a snappier, less geeky name—maybe that’s still possible?

But it seems time to declare: It’s the standard.

Follow the Sensors…

March 15, 2010

After the release of the Pentax 645D, Kodak’s “PluggedIn” blog confirmed what many had suspected: Their KAF-40000 sensor was the one Pentax used to create the new camera.

Inside Kodak's Sensor Fab

Kodak has something of a tradition of announcing when their sensors are being used in high-profile cameras, such as Leica’s M8, M9, and S2—even giving the Kodak catalog numbers.

But their openness on this subject is a bit unusual.

As you may know, creating a modern semiconductor chip fab is staggeringly expensive—up to several billion dollars. So it’s understandable that behind the scenes, sensor chips are mainly manufactured by a few electronics giants.

And selling those chips has become a cut-throat, commodity business; so camera makers sometimes obtain sensors from surprising sources.

But it’s hard to trumpet your own brand superiority, while admitting your camera’s vital guts were built by some competitor. So many of these relationships are not public knowledge.

But if we pay careful attention… We might be able to make some interesting deductions!

Panasonic Image Sensors

Some of the big names in CMOS image sensors (“CIS” in industry jargon) are familiar brands like Sony, Samsung, and Canon. But cell-phone modules and other utilitarian applications lead the overall sales numbers; and in this market, the leader is a company called Aptina.

No, I didn’t know the name either. But that’s not surprising, since they were only recently spun off from Micron Technology.

Yes, that’s the same Micron who makes computer memory. As it turns out, many of the fab techniques used to produce DRAM apply directly to CMOS sensor manufacture.

Another of the world’s powerhouses in semiconductor manufacturing is Samsung. And it’s widely known that Samsung made the imaging chips used in many Pentax DSLRs. (It would have been hard to keep their relationship a secret: Samsung’s own DSLRs were identical twins of Pentax’s.)

Samsung currently builds a 14.6 megapixel APS-C chip, called the S5K1N1F. Not only was this used in Pentax’s K-7, but also in Samsung’s GX-20. And it’s assumed that Samsung’s new mirrorless NX10 uses essentially the same sensor.

Panasonic’s semiconductor division does offer some smaller CCD sensors for sale, up to 12 megapixels. But with MOS sensors, it is only their partner Olympus who gets to share the Four-Thirds format, 12 Mp “Live MOS” chip used in the Lumix G series.

Meanwhile, it remains mystifying to me that the apparently significant refinements in the GH1 sensor don’t seem to have benefited any other Four Thirds camera yet. (Why?)

As I discussed last week, Sony’s APS-C chips apparently make their way into several Nikon models, as well as the Pentax K-x, the Ricoh GXR A12 modules, and probably the Leica X1.

But Sony has also brought out a new chip for “serious” compact cameras—intended to offer better low-light sensitivity despite its 2 micron pixels. It’s the 1/1.7 format, 10 Mp model ICX685CQZ. You can download a PDF with its specs here.

On the second page of that PDF, note the “number of recommended recording pixels” is 3648 × 2736.

Isn’t it rather remarkable that the top resolution of the Canon G11, and also their S90, matches this to the exact pixel? And how about the Ricoh GRD III, too?

Crypto-Sony?

And even Samsung’s newly-announced TL500—it’s the same 3648 x 2736!

None of these cameras provide 720p video—an omission that many reviewers have gnashed their teeth about. However, you’ll note in the Sony specs that 720p at 30 frames/sec is not supported by that sensor.

Sony has also made a splash with a smaller 10 Mp sensor, using backside-illumination to compensate for the reduced pixel area. This is the IMX050CQK (with a specs PDF here).

In Table 3 of that PDF, again notice the 3648 x 2736 “recommended” number of pixels. And sure enough, this matches the first Sony cameras released using the chip, the WX1 and TX1.

But if you start poking around, the Ricoh CX3 is another recent camera boasting a backside-illuminated, 10 Mp sensor, which has the same top resolution.

So is the FujiFilm HS10 & HS11. And the Nikon P100.

Now, it seems startling that even Canon and Samsung (who surely can manufacture their own chips) might go to Sony as an outside supplier.

But when you compare CMOS sensors to older CCD technology, their economics are slightly different. CMOS requires much more investment up front, for designing more complex on-chip circuitry, and creating all the layer masks. After that though, the price to produce each part is lower.

After creating a good CMOS chip, there is a strong incentive to sell the heck out of it, to ramp up volumes. Even to a competitor. So we may see more of this backroom horse-trading in sensor chips as time goes on.

In fact, Kodak’s point & shoots introduced in the summer of 2009 actually didn’t use Kodak sensor chips.

But that’s something they didn’t blog about.

Something About Sharpness

February 19, 2010

Once upon a time, the way photo-geeks evaluated lens quality was in terms of pure resolution. What was the finest line spacing which a lens could make detectable at all?

Excellent lenses are able to resolve spacings in excess of 100 lines per millimeter at the image plane.

But unfortunately, this measure didn’t correlate very well with how crisp or “snappy” a lens looked in real photos.

The problem is that our eyes themselves are a flawed optical system. We can do tests to determine the minimum spacing between details that it’s possible for our eyes to discern. But as those details become more and more finely spaced, they become less clear, less obvious—even when theoretically detectable.

The aspects of sharpness which are subjectively most apparent actually happen at a slightly larger scale than you’d expect, given the eye’s pure resolution limit.

This is the reason why most lens testing has turned to a more relevant—but unfortunately much less intuitive—way to quantify sharpness, namely MTF at specified frequencies.

MTF Chart

Thinner Curves: Finer Details

An MTF graph for a typical lens shows contrast on the vertical axis, and distance from the center of the frame on the horizontal one. The black curves represent the lens with its aperture wide open. Color means the lens has been stopped down to minimize aberrations, usually to f/8 or so. (I’ll leave it to Luminous Landscape to explain the dashed/solid line distinction.)

For the moment, all I want to point out is that there’s a thicker set of curves and a thinner set.

The thinner curves show the amount of contrast the lens retains at very fine subject line spacings. The thicker ones represent the contrast at a somewhat coarser line spacing. (That’s mnemonically helpful, at least.)

The thick curves correspond well to our subjective sense of the “snap,” or overall contrast that a lens gives. Good lenses can retain most of the original subject contrast right across the frame. Here, this lens is managing almost 80% contrast over a large fraction of the field, even wide open. Very respectable.

The thin curves correspond to a much finer scale—i.e. in your photo subject, can you read tiny lettering, or detect subtle textures?

You can see that preserving contrast at this scale becomes more challenging for an optical design. Wide open, this lens is giving only 50 or 60% of the original subject contrast. After stopping down (thin blue curves), the contrast improves significantly.

When lenses are designed for the full 35mm frame (as this one was) it’s typical to use a spacing of 30 line-pairs per millimeter to draw this “detail” MTF curve.

And the industry’s choice of this convention wasn’t entirely arbitrary. It’s the scale of fine resolution that seems most visually significant to our eyes.

So if that’s true… let’s consider this number, 30 lp/mm, and see where it takes us.

A full-frame sensor (or 35mm film frame) is 24mm high. So, a 30 lp/mm level of detail corresponds to 720 line pairs over the entire frame height.

The number “720″ might jog some HDTV associations here. Remember the dispute about whether people can see a difference between 720 and 1080 TV resolutions, when they’re at a sensible viewing distance? (“Jude’s Law,” that we’re comfortable viewing from a distance twice the image diagonal, might be a plausible assumption for photographic prints as well.)

But keep in mind that 30 line pairs/mm (or cycles/mm in some references) means a black stripe and a white stripe per pair. So if a digital camera sensor is going to resolve those 720 line pairs, it must have a minimum of 1440 pixels in height (at the Nyquist limit).

In practice, Bayer demosaicing degrades the sensor resolution a bit from the Nyquist limit. (You can see that clearly in my test-chart post, where “40″ is at Nyquist.)

So we would probably need an extra 1/3 more pixels to get clean resolution: 1920 pixels high, then.

In a 3:2 format, 1920 pixels high would make the sensor 2880 pixels wide. Do you see where this is going?

Multiply those two numbers and you get roughly 5.5 megapixels.
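Here is the same arithmetic, spelled out step by step:

    frame_height_mm = 24
    lp_per_mm = 30

    line_pairs = lp_per_mm * frame_height_mm   # 720 line pairs over the frame height
    nyquist_px = 2 * line_pairs                # 1440 pixels, at the Nyquist limit
    height_px = round(nyquist_px * 4 / 3)      # ~1920 px, allowing ~1/3 extra for demosaicing
    width_px = round(height_px * 3 / 2)        # 2880 px across, for a 3:2 frame

    print(f"{width_px} x {height_px} = {width_px * height_px / 1e6:.1f} Mp")   # ~5.5 Mp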

Now, please understand: I am not saying there is NO useful or perceivable detail beyond this scale. I am saying that 5 or 6 Mp captures a substantial fraction of the visually relevant detail.

There are certainly subjects, and styles of photography, where finer detail than this is essential to convey the artistic intention. Anselesque landscapes are one obvious example. You might actually press your nose against an exhibition-sized print in that case.

But if you want to make a substantial resolution improvement—for example, capturing what a lens can resolve at the 60 lp/mm level—remember that you must quadruple, not double the pixel count.

And that tends to cost a bit of money.

Some Theory About Noise

February 14, 2010

The first thing to understand about picture noise (aka grain, speckles) is that it’s already present in the optical image brought into focus on the sensor.

Even when you photograph something featureless and uniform like a blank sky, the light falling onto the sensor isn’t creamy and smooth, like mayonnaise. At microscopic scales, it’s lumpy & gritty.

This is because light consists of individual photons. They sprinkle across the sensor at somewhat random timings and spacings. And eventually you get down to the scale where one tiny area might receive no photons at all, even as its neighbor receives many.

Perhaps one quarter of the photons striking the sensor release an electron—which is stored, then counted by the camera after the exposure. This creates the brightness value recorded for each pixel. (There is also a bit of circuit noise sneaking in, mostly affecting the darkest parts of the image.)

But no matter how carefully a camera is constructed, it is subject to photon noise—sometimes called “shot noise.” You might also hear some murmurs about Poisson statistics, the math describing how this noise is distributed.
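A tiny simulation shows those Poisson statistics in action. The photon counts here are invented purely for illustration:

    import numpy as np

    rng = np.random.default_rng(42)

    # Two hypothetical pixels under the same illumination: one small, and one
    # with four times the collecting area. Photon arrivals follow Poisson
    # statistics, so the signal-to-noise ratio grows as the square root of
    # the photon count.
    for label, mean_photons in [("small pixel", 400), ("pixel with 4x the area", 1600)]:
        counts = rng.poisson(lam=mean_photons, size=1_000_000)
        snr = counts.mean() / counts.std()
        print(f"{label}: SNR = {snr:.1f} (sqrt(N) predicts {np.sqrt(mean_photons):.1f})")

Quadrupling the light-collecting area doubles the signal-to-noise ratio; that is the whole “bigger pixels” argument in one line.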

When you start from a focused image that’s tiny (as happens with point & shoot cameras), then magnify it by dozens of times, this inherent noise becomes much more noticeable:

Compact Camera Noise Sample

Small Sensors Struggle With Noise

In fact, the only way to reduce the speckles is to average them out over some larger area. However, the best method for doing this requires some consideration.

The most obvious solution is this: You decide what is the minimum spatial resolution needed for your uses (i.e., what pixel count), then simply make each pixel the largest area permissible. Bigger pixels = more photon averaging.

Easy.

Let’s recall that quite a nice 8 x 10″ print can be made from a 5 Mp image. The largest inkjet printers bought by ordinary citizens print at 13 inches wide; 9 Mp suffices for this, at least at arm’s length. And any viewing on a computer screen requires far fewer pixels still.
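A quick check of the pixel densities involved (assuming an uncropped 3:2 image, and taking roughly 200-300 ppi as the usual “good print” range):

    import math

    # 8 x 10" paper (10-inch long edge) and 13 x 19" paper (19-inch long edge)
    for megapixels, long_edge_in, label in [(5, 10, '8 x 10" print'),
                                            (9, 19, '13 x 19" print')]:
        long_edge_px = math.sqrt(megapixels * 1e6 * 3 / 2)   # long edge of a 3:2 image
        ppi = long_edge_px / long_edge_in
        print(f"{label} from {megapixels} Mp: about {ppi:.0f} ppi")

That works out to roughly 270 ppi for the 8 x 10, and just under 200 ppi for the 13-inch print, hence the “at arm’s length” caveat.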

The corollary is that when a photographer does require more pixels (and a few do), you increase the sensor size, not shrink the pixels. For a given illumination level (a particular scene brightness and f/stop) the larger sensor will simply collect more photons in total—allowing better averaging of the photon noise.

But say we take our original sensor area, then subdivide it into many smaller, but noisier, pixels. Their photon counts are bobbling all over the place now! The hope here is that later down the road, we can blend them in some useful way that reduces noise.

One brute-force method is just applying a small-radius blur to the more pixel-dense image. However this will certainly destroy detail too. It’s not clear what advantage this offers compared to starting from a crisp, lower-megapixel image (for one thing, the file size will be larger).

Today, the approach actually taken is to start with the noisier high-megapixel image, then run sophisticated image-processing routines on it. Theoretically, smart algorithms can enhance true detail, like edges; while smoothing shot noise in the areas that are deemed featureless.

This is done on every current digital camera. Yet it must be done much more aggressively when using tiny sensors, like those in compact models or phone-cams.

One argument is that by doing this, we’ve simply turned the matter into a software problem. Throw in Moore’s law, plus ever-more-clever programming, and we may get a better result than the big-pixel solution. I associate the name Nathan Myhrvold with this form of techno-optimism (e.g. here). Assuming the files were saved in raw format, you might even go back and improve photos shot in the past.

But it’s important to note the limits of this image-processing, as they apply to real cameras today.

Most inexpensive cameras do not give the option of saving raw sensor files. So before we actually see the image, the camera’s processor chip puts it through a series of steps:

Bayer demosaicing —> Denoising & Sharpening —> JPEG compression
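As a toy illustration of those three stages, here is a minimal sketch using numpy, scipy, and Pillow. The stand-ins are deliberately simple: a naive bilinear demosaic, a Gaussian blur for denoising, and a crude unsharp mask for sharpening. Real camera firmware is far more sophisticated; the point is just the order of operations.

    import numpy as np
    from scipy.ndimage import convolve, gaussian_filter
    from PIL import Image

    def demosaic_bilinear(bayer):
        """Naive bilinear demosaic of an RGGB mosaic (H x W float array)."""
        h, w = bayer.shape
        rgb = np.zeros((h, w, 3))
        # Masks marking which photosites carry which color filter (RGGB layout).
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask
        kernel = np.array([[0.25, 0.5, 0.25],
                           [0.5,  1.0, 0.5],
                           [0.25, 0.5, 0.25]])
        for channel, mask in enumerate((r_mask, g_mask, b_mask)):
            # Interpolate each color plane from its sparse samples
            # (normalized convolution gives bilinear interpolation here).
            rgb[..., channel] = (convolve(bayer * mask, kernel, mode="mirror")
                                 / convolve(mask, kernel, mode="mirror"))
        return rgb

    def denoise_then_sharpen(rgb, denoise_sigma=1.0, sharpen_sigma=2.0, amount=0.8):
        """Gaussian denoise, then a crude unsharp mask to restore edge contrast."""
        out = np.empty_like(rgb)
        for c in range(3):  # process each color channel independently
            denoised = gaussian_filter(rgb[..., c], denoise_sigma)
            blurred = gaussian_filter(denoised, sharpen_sigma)
            out[..., c] = denoised + amount * (denoised - blurred)
        return out

    # Fake a noisy capture: a uniform gray scene plus photon (Poisson) noise.
    rng = np.random.default_rng(0)
    bayer = rng.poisson(lam=100, size=(256, 256)).astype(float)

    out = denoise_then_sharpen(demosaic_bilinear(bayer))
    out8 = np.clip(out / out.max() * 255, 0, 255).astype(np.uint8)
    Image.fromarray(out8).save("processed.jpg", quality=85)   # the JPEG compression step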

Remember that each pixel on the sensor has a different color filter over it. The true color image must be reconstructed using some interpolation.

The problem is that photon noise affects pixels randomly—without regard to their assigned color. If it happens (and statistically, it will) that several nearby “red” pixels end up too bright (because of random fluctuations), the camera can’t distinguish this from a true red detail within the subject. So, false rainbow blobs can propagate on scales much larger than individual pixels:

100% Color Noise Sample

100% Crop of Color Noise

The next problem is that de-noising and sharpening actually tug in opposite directions. So the camera must make an educated guess about what is a true edge, sharpen that, then blur the rest.

This works pretty well when the processor finds a crisp, high-contrast outline. But low-contrast details (which are very important to our subjective sense of texture) can simply be smudged away.

The result can be a very unnatural, “watercolors” effect. Even when somewhat sharp-edged, blobs of color will be nearly featureless inside those outlines.

Remember this?

Noise Reduction Watercolors

Impressionistic Noise Reduction

Or, combined with a bit more aggressive sharpening, you might get this,

Sharp Smudge Sample

Pastel Crayons This Time?

Clearly, the camera’s guesses about what is true detail can fail in many real-world situations.

There’s an excellent technical PDF from the DxO Labs researchers, discussing (and attempting to quantify) this degradation. Their research was originally oriented towards cell-phone cameras (where these issues are even more severe); but the principles apply to any small-sensor camera dependent on algorithmic signal recovery.

Remember that image processing done within the camera must trade off sophistication against speed and battery consumption. Otherwise, camera performance becomes unacceptable. And larger files tax the write-speed and picture capacity of memory cards; they also take longer to load and edit in our computers.

So there is still an argument for taking the conservative approach.

We can subdivide sensors into more numerous, smaller pixels. But we should stop at the point which is sufficient for our purposes, in order to minimize reliance on this complex, possibly-flawed software image wrangling.

And when aberrations & diffraction limit the pixel count which is actually useful, the argument becomes even stronger.
