March 25, 2010
As someone who rarely photographs sports or wildlife, the world of long-zooms is not one I pay much attention to.
But lately, there’s trouble brewing in the “bridge camera” market; and it’s reached a point even I can’t ignore.
Yes, just as with megapixels, camera makers have launched another numbers race—this time over zoom range.
Currently, bragging rights go to Olympus, with their SP-800UZ (“ultrazoom,” geddit?). While it shares an alarming 30x zoom range with Fujifilm’s HS10, the Oly is biased more towards the telephoto end. And so, the SP-800UZ wins the zoom war with a zany maximum of 840e!
The outsized lens makes the SP-800UZ one rather odd-looking camera. And despite the SLR-ish hump on top, there is actually no eye-level viewfinder. You must frame using the back LCD—holding the camera away from you.
Now, for those who photograph birds, zoo animals, stadium sporting events, etc., I’ll admit the handiness of having a long zoom range.
But 840e is getting into crazy-magnification territory. Just locating the subject will be a challenge (especially with unsteady arm’s-length viewing). For reference, we’re talking more than double the magnification you would typically buy in binoculars.
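To put that in perspective, here is a quick sketch of the arithmetic. It relies on the common rule of thumb (an assumption, not a standard) that a 50 mm-equivalent view is roughly 1x, close to naked-eye magnification:

```python
# Rough "magnification" of an 840 mm-equivalent lens, using the common
# convention (an assumption, not a standard) that a 50 mm-equivalent
# view is about 1x, i.e. close to naked-eye magnification.

def equivalent_magnification(equiv_focal_mm, normal_mm=50.0):
    """Angular magnification relative to a 'normal' 50 mm-equivalent view."""
    return equiv_focal_mm / normal_mm

zoom_mag = equivalent_magnification(840)   # SP-800UZ at full zoom
print(f"840e behaves like roughly {zoom_mag:.1f}x")   # 16.8x
print("typical consumer binoculars: 7-10x")
```

Hand-holding a 17x view steady is exactly as hard as it sounds.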
Needless to say, aggressive two-stage antishake becomes mandatory in UltraZoom cameras. Even so, you’d better pray for strong sunlight to keep shutter speeds brief. And once your subjects are really in the distance, no lens will remove the milky haze of the atmosphere itself…
Nonetheless, all the major camera makers are now cranking up their UltraZoom specs. (Panasonic is being the most conservative, with “only” 18x on their FZ35.)
The styling of UltraZooms usually mimics the serious look of a DSLR. But in fact, all of these models employ the same little trick:
The imager in these cameras is nowhere near the APS-C format found in DSLRs. In fact, it’s under 8% of their sensor area. (A so-called 1/2.3″ chip is approximately 4.6 x 6.2 mm.)
This lets camera makers scale down all the lens dimensions by the same proportions. So, a focal range that would require a bazooka-like barrel on APS-C can be made smaller than a beer can instead.
And the volume of glass required for a given lens design (and thus, its weight) shrinks even more dramatically—to almost 1/40th. You can see why camera makers became so fond of this little gimmick.
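The scaling argument can be checked with a few lines of arithmetic, using the approximate sensor dimensions quoted above (exact chip sizes vary by a few percent, so the final ratios land in a range):

```python
# Sketch of how linear sensor scaling shrinks a lens design.
# APS-C taken as ~23.6 x 15.7 mm; a 1/2.3" chip as ~6.2 x 4.6 mm
# (figures approximate; real chips vary slightly).

aps_c_area   = 23.6 * 15.7          # mm^2
small_area   = 6.2 * 4.6            # mm^2
area_ratio   = small_area / aps_c_area
linear_scale = area_ratio ** 0.5    # scale factor for all lens dimensions

print(f"sensor area: {area_ratio:.1%} of APS-C")   # under 8%
print(f"linear scale: {linear_scale:.2f}")         # ~0.28
# Glass volume scales with the cube of the linear dimensions:
print(f"glass volume: 1/{1 / linear_scale**3:.0f}")
# roughly 1/40 to 1/50, depending on the exact dimensions assumed
```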
Also recall that the “brightness” of a lens (its widest f/stop) is actually a ratio: f/numbers are the focal length divided by the aperture diameter.
The physical diameter of the glass limits the second number; so as you zoom in, lens brightness drops. The Olympus SP-800UZ lens specs are 5.0–150 mm focal length; f/2.8–5.6 in aperture. That’s right—you sacrifice two whole f/stops at the long end of the zoom range.
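As a sanity check of the f-number arithmetic, using the SP-800UZ specs quoted above:

```python
import math

# f-number = focal length / entrance-pupil diameter, so with the glass
# diameter limited, the f-number climbs as focal length grows.

def stops_lost(f_wide, f_tele):
    """Brightness lost between two f-numbers, in f-stops."""
    return 2 * math.log2(f_tele / f_wide)

# SP-800UZ specs from above: 5.0-150 mm, f/2.8-5.6
print(f"entrance pupil, wide: {5.0 / 2.8:.1f} mm")   # ~1.8 mm
print(f"entrance pupil, tele: {150 / 5.6:.1f} mm")   # ~26.8 mm
print(f"f-stops lost at the long end: {stops_lost(2.8, 5.6):.0f}")  # 2
```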
In the case of the Olympus SP-800UZ, this raises another troubling question. To cram 14 megapixels into such a small sensor, each one can only measure 1.44 microns across.
So when you’re fully zoomed in, how sharp can your photo even be? The smallest point of light theoretically possible smears across a dozen pixels. And this is before considering lens aberrations (which are probably significant too, at the extremes of the zoom range).
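The numbers work out as follows, using the green-light Airy-disk rule of thumb (diameter of roughly 1.35 times the f-number, in microns). The horizontal pixel count is an assumed typical value for a 14 Mp chip, and the final tally depends on how you count partially covered pixels:

```python
# Pixel pitch and diffraction for a 14 Mp, 1/2.3" sensor, using the
# green-light Airy rule of thumb (diameter ~ 1.35 x f-number, microns).
# Dimensions are approximate; 4288 pixels across is an assumed value.

sensor_width_mm = 6.2       # approximate active width of a 1/2.3" chip
pixels_across   = 4288      # typical 14 Mp horizontal count (assumed)

pitch_um = sensor_width_mm * 1000 / pixels_across
airy_um  = 1.35 * 5.6       # f/5.6 at the long end of the zoom
pixels_covered = (airy_um / pitch_um) ** 2   # square-footprint estimate

print(f"pixel pitch: {pitch_um:.2f} um")     # ~1.45 um
print(f"Airy disk:   {airy_um:.1f} um")      # ~7.6 um
print(f"disk covers roughly {pixels_covered:.0f} pixels")
```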
So if an UltraZoom is what you need, don’t let me stop you from buying one.
But just don’t expect it to rewrite the laws of physics.
March 23, 2010
I can’t think of any greater achievement in press-release puffery than having your claims uncritically repeated in the New York Times.
As many of you have heard, a company called InVisage has announced a “disruptive” improvement in imager sensitivity, through applying a film of quantum dot material. The story has been picked up by lots of photography blogs and websites, including DP Review.
But don’t throw your current camera in the trash quite yet. The Times quoted InVisage’s Jess Lee as saying, “we expect to start production 18 months from now”—with the first shipping sensors designed for phone-cam use.
I have no way of knowing if InVisage’s claims will pan out in real-world products. But it’s interesting that a few people who work in the industry have skeptical things to say.
I do find it exaggerated to claim that the new technology is “four times” better than conventional sensors (95% versus 25% efficiency). Backside-illuminated sensors shipping today already have much higher sensitivity; and refinements to microlenses and fill-factors are continuing.
However one true advantage of the quantum-dot film is that incoming photons slam to a stop within a very shallow layer (just half a micron thick). This is in contrast to conventional photodiodes, where longer-wavelength (redder) photons might need to travel through 6-8 microns of silicon before generating an electron.
That difference might enable sensors without microlenses to absorb light efficiently even from very oblique angles. It would permit lens designs with shorter back focus (as with rangefinder cameras); and thus we could get more compact cameras overall.
Kodak’s full-frame KAF-18500 CCD, used in the Leica M9, could only achieve the same trick by using special offset microlenses. (And if we are to believe this week’s DxO Mark report, that sensor may have compromised image quality in other ways.)
But I’m still slapping my head at the most ridiculous part of this whole story:
To give an example of what the “quantum film” technology would enable, Mr. Lee noted that we could have an iPhone camera with a 12-megapixel sensor.
Can I scream now? Is the highest goal of our civilization to cram more megapixels into a phone-cam? And WHY? But even if we desired it, the über-iPhone is nonsense for other reasons.
As near as I can tell, an iPhone sensor is about 2.7 x 3.6 mm (a tiny camera module needs an extremely tiny chip). Space is so tight that a larger module would be unacceptable. So, to squish 12 Mp into the current sensor size, each pixel would need to be 0.9 microns wide!
An iPhone’s lens works at f/2.8. At that aperture, the size of the Airy disk (for green light) is 3.8 microns. This is the theoretical limit to sharpness, even if the lens is absolutely flawless and free from aberrations. At the micron scale, any image will be a mushy, diffraction-limited mess.
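The arithmetic behind those two numbers, using the sensor dimensions estimated above:

```python
import math

# Checking the hypothetical 12 Mp iPhone sensor, using the ~2.7 x 3.6 mm
# sensor estimate from the text and a perfect f/2.8 lens.

sensor_w_mm, sensor_h_mm = 3.6, 2.7
pixel_area_um2 = (sensor_w_mm * sensor_h_mm * 1e6) / 12e6
pixel_pitch_um = math.sqrt(pixel_area_um2)

airy_um = 1.35 * 2.8    # green-light Airy diameter at f/2.8

print(f"pixel pitch: {pixel_pitch_um:.1f} um")   # 0.9 um
print(f"Airy disk:   {airy_um:.1f} um")          # ~3.8 um
# Even a flawless lens smears each point of light across ~4 pixel widths.
```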
Also, remember that noise is inherent in the randomness of photon arrival. No matter what technology is used, teensy sensors will always struggle against noise. (The current “solution” is processing the image into painterly color smudges.)
And the dynamic range of a sensor is directly related to pixel size. Micron-scale pixels would certainly give blank, blown-out highlights at the drop of a hat.
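A rough model shows the relationship. The full-well density and read-noise figures below are illustrative assumptions, not measured values from any shipping sensor:

```python
import math

# Why dynamic range tracks pixel size: full-well capacity scales roughly
# with photosite area, while read noise shrinks much more slowly. The
# 1700 e-/um^2 full-well density and 5 e- read noise are illustrative
# assumptions, not measurements.

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range, in f-stops."""
    return math.log2(full_well_e / read_noise_e)

for pitch_um in (6.0, 2.0, 0.9):
    full_well = 1700 * pitch_um ** 2
    dr = dynamic_range_stops(full_well, 5)
    print(f"{pitch_um} um pixels: ~{full_well:,.0f} e- well, ~{dr:.1f} stops")
```

On this toy model, micron-scale pixels give up over five stops of highlight headroom relative to 6-micron ones.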
But let’s be optimistic that, eventually, this technology will migrate to more sensible pixel sizes. Even if the sensitivity increase only turns out to be one f/stop or so, it would still be welcome. A boost like that could give a µ4/3-sized sensor very usable ISO-1600 performance.
But before we proclaim the revolution has arrived, we’ll need to wait for a few more answers about InVisage’s technology:
- Is the quantum-dot film lightfast, or does it deteriorate over time?
- Is there crosstalk/bleeding between adjacent pixels?
- Is the sensitivity of the quantum-dot film flat across the visible spectrum, or significantly “peaked”? (This would affect color reproduction.)
- How reflective is the surface of the film? (Antireflection coatings could be used—but they affect sharpness and angular response)
So, stay tuned for more answers, maybe in the summer of 2011…
March 19, 2010
It’s shocking to realize it was only six years ago that the first digital SLRs costing under $1000 arrived.
The original Canon Digital Rebel (300D) and Nikon D70 were the first to smash that psychologically-important price barrier. These 6-megapixel pioneers effectively launched the amateur DSLR market we know today.
But one argument that raged at the time—and which has never completely gone away—was about the “crop format” APS-C sensor size. (The name derives from a doomed film system; but now just means a sensor of about 23 x 15 mm.)
Was APS-C just a temporary, transitional phase? Would DSLRs eventually “grow up,” and return to the 24 x 36 mm dimensions of 135 format?
This was a significant question, at a time when virtually all available lenses were holdovers from manufacturers’ 35mm film-camera systems.
The smaller APS-C sensor meant that every lens got a (frequently-unwanted) increase in its effective focal length. Furthermore, an APS-C sensor only uses about 65% of the film-lens image circle. Why pay extra for larger, heavier, costlier—and possibly dimmer—lenses than you need?
Also, after switching lens focal lengths to get an equivalent angle of view, the APS-C camera will give deeper depth of field at a given distance and aperture. (The difference is a bit over one f/stop’s worth.)
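Both the image-circle and depth-of-field figures fall out of the crop factor. A quick check, treating full frame as 36 x 24 mm and APS-C as roughly 23.6 x 15.7 mm:

```python
import math

# How much of the 35mm-film image circle an APS-C sensor actually uses,
# measured by the diagonal (dimensions approximate).
full_diag = math.hypot(36, 24)       # ~43.3 mm circle needed for film
apsc_diag = math.hypot(23.6, 15.7)   # ~28.3 mm used by APS-C
print(f"APS-C uses ~{apsc_diag / full_diag:.0%} of the film image circle")

# Matching angle of view and f-stop, depth of field deepens by about
# 2 * log2(crop) f-stops on the smaller format:
crop = 1.5                           # the usual nominal APS-C crop factor
dof_stops = 2 * math.log2(crop)
print(f"DoF difference: ~{dof_stops:.2f} f-stops")   # a bit over one stop
```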
But a more basic question was simply: Do APS-C sensors compromise image quality?
Well, sensor technology (and camera design) have gone through a few generations since 2004.
Microlenses and “fill factors” improved, so that even higher-pixel-count sensors improved sensitivity. (For one illustration, notice how Kodak practically doubled the quantum efficiency of their medium-format sensors between a 2007 sensor and the current one selected for the Pentax 645D.)
Today’s class-leading APS-C models can shoot with decent quality even at ISO 1600. And the typical APS-C 12 megapixel resolution is quite sufficient for any reasonable print size, exceeding what most 35mm film shooters ever achieved.
So it’s clear that APS-C has indeed reached the point of sufficiency for the great majority of amateur photographers.
Still, Canon and Nikon did eventually introduce DSLRs with “full frame” 24 x 36 mm sensors. They were followed by Sony, then Leica. In part, this reflects the needs of professional shooters, whose investments in full-frame lenses can be enormous.
And for a few rare users, files of over 20 megapixels are sometimes needed. In those cases, a full-frame sensor maintains the large pixel size essential for good performance in noise and dynamic range.
But these are not inexpensive cameras.
So the question has never completely gone away: Is APS-C “the answer” for consumer DSLRs? Or will full-frame sensors eventually trickle down to the affordable end of the market?
I have mentioned before an interesting Canon PDF Whitepaper which discusses the economics of manufacturing full-frame sensors (start reading at pg. 11).
It’s not simply the 2.5x larger area of the sensor to consider; it’s that unavoidable chip flaws drastically reduce yields as areas get larger. Canon’s discussion concludes,
[...] a full-frame sensor costs not three or four times, but ten, twenty or more times as much as an APS-C sensor.
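A toy model shows why yields collapse with die area. This uses the simple Poisson yield approximation from semiconductor textbooks; the defect density is a made-up example, not a figure from Canon’s whitepaper:

```python
import math

# Illustrative die-yield model (the standard Poisson approximation:
# yield = exp(-defect_density * area)). The 0.5 defects/cm^2 figure is
# hypothetical, chosen only to show how the gap compounds.

def poisson_yield(defects_per_cm2, die_area_cm2):
    """Fraction of dies expected to be flaw-free."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

apsc_area = 2.3 * 1.5      # cm^2, roughly APS-C
ff_area   = 3.6 * 2.4      # cm^2, full frame
d = 0.5                    # hypothetical defects per cm^2

y_apsc = poisson_yield(d, apsc_area)
y_ff   = poisson_yield(d, ff_area)
print(f"APS-C yield {y_apsc:.0%}, full-frame yield {y_ff:.0%}")

# Cost per good die scales with (area / yield), so the gap compounds:
cost_ratio = (ff_area / apsc_area) * (y_apsc / y_ff)
print(f"relative cost per good sensor: {cost_ratio:.0f}x")
```

With any plausible defect density, the cost multiple lands far above the bare 2.5x area ratio, which is the whole point of Canon’s argument.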
However, since that 2006 document was written, I have been curious whether the situation has changed.
Note that one cost challenge for full-frame sensors is the rarity of chip fabs that can produce masks of the needed size in one piece. “Stitching” together smaller masks adds to the complexity and cost of producing full-frame sensors; Chipworks was doubtful that the yield of usable full-frame parts was even 50%.
Thus, Chipworks’ best estimate was that a single full-frame sensor costs a camera manufacturer $300 to $400! (This compares to $70-80 for an APS-C sensor.)
And that’s the wholesale cost. What full-frame adds to the price the consumer pays must be higher still.
Thus, it seems unlikely the price of a full-frame DSLR will ever drop under $1000 (that magic number again)—at least, not anytime soon.
And actually, APS-C is pretty darned decent.
It would have been nice if APS-C had somehow acquired a snappier, less geeky name—maybe that’s still possible?
But it seems time to declare: It’s the standard.
March 15, 2010
Kodak has something of a tradition of announcing when their sensors are being used in high-profile cameras, such as Leica’s M8, M9, and S2—even giving the Kodak catalog numbers.
But their openness on this subject is a bit unusual.
As you may know, creating a modern semiconductor chip fab is staggeringly expensive—up to several billion dollars. So it’s understandable that behind the scenes, sensor chips are mainly manufactured by a few electronics giants.
And selling those chips has become a cut-throat, commodity business; so camera makers sometimes obtain sensors from surprising sources.
But it’s hard to trumpet your own brand superiority, while admitting your camera’s vital guts were built by some competitor. So many of these relationships are not public knowledge.
But if we pay careful attention… We might be able to make some interesting deductions!
Some of the big names in CMOS image sensors (“CIS” in industry jargon) are familiar brands like Sony, Samsung, and Canon. But cell-phone modules and other utilitarian applications lead the overall sales numbers; and in this market, the leader is a company called Aptina.
No, I didn’t know the name either. But that’s not surprising, since they were only recently spun off from Micron Technology.
Yes, that’s the same Micron who makes computer memory. As it turns out, many of the fab techniques used to produce DRAM apply directly to CMOS sensor manufacture.
Another of the world’s powerhouses in semiconductor manufacturing is Samsung. And it’s widely known that Samsung made the imaging chips used in many Pentax DSLRs. (It would have been hard to keep their relationship a secret: Samsung’s own DSLRs were identical twins of Pentax’s.)
Samsung currently builds a 14.6 megapixel APS-C chip, called the S5K1N1F. Not only was this used in Pentax’s K-7, but also in Samsung’s GX-20. And it’s assumed that Samsung’s new mirrorless NX10 uses essentially the same sensor.
Panasonic’s semiconductor division does offer some smaller CCD sensors for sale, up to 12 megapixels. But when it comes to MOS sensors, only their partner Olympus gets to share the Four-Thirds format, 12 Mp “Live MOS” chip used in the Lumix G series.
Meanwhile, it remains mystifying to me that the apparently significant refinements in the GH1 sensor don’t seem to have benefited any other Four Thirds camera yet. (Why?)
As I discussed last week, Sony’s APS-C chips apparently make their way into several Nikon models, as well as the Pentax K-x, the Ricoh GXR A12 modules, and probably the Leica X1.
But Sony has also brought out a new chip for “serious” compact cameras—intended to offer better low-light sensitivity despite its 2 micron pixels. It’s the 1/1.7 format, 10 Mp model ICX685CQZ. You can download a PDF with its specs here.
On the second page of that PDF, note the “number of recommended recording pixels” is 3648 × 2736.
And even Samsung’s newly-announced TL500—it’s the same 3648 x 2736!
None of these cameras provides 720p video, an omission that many reviewers have gnashed their teeth about. However, you’ll note in the Sony specs that 720p at 30 frames/sec is not supported by that sensor.
In Table 3 of that PDF, again notice the 3648 x 2736 “recommended” number of pixels. And sure enough, this matches the first Sony cameras released using the chip, the WX1 and TX1.
Now, it seems startling that even Canon and Samsung (who surely can manufacture their own chips) might go to Sony as an outside supplier.
But the economics of CMOS sensors differ from those of older CCD technology. CMOS requires much more investment up front, for designing more complex on-chip circuitry and creating all the layer masks. After that, though, the price to produce each part is lower.
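A toy amortization model makes the point. All dollar figures here are invented purely for illustration, not real industry numbers:

```python
# Toy amortization model: CMOS carries heavy up-front (NRE) cost but low
# per-unit cost; CCD the reverse. All dollar figures are hypothetical,
# chosen only to illustrate why CMOS vendors chase volume.

def unit_cost(nre_dollars, marginal_dollars, volume):
    """Average cost per chip after spreading the up-front cost."""
    return nre_dollars / volume + marginal_dollars

cmos_nre, cmos_marginal = 20e6, 10.0   # hypothetical
ccd_nre,  ccd_marginal  = 5e6,  30.0   # hypothetical

for volume in (100_000, 1_000_000, 10_000_000):
    cmos = unit_cost(cmos_nre, cmos_marginal, volume)
    ccd  = unit_cost(ccd_nre,  ccd_marginal,  volume)
    print(f"{volume:>10,} units: CMOS ${cmos:,.2f}  CCD ${ccd:,.2f}")
```

At low volumes the CCD wins; past a crossover point, the CMOS part becomes dramatically cheaper, and every extra unit sold (even to a competitor) improves the numbers.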
After creating a good CMOS chip, there is a strong incentive to sell the heck out of it, to ramp up volumes. Even to a competitor. So we may see more of this backroom horse-trading in sensor chips as time goes on.
But that’s something they didn’t blog about.
March 11, 2010
There are two interesting blips of Pentax news this week.
The first is Pentax’s long-awaited 645D medium-format DSLR. Note that the designation “645” was approximate even in the film era (the actual image area then was 56 x 41.5 mm). The 44 x 33 mm size of the new sensor makes the name completely arbitrary.
Now, ordinarily this blog wouldn’t have much to say about a 40-megapixel camera costing ~$10,000 (not to mention one that is so large and awkward-looking). But there are two points to make:
First, this is priced at half of what its direct competitors from Mamiya and Hasselblad cost. It’s cheaper than a lot of motorcycles. That gets into range even for obsessive hobbyists or wealthy dabblers.
Second, it uses a Kodak-built 40-megapixel sensor (it’s assumed to be the KAF-40000). Its 6-micron pixels are promised to offer high dynamic range and low noise. And coincidentally, those specs match a “rule of thumb” that I suggested two months ago.
Meanwhile, from this blog’s viewpoint, the week brought a much more significant bit of Pentax news: DxO Mark announced its test results for the K-x.
Keep in mind that DxO Labs only measures pure sensor performance in raw capture; they don’t evaluate a camera’s features, handling, or JPEG quality.
Still, it’s clear that the image quality of the K-x is a big leap forward for Pentax.
This is widely assumed to reflect a switch from Samsung to Sony as the sensor supplier for the K-x. As I mentioned Tuesday, Sony’s current 12 Mp APS-C sensor seems to be at the heart of some highly-respected models from Nikon, Ricoh, and (probably) Leica.
DxO Labs did find that (in common with other Pentax models), ISO 3200 and above have some noise reduction baked in, even with the nominally “raw” file.
This is undesirable in theory—doing the noise reduction later on a desktop computer offers more flexibility and processor horsepower. But for casual JPEG shooters, it’s probably welcome.
In any case, it seems the K-x has better image quality than Canon and Nikon’s current entry-level DSLRs. (DxO Labs, based in France, uses the European designation EOS 1000D for the camera known as the Rebel XS in North America.)
And the K-x also holds its own against the more-expensive Canon 500D and Nikon D5000. (The 500D is also called the Rebel T1i. Canon, how do you cook up these weird North American model numbers?)
In case anyone was worrying, reviews say that the K-x does a good job of turning the raw data into a nice-looking JPEG, too.
Today, the Pentax K-x has become the conscience of the camera market.
March 9, 2010
As recently as 2008, the digital-camera market had essentially split into two parts.
On the one side, there were full-featured (but bulky) DSLRs. On the other, there were small (but inadequate) point and shoots.
Today, there is great excitement as camera-makers grope to find some middle ground between those two extremes. The dream is to cram better image quality and more photographic control into a sub-DSLR package.
Counting Micro Four Thirds cameras and “serious compacts,” I can think of a good dozen such models on the market now.
But for a prospective camera shopper, it’s immensely frustrating that despite all the exciting new ideas floating around, none of the current models have “put it all together.” Each seems to combine some worthwhile virtue with some head-slapping flaw.
So let’s fantasize for a moment. What if we could choose all the strong points of many different cameras, and glom them all together into a single one?
We start with the sensor, of course.
There’s a recent crop of cameras praised for a 12 megapixel, APS-C sensor of good quality—particularly at higher ISOs. You can see this in reviews of the Nikon D5000, the Pentax K-x, and the A12 module used in Ricoh’s GXR.
Interestingly, every one of these shares the same 4288 x 2848 resolution—down to the exact pixel. It’s public knowledge that Sony supplies the sensor used in the Nikon; so it’s suggestive that these are all close cousins of a particular Sony chip design.
I’d also be perfectly happy with the Four Thirds sensor used in the Panasonic GH1. It tops all other 4/3 chips in performance; it also permits native 3:2 aspect ratio shooting. But Panasonic seems to have decreed that it will only be used in their premium-priced 1080p video models. That’s okay: the Sony chip is apparently cheap enough to stick into $500 cameras.
Leica’s X1 shows that it’s physically possible to fit a great APS-C sensor into a very svelte, handsome body. (Just ignore its staggering price.) Some are sure its chip is also a Sony; although the 4272 x 2856 pixel specs don’t quite match.
But the weakness of the X1 is that you’re stuck with a non-interchangeable, f/2.8 lens. Obviously that won’t do.
Eventually, Sony intends to join the mirrorless APS-C party; but until then, Samsung’s NX lensmount is the only one available for a mirrorless APS-C body. Happily, there’s already a very decent “wide normal” f/2.0 pancake available, as well as adapters for other mounts.
Naturally it would be preferable to get even brighter lenses: e.g., Panasonic’s excellent 20mm f/1.7 (for µ4/3) is another half-stop brighter. And while we’re mentioning Panasonic: We would definitely want to use their speedy contrast-detect autofocus system, taken from the G-series cameras. It’s clearly superior to Olympus, Leica and Ricoh’s versions.
So, we’ve covered the “IL” of our EVIL camera (interchangeable lens). What about the “EV” (electronic viewfinder)?
Some prefer an electronic viewfinder to be integral with the body; but I think it’s more flexible to make it removable. When you want maximum compactness, you can leave it at home in a drawer. Or when using prime lenses, some may prefer a dedicated optical viewfinder:
Leica helpfully adds a green focus-confirmation LED to the back of the X1 camera body, which you can see in your peripheral vision when using this accessory viewfinder. But since we’re going to have a socket for an electronic viewfinder anyway… Why not have a connector on the OVFs too, to light them up with a few essential display items?
While many are still wary of electronic viewfinders, there are currently several very decent EVF implementations. But, we have to give the nod to the Olympus VF-2 as the one receiving the most favorable press. It’s a 1.4 million dot display, and is nicely adjustable to different angles. So let’s include that in our EVIL mongrel:
Oh, and the Olympus E-Pens prove that you can have in-body image stabilization without having the camera become a chubster—so let’s add that too.
March 7, 2010
Panasonic owes my cat Madeleine an apology.
A few days ago, when specs and images of the new Lumix G models first leaked, I made such aggravated gargling and sputtering noises that she bolted out of the room in fright.
Essentially, Panasonic’s old model G1 (the very first Micro Four Thirds introduction) has now been split into two new models instead.
The G10 is the lighter, de-contented version. It offers a cheapened electronic viewfinder (only 202,000 dots) and no articulating screen. It does add 720p video (which is practically mandatory in today’s market)—but only using the Motion JPEG codec.
The G2 is basically the G1 with the addition of 720p AVCHD Lite video, and a touch-screen interface. It appears to keep the same 1.4 million dot electronic viewfinder as the G1. DP Review found the G1’s EVF large and bright, but jerky and grainy in low light.
The touchscreen can select AF points, or flick through recorded photos (for reference, it has a smaller screen diagonal, but the same resolution as an iPhone or iPod Touch).
The distinction between the two video codecs is this: Motion JPEG compresses each frame individually. This gives good quality, but fills memory cards quickly. AVCHD uses more sophisticated motion-estimation between frames, and so it can achieve much higher file compression. Not all video software can handle AVCHD yet.
Anyway, let’s pause for a moment and run down all the things these two models are not:
They do not introduce any new sensor; nor do they use the proven-superior chip from the GH1.
(Imaging Resource notes that the new models have an improved processor. The new ISO 6400 setting is probably a result of this greater noise-suppressing capability. But I remain skeptical.)
There’s no native multi-aspect-ratio option; only reduced-size crops of the 4:3 area.
They are not new “coat-pocketable” compact models, as was briefly hinted. The G2 and G10 are essentially the same size as the G1.
This is quite strange, as the faux-DSLR styling of the G1 was widely criticized as too conservative and boring. If the goal is to lure over point & shooters who are put off by the bulk of a DSLR, it’s hard to see the point.
There is a new, seemingly less-expensive kit zoom. But there was no announcement of more compact, bright primes like Panasonic’s nice 20mm f/1.7 pancake.
Eventually my cat did forgive me. But I’m still pretty disappointed in Panasonic.
March 5, 2010
If you’re interested in a behind-the-scenes peek into the imaging-chip industry, check out the blog “Image Sensors World.”
Much of this revolves around cell-phone cameras, which today are by far the largest consumer of imaging chips. And that’s a market where the drive for miniaturization is even more extreme than with point & shoot cameras. For a phone-cam to boast 2 megapixels, 4 megapixels, or more, each pixel must be tiny.
At that scale, the light-gathering area of each pixel is so minuscule that back-side illumination practically becomes mandatory. The reasons are well explained in OmniVision’s “technology backgrounder” PDF.
This document’s introduction says,
“Evidently, pixels are getting close to some fundamental physical size limits. With the development of smaller pixels, engineers are asked to pack in as many pixels as possible, often sacrificing image quality.”
Which is an amusingly candid thing to say—considering that they are selling the aforementioned chips packed with “as many pixels as possible.”
What are these “fundamental limits”? Strangely, OmniVision’s document never once mentions the word “diffraction.” But as I’ve sputtered about before, with pixels the size of bacteria, diffraction becomes a serious limitation.
Because of light’s wavelike nature, even an ideal, flawless lens cannot focus light to a perfect point. Instead, you get a microscopic fuzzy blob called the Airy disk.
Now, calling it a “disk” is slightly deceptive: It is significantly brighter in the center than at the edge. Thus, there is still some information to extract by having pixels smaller than the Airy disk. But by the time the Airy disk covers many pixels, no further detail is gained by “packing in” additional ones.
Our eyes are most sensitive to light in the color green. For this wavelength, the Airy disk diameter in microns is the f/ratio times 1.35. (In practice, lens aberrations will make the blur spot larger than this diffraction limit.)
But even using a perfect lens that is diffraction-limited at f/2.3, the Airy disk would cover four 1.1 micron pixels.
A perfect lens working at f/3.5 (which is more realistic for most zooms) will have an Airy disk covering nine pixels of 1.1 micron width. This is one of the “fundamental physical size limits” mentioned in OmniVision’s document.
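The diffraction rule of thumb above can be wrapped into a couple of small functions. The 1.35 factor is the green-light approximation already quoted; exactly how many whole pixels fall under the disk depends on how partially covered ones are tallied:

```python
# The Airy-disk rule of thumb as reusable functions. The 1.35 factor is
# the green-light (~550 nm) approximation from the text.

def airy_diameter_um(f_number, scale=1.35):
    """Approximate Airy-disk diameter (microns) for green light."""
    return scale * f_number

def disk_span_pixels(f_number, pitch_um):
    """How many pixel widths the Airy disk spans."""
    return airy_diameter_um(f_number) / pitch_um

print(f"f/2.3, 1.1 um pixels: {disk_span_pixels(2.3, 1.1):.1f} px across")
print(f"f/3.5, 1.1 um pixels: {disk_span_pixels(3.5, 1.1):.1f} px across")
```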
Manufacturing a back-illuminated chip is quite complex. And for OmniVision to be able to crank them out in quantity is a technological tour de force. As I wrote earlier, there are still a few tweaks left to make imaging chips more sensitive per unit area; this is one of them.
Perhaps this helps explain another curiously candid statement I saw recently. Sony executive Masashi “Tiger” Imamura was discussing the “megapixel race” in a PMA interview with Imaging Resource. And he said,
“…making the pixel smaller on the imager, requires a lot of new technology development. [...] So, as somebody said, the race was not good for the customers, but on the other hand, good for us to develop the technologies. Do you understand?”
March 2, 2010
DP Review’s first test of a Ricoh GXR module has alerted me that I’d misunderstood Ricoh’s naming scheme for their weird, unique lens/sensor units.
In the GXR system nomenclature, “A12” refers just to the sensor’s size (APS-C) and its 12-megapixel resolution. But the full name of a module must also include the lens focal length and f/ratio. (Well. That ought to roll off the tongue effortlessly.)
So the new Ricoh module announced during PMA is also called “A12.” It’s just a different one, with a wider lens. Got that?
What I did not understand is that Ricoh is expressing all these focal lengths in “35mm equivalents.” As I’ve ranted about before, this is a needlessly-confusing convention which perverts the actual meaning of the word “millimeters.”
Anyway. Ricoh’s currently available A12 module is the 50e, f/2.5 one. That is to say, it is an inexplicably-dim “normal” lens, although one with macro focusing capability. DP Review liked the image quality of the 50e/2.5 module; but they found its autofocus speed rather poor.
The GXR module coming up later this year will be a wide-angle 28e version, also f/2.5.
It’s the pricing of the GXR body+module system that makes no sense. You could pick up a Canon S90 and a Panasonic GF1 together for nearly the same money. The only possible justification would be if you were wildly in love with the GXR’s menu system, or its $250 auxiliary electronic viewfinder. The latter scenario is questionable, since Olympus’s VF-2 is generally rated the nicest one currently available.
Also, lens speeds of f/2.8 and f/2.5 are simply unacceptable in this price bracket (are you listening, Leica X1?) Panasonic’s pancake 20/1.7 lens proves that even a bright lens (where the moving glass elements must be heavier) can have fine autofocus performance.
Count me among the many who have said, “Ricoh, I just don’t get it.”