BSI: No Panacea

May 21, 2010

In a few earlier posts I have mentioned the new generation of Sony sensors boasting “back-side illumination,” and marketed as Exmor-R (as distinct from Sony’s conventional sensors, just branded Exmor).

Back-side illumination (BSI in the industry jargon) is a tricky and costly chip-fabrication technique, where after depositing all the wiring traces on a silicon wafer, the substrate is flipped over and almost entirely thinned away. This leaves the wiring on the underside of the light-sensitive photodiodes (as Sony describes here), so these unobstructed pixels will theoretically collect more light.

BSI is promoted as one of the technological breakthroughs which might help save image quality, even as manufacturers race to cram more megapixels into tiny sensor areas. In fact, the IMX050CQK scales its pixel count back to 10 Mp, compared to the 12 and 14 Mp that have become increasingly common in the point & shoot market.

Sony BSI Sensor

A Whizzy Small Sensor is Still A Small Sensor

Sony first introduced the chip in its own models in the fall of 2009, for example in the WX1. But clearly Sony found it advantageous to spread the sensor’s development costs over a larger production run, and apparently they’ve aggressively marketed the chip to other camera makers as well. Pretty much any 10 Mp camera sold this year advertising a backside-illuminated sensor uses it. It seems particularly popular in today’s nutty “Ultra Zoom” market segment.

So I was interested to read the review just posted by Jeff Keller of Nikon’s P100 ultrazoom camera, which uses this chip. See his conclusions here.

As reviews of these new BSI-based cameras filter out, the word seems to be that they do offer decent image quality—but hardly anything revolutionary. If their high-ISO images look smooth, it seems to be partly thanks to noise reduction processing, which can destroy detail and add unnatural, crayon-like artifacts.


More Sensor Nonsense

March 23, 2010

I can’t think of any greater achievement in press-release puffery than having your claims uncritically repeated in the New York Times.

As many of you have heard, a company called InVisage has announced a “disruptive” improvement in imager sensitivity, through applying a film of quantum dot material. The story has been picked up by lots of photography blogs and websites, including DP Review.

But don’t throw your current camera in the trash quite yet. The Times quoted InVisage’s Jess Lee as saying, “we expect to start production 18 months from now”—with the first shipping sensors designed for phone-cam use.

InVisage Prototype

Sensitivity: Good. Hype: Bad

I have no way of knowing if InVisage’s claims will pan out in real-world products. But it’s interesting that a few people who work in the industry have skeptical things to say.

I do find it exaggerated to claim that the new technology is “four times” better than conventional sensors (95% versus 25% efficiency). Backside-illuminated sensors shipping today already reach efficiencies well above that 25% baseline; and refinements to microlenses and fill factors are continuing.

However one true advantage of the quantum-dot film is that incoming photons slam to a stop within a very shallow layer (just half a micron thick). This is in contrast to conventional photodiodes, where longer-wavelength (redder) photons might need to travel through 6-8 microns of silicon before generating an electron.

That difference might enable sensors without microlenses to absorb light efficiently even from very oblique angles. It would permit lens designs with shorter back focus (as with rangefinder cameras); and thus we could get more compact cameras overall.

Kodak’s full-frame KAF-18500 CCD, used in the Leica M9, could only achieve the same trick by using special offset microlenses. (And if we are to believe this week’s DxO Mark report, that sensor may have compromised image quality in other ways.)

But I’m still slapping my head at the most ridiculous part of this whole story:

To give an example of what the “quantum film” technology would enable, Mr. Lee noted that we could have an iPhone camera with a 12-megapixel sensor.

Can I scream now? Is the highest goal of our civilization trying to cram more megapixels into a phone-cam? And WHY? But even if we did want it, the über-iPhone is nonsense for other reasons.

As near as I can tell, an iPhone sensor is about 2.7 x 3.6 mm (a tiny camera module needs an extremely tiny chip). Space is so tight that a larger module would be unacceptable. So, to squish 12 Mp into the current sensor size, each pixel would need to be 0.9 microns wide!

An iPhone’s lens works at f/2.8. At that aperture, the Airy disk (for green light) is 3.8 microns across. This is the theoretical limit to sharpness, even if the lens is absolutely flawless and free of aberrations. With pixels under a micron wide, any image will be a mushy, diffraction-limited mess.
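
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python (the 2.7 x 3.6 mm sensor dimensions and the ~550 nm “green” wavelength are the assumptions stated above):

```python
import math

# Assumed phone-cam sensor dimensions (mm) and hypothetical pixel count, per the text above
sensor_w_mm, sensor_h_mm = 3.6, 2.7
pixel_count = 12e6

# Pixel pitch if 12 Mp were squeezed onto that area
pitch_um = 1000 * math.sqrt(sensor_w_mm * sensor_h_mm / pixel_count)

# Airy disk diameter for ~550 nm green light: d = 2.44 * wavelength * f-number
airy_um = 2.44 * 0.55 * 2.8

print(f"pixel pitch ~{pitch_um:.2f} um, Airy disk ~{airy_um:.1f} um at f/2.8")
# ~0.90 um pixels under a ~3.8 um blur spot: the diffraction blur is
# roughly four pixel-widths across.
```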

Also, remember that noise is inherent in the randomness of photon arrival. No matter what technology is used, teensy sensors will always struggle against noise. (The current “solution” is processing the image into painterly color smudges.)

And the dynamic range of a sensor is directly related to pixel size. Micron-scale pixels would certainly give blank, blown-out highlights at the drop of a hat.

But let’s be optimistic that, eventually, this technology will migrate to more sensible pixel sizes. Even if the sensitivity increase only turns out to be one f/stop or so, it would still be welcome. A boost like that could give a µ4/3-sized sensor very usable ISO-1600 performance.

But before we proclaim the revolution has arrived, we’ll need to wait for a few more answers about InVisage’s technology:

  • Is the quantum-dot film lightfast, or does it deteriorate over time?
  • Is there crosstalk/bleeding between adjacent pixels?
  • Is the sensitivity of the quantum-dot film flat across the visible spectrum, or significantly “peaked”? (This would affect color reproduction)
  • How reflective is the surface of the film? (Antireflection coatings could be used—but they affect sharpness and angular response)

So, stay tuned for more answers, maybe in the summer of 2011…

If you’re interested in a behind-the-scenes peek into the imaging-chip industry, check out the blog “Image Sensors World.”

Much of this revolves around cell-phone cameras, which today are by far the largest consumer of imaging chips. And that’s a market where the drive for miniaturization is even more extreme than with point & shoot cameras. For a phone-cam to boast 2 megapixels, 4 megapixels, or more, each pixel must be tiny.

And it’s finally happened: A company called OmniVision has introduced the industry’s first 1.1 micron pixel. That’s about 60% of the area of today’s typical point & shoot pixels.

At that scale, the light-gathering area of each pixel is so minuscule that back-side illumination practically becomes mandatory. The reasons are well explained in OmniVision’s “technology backgrounder” PDF.

Back Side Illumination

OmniVision Explains Back Side Illumination

This document’s introduction says,

“Evidently, pixels are getting close to some fundamental physical size limits. With the development of smaller pixels, engineers are asked to pack in as many pixels as possible, often sacrificing image quality.”

Which is an amusingly candid thing to say—considering that they are selling the aforementioned chips packed with “as many pixels as possible.”

What are these “fundamental limits”? Strangely, OmniVision’s document never once mentions the word “diffraction.” But as I’ve sputtered about before, with pixels the size of bacteria, diffraction becomes a serious limitation.

Because of light’s wavelike nature, even an ideal, flawless lens cannot focus light to a perfect point. Instead, you get a microscopic fuzzy blob called the Airy disk.

Now, calling it a “disk” is slightly deceptive: It is significantly brighter in the center than at the edge. Thus, there is still some information to extract by having pixels smaller than the Airy disk. But by the time the Airy disk covers many pixels, no further detail is gained by “packing in” additional ones.

Our eyes are most sensitive to light in the color green. For this wavelength, the Airy disk diameter in microns is the f/ratio times 1.35. (In practice, lens aberrations will make the blur spot larger than this diffraction limit.)

But even using a perfect lens that is diffraction-limited at f/2.3, the Airy disk would cover four 1.1 micron pixels.

Airy Disk versus Pixels

Pixels much smaller than the Airy Disk add no detail

A perfect lens working at f/3.5 (which is more realistic for most zooms) will have an Airy disk covering nine pixels of 1.1 micron width. This is one of the “fundamental physical size limits” mentioned in OmniVision’s document.
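
As a rough cross-check of those figures, here is the same rule of thumb as a tiny Python sketch (exactly how many pixels you count as “covered” depends on how you tile the blur circle, so the comments stick to pixel widths):

```python
PIXEL_PITCH_UM = 1.1   # OmniVision's 1.1 micron pixel

def airy_diameter_um(f_number):
    """Airy disk diameter for green light, using the ~1.35 x f-number rule of thumb."""
    return 1.35 * f_number

for f_number in (2.3, 3.5):
    d = airy_diameter_um(f_number)
    print(f"f/{f_number}: Airy disk ~{d:.1f} um, ~{d / PIXEL_PITCH_UM:.1f} pixel widths across")
# f/2.3 -> ~3.1 um, nearly three pixel widths (a patch of several pixels)
# f/3.5 -> ~4.7 um, more than four pixel widths (a 3 x 3 patch or larger)
```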

Manufacturing a back-illuminated chip is quite complex. And for OmniVision to be able to crank them out in quantity is a technological tour de force. As I wrote earlier, there are still a few tweaks left to make imaging chips more sensitive per unit area; this is one of them.

Perhaps this helps explain another curiously candid statement I saw recently.  Sony executive Masashi “Tiger” Imamura was discussing the “megapixel race” in a PMA interview with Imaging Resource. And he said,

“…making the pixel smaller on the imager, requires a lot of new technology development. […] So, as somebody said, the race was not good for the customers, but on the other hand, good for us to develop the technologies. Do you understand?”

A Quick “Swindle” Footnote

February 19, 2010

The stats show that my post, “the Great Megapixel Swindle,” continues to get quite a lot of traffic. That’s a little unnerving, given that it was written as a quick, off-the-cuff tantrum. If I’d known how many folks would read it, I would have said several things more precisely.

Around the internet, the “Swindle” spawned many, many discussion threads—I can’t keep track of them all. I did try to respond to many of the questions and misunderstandings that I was seeing, in a followup post here. And I’ve expanded on the same issues in many other posts as well.

Today I’ve noticed a thread over at Rangefinder Forum which raises the question, “isn’t it unfair to use a crop from the background? Naturally that looks bad, since it’s out of focus.”

Noise Reduction Watercolors

Impressionistic Noise Reduction

First, the real point this example makes is this: Cameras with tiny pixels must use aggressive post-processing to reduce noise; and this can cause strange, unnatural-looking artifacts. (There’s more on the subject here.)

But on the question of depth of field, I should clarify a bit.

The EXIF data for this shot shows the Olympus FE-26 was set at the wide end of its zoom range—namely 6.3mm. Such a short focal length implies extreme depth of field. The f/stop used was f/4.6.

The H. Lee White is a 700-foot-long Great Lakes freighter. I’m not exactly sure how far the camera was from the subject; but it’s doubtful that it was closer than 15 feet.

Now, we can’t blindly apply standard depth-of-field tables here. The standard calculation (e.g. if you scroll down to Olympus FE-26 here) uses a circle of confusion of 0.005 mm for this sensor format.

But when you are looking at such an extreme enlargement, the assumptions behind that figure break down (the standard CoC is referenced to viewing an 8×10 print at a moderate distance).

But consider that this camera packs 12 megapixels onto a sensor of about 6 x 4.5 mm, so each individual pixel is only about 1.53 microns wide. In other words, 0.00153 mm.

Clearly, the meaningful circle of confusion can’t be smaller than one pixel. Given the resolution loss that happens with Bayer interpolation, 0.002 mm seems a realistic CoC.

FE-26 Depth of Field Calculation

FE-26 Depth of Field Calculation

So the out-of-focus blur is actually negligible compared to the pixel size in this case.

If you’d like to explore this further yourself, you can use an alternate depth-of-field calculator that lets you input arbitrary values.
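
Or, if you prefer to compute it directly, here is a minimal sketch of the standard depth-of-field formulas (the 6.3 mm focal length, f/4.6, ~15 ft subject distance and 0.002 mm CoC all come from the discussion above):

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm):
    """Near and far limits of depth of field (classic thin-lens approximation)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        return near, float("inf")
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# FE-26 at its wide end: 6.3 mm, f/4.6, subject roughly 15 feet away, pixel-scale CoC
near, far = dof_limits(6.3, 4.6, 15 * 304.8, 0.002)
far_text = "infinity" if far == float("inf") else f"{far / 1000:.1f} m"
print(f"in focus from ~{near / 1000:.1f} m to {far_text}")
# Even with a 0.002 mm circle of confusion, everything from a couple of meters
# out to infinity is acceptably sharp -- so the background blur really is negligible here.
```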

Throw Bits At It? Not Always.

February 16, 2010

Today, some musings that are a bit more (har har) abstract.

Sometimes we become numbed by the perpetual escalation of tech specs. The mindset of the computer industry, where each new generation promises more, faster & bigger, seems to be the new normal.

And it can become a self-fulfilling prophecy. If it is technically possible to ratchet up some spec number, inevitably we’ll choose to. That’s what keeps people buying!

But there’s one nice example of an industry that dangled ever-increasing numbers in front of consumers—who then yawned and said “no thanks.”

How many DVD-Audio or “Super Audio CD” titles do you own? (Okay, I’m sure someone out there is an enthusiastic adopter. But I mean, the average person.)

SACD Player

Did You Buy SACD? Me Either.

The original standard for music CDs uses 44,100 samples per second. Each sound sample has 16 bits, meaning it encodes the full range from soft to loud in about 65,000 discrete levels.

The CD standard was adopted around 1980; at the time, there was pressure to keep the bit rate low enough that the player electronics would not be prohibitively expensive.

No one knew how data-handling ability would explode over the following decades. When you see a speed “60x” or “133x” on a camera memory card, those are referenced against the original CD-player (or, 1x CD-ROM) bit rate.

So in the audiophile world there were grumbles from the start that the CD bit rate was insufficient.

With only 16 bits, the quietest elements of music (like note decays and room ambience) must be recorded with somewhat coarse resolution. It’s similar to how shadow areas in digital photos can look noisier than highlights. A standard that used 20 bits or so would have preserved more fine “texture” in low-amplitude sounds.

And before digitizing sound for CD, any frequency of 22,050 cycles/second or higher must get chopped off. A high-pitched rising chirp that went past that frequency would make the A/D converter miss the true peaks and troughs of the wave; the converter would misrepresent it as a falling note, back down in the audible spectrum.

For CD audio, 22 kHz is the “Nyquist Limit,” and you need an “antialias filter” to block higher frequencies (yes, these exact same principles apply to digital-camera sensors). But there were complaints that the antialias filters degraded sound quality.
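
To see that folding happen, here is a small illustrative sketch using NumPy (the 30 kHz test tone is my own arbitrary choice):

```python
import numpy as np

fs = 44_100                                   # CD sampling rate
t = np.arange(fs) / fs                        # one second of sample times
tone = np.sin(2 * np.pi * 30_000 * t)         # a 30 kHz tone: above the 22.05 kHz Nyquist limit

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(fs, d=1 / fs)
print(f"energy shows up at {freqs[spectrum.argmax()]:.0f} Hz")   # ~14,100 Hz
# The 30 kHz input folds back to 44,100 - 30,000 = 14,100 Hz: a phantom lower
# pitch, which is exactly what the antialias filter exists to prevent.
```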

And while most adults can’t hear steady pitches much above 16 kHz, very brief clicks at a higher theoretical frequency might contribute some “edge” to percussion and note attacks. A lot of professionals preferred to record at 48 kHz instead. (You can go higher today, though bandwidth limitations in the analog realm become significant.)

Folks who produce music may have reasons for 24-bit sampling (tracks are often put through computation-intensive effects; you don’t want rounding errors), but 20-bit delivery covers an excellent dynamic range. Even taking a 20-bit master and dithering it down to a 16-bit release version can work well.

I apologize to all of you whose eyes glazed over during those past few paragraphs. The truth is, most of us found CD sound quality perfectly adequate.

If you do the math, the bit rate used for CD sound is about 1.4 Mbit/sec (in stereo). A reasonable standard that would have handled any outstanding quality issues might have been 20 bits at 48 kHz. That works out to 1.9 Mbit/sec.

But the arrival of DVD technology offered a huge increase in disc capacity. It was a great opportunity to sell a newer, zingier, whizzier-spec music format too.

So a “format war” broke out, but based on bit rates that were sort of crazy. Stereo in Sony’s SACD standard burns up 5.6 Mbit/sec—quadrupling the CD rate. DVD-A is a “family” of standards (a standard that doesn’t standardize); but its highest supported stereo rate is 9.2 Mbits/sec!
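
For reference, the arithmetic behind all of those bit rates (a quick sketch; I’m taking DVD-A’s top stereo mode as 24 bits at 192 kHz, and SACD’s DSD stream as 1 bit at 2.8224 MHz):

```python
def stereo_mbit_per_sec(sample_rate_hz, bits_per_sample, channels=2):
    """Raw (uncompressed) bit rate in megabits per second."""
    return sample_rate_hz * bits_per_sample * channels / 1e6

print(stereo_mbit_per_sec(44_100, 16))       # CD:                  ~1.4 Mbit/s
print(stereo_mbit_per_sec(48_000, 20))       # 20-bit / 48 kHz:     ~1.9 Mbit/s
print(stereo_mbit_per_sec(2_822_400, 1))     # SACD (1-bit DSD):    ~5.6 Mbit/s
print(stereo_mbit_per_sec(192_000, 24))      # DVD-A at its limit:  ~9.2 Mbit/s
```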

A leading writer in the digital-audio field once told me in an email that the reasons for these bit rates had more to do with “quieting the lunatic fringe” than with any technical justification.

But the public treated these new audio formats with indifference. SACD has found a niche in classical music, but most folks are completely unaware of it. (Even though, soon enough, millions did go out to buy a new disc format: Blu-ray.)

You all know what actually happened with music: Buying it online, and being able to take it everywhere in your pocket, totally changed the game.

iPod Mini

What I, And Probably You, Really Bought

So with downloadable music, the bit rate actually plunged instead. Today we’re buying music encoded at 1/5th or even 1/8th the bit rate of the old CD standard! (Psychoacoustically intelligent data compression makes it possible.)

So… does this have anything to do with photography?

Camera manufacturers today (even those making enthusiast models) continue to use megapixels as the spec that defines “improvement.” Every year, more bits!

This shows a depressing lack of creativity. Past the point where this offers any real value, it’s just mindlessly chasing a number.

What we need is a serious rethink—to create something so novel and desirable that any talk about pixel count becomes irrelevant.

My feeling is that the game-changer for digital cameras would be a radical improvement in low-light capability. (This parallels the opinions of that recent Gizmodo article.)

We’ve suffered through decades of terrible point & shoots—whose slow zooms and limited sensitivity demanded nuclear-blast electronic flash for every indoor shot.

Flash is blinding, conspicuous, annoying to bystanders, and quite rightly prohibited at most museums and concerts. It drags out the lag time before shots.

It’s also a form of lighting which makes people look like shit.

Consider the fraction of our days we spend indoors, often under marginal illumination. Living rooms, restaurants, etc.—isn’t this where our real lives happen? Wouldn’t it be amazing to record those moments realistically, accurately, but without blinding and ugly flash?

What if you had a camera that could shoot at ISO 1600, cleanly? What about a camera where anti-shake let you trust shooting at 1/15th sec.? Plus a lens of f/1.7 or f/1.4—scooping up four times as much light as a typical f/2.8 zoom?

Then, people could take photos freaking anywhere. Without flash. The light of a single candle is enough! (That’s LV 2, if you’re wondering.)
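
Here’s the rough exposure arithmetic behind that claim (a sketch; “LV” is taken as light value referenced to ISO 100):

```python
import math

def dimmest_scene_lv(f_number, shutter_s, iso):
    """Lowest scene light value (ISO 100 reference) these settings can expose properly."""
    settings_ev = math.log2(f_number ** 2 / shutter_s)   # EV implied by aperture + shutter
    return settings_ev - math.log2(iso / 100)            # credit the ISO gain

print(f"f/1.4, 1/15 s, ISO 1600 handles scenes down to ~LV {dimmest_scene_lv(1.4, 1/15, 1600):.1f}")
# ~LV 1 -- comfortably below candlelight at around LV 2.
```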

Now, to get a lens that fast, we’d probably need to lose the zoom.

Oh noes! Cameramakers’ second most-flogged spec number is zoom range. Our 12x is better than their 10x! I can hear the howls already: “Who would buy a camera without a zoom?”

Well, how many people use cell phones as their main camera today? Those have no optical zoom.

Some used to ask, “who would pay for a compressed MP3 when you can own the real physical disc?”

People who appreciate convenience. People who want technology to fit into their real lives. Us.

Some Theory About Noise

February 14, 2010

The first thing to understand about picture noise (aka grain, speckles) is that it’s already present in the optical image brought into focus on the sensor.

Even when you photograph something featureless and uniform like a blank sky, the light falling onto the sensor isn’t creamy and smooth, like mayonnaise. At microscopic scales, it’s lumpy & gritty.

This is because light consists of individual photons. They sprinkle across the sensor at somewhat random timings and spacings. And eventually you get down to the scale where one tiny area might receive no photons at all, even as its neighbor receives many.

Perhaps one quarter of the photons striking the sensor release an electron—which is stored, then counted by the camera after the exposure. This creates the brightness value recorded for each pixel. (There is also a bit of circuit noise sneaking in, mostly affecting the darkest parts of the image.)

But no matter how carefully a camera is constructed, it is subject to photon noise—sometimes called “shot noise.” You might also hear some murmurs about Poisson statistics, the math describing how this noise is distributed.
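
You can watch shot noise emerge from nothing but counting statistics (a minimal simulation with NumPy; the photon counts are arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A perfectly uniform "blank sky": the only variation is the randomness of photon arrival.
for mean_photons in (25, 100, 10_000):                 # tiny pixel ... large pixel
    counts = rng.poisson(mean_photons, size=100_000)   # photons landing in 100,000 pixels
    snr = counts.mean() / counts.std()
    print(f"{mean_photons:>6} photons per pixel -> signal-to-noise ratio ~{snr:.0f}")
# SNR grows as the square root of the photon count (~5, ~10, ~100 here):
# the only cure for shot noise is collecting, or averaging over, more photons.
```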

When you start from a tiny focused image (as happens with point & shoot cameras) and then magnify it by dozens of times, this inherent noise becomes far more noticeable:

Compact Camera Noise Sample

Small Sensors Struggle With Noise

In fact, the only way to reduce the speckles is to average them out over some larger area. However, the best method for doing this requires some consideration.

The most obvious solution is this: You decide what is the minimum spatial resolution needed for your uses (i.e., what pixel count), then simply make each pixel the largest area permissible. Bigger pixels = more photon averaging.

Easy.

Let’s recall that quite a nice 8 x 10″ print can be made from a 5 Mp image. The largest inkjet printers bought by ordinary citizens print at 13 inches wide; 9 Mp suffices for this, at least at arm’s length. And any viewing on a computer screen requires far fewer pixels still.

The corollary is that when a photographer does require more pixels (and a few do), you increase the sensor size, not shrink the pixels. For a given illumination level (a particular scene brightness and f/stop) the larger sensor will simply collect more photons in total—allowing better averaging of the photon noise.
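
Putting rough numbers on that (a sketch only; the photons-per-square-micron figure is an arbitrary stand-in, since the real value depends on scene brightness and exposure):

```python
# At a fixed illumination, the photons a pixel collects scale with its area.
PHOTONS_PER_SQ_UM = 400    # arbitrary illustrative figure for one exposure

for pitch_um in (1.5, 2.0, 6.0):     # compact-camera pixel ... DSLR-sized pixel
    photons = PHOTONS_PER_SQ_UM * pitch_um ** 2
    print(f"{pitch_um} um pixel: ~{photons:,.0f} photons, shot-noise SNR ~{photons ** 0.5:.0f}")
# Going from a 1.5 um to a 6 um pixel (16x the area) gives 16x the photons and
# 4x the SNR -- which is why "more pixels" should mean a bigger sensor, not smaller pixels.
```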

But say we take our original sensor area, then subdivide it into many smaller, but noisier, pixels. Their photon counts are bobbling all over the place now! The hope here is that later down the road, we can blend them in some useful way that reduces noise.

One brute-force method is just applying a small-radius blur to the more pixel-dense image. However this will certainly destroy detail too. It’s not clear what advantage this offers compared to starting from a crisp, lower-megapixel image (for one thing, the file size will be larger).

Today, the approach actually taken is to start with the noisier high-megapixel image, then run sophisticated image-processing routines on it. Theoretically, smart algorithms can enhance true detail, like edges; while smoothing shot noise in the areas that are deemed featureless.

This is done on every current digital camera. Yet it must be done much more aggressively when using tiny sensors, like those in compact models or phone-cams.

One argument is that by doing this, we’ve simply turned the matter into a software problem. Throw in Moore’s law, plus ever-more-clever programming, and we may get a better result than the big-pixel solution. I associate the name Nathan Myhrvold with this form of techno-optimism (e.g. here). Assuming the files were saved in raw format, you might even go back and improve photos shot in the past.

But it’s important to note the limits of this image-processing, as they apply to real cameras today.

Most inexpensive cameras do not give the option of saving raw sensor files. So before we actually see the image, the camera’s processor chip puts it through a series of steps:

Bayer demosaicing → Denoising & Sharpening → JPEG compression

Remember that each pixel on the sensor has a different color filter over it. The true color image must be reconstructed using some interpolation.
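
In its very simplest (bilinear) form, that interpolation looks something like the toy sketch below; real cameras use far more elaborate proprietary routines, so treat this purely as an illustration:

```python
import numpy as np

def bilinear_green(raw):
    """Fill in the green channel of an RGGB Bayer mosaic by averaging neighbours.

    `raw` is a 2-D array of sensor readings. Green sites keep their own value;
    red and blue sites take the mean of their four green neighbours.
    """
    green = raw.astype(float).copy()
    h, w = raw.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 0:   # a red or blue site in an RGGB layout
                green[y, x] = (raw[y - 1, x] + raw[y + 1, x] +
                               raw[y, x - 1] + raw[y, x + 1]) / 4.0
    return green
# Note that a randomly too-bright sample feeds into this averaging exactly the
# same way a genuine detail would -- the interpolation cannot tell them apart.
```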

The problem is that photon noise affects pixels randomly—without regard to their assigned color. If it happens (and statistically, it will) that several nearby “red” pixels end up too bright (because of random fluctuations), the camera can’t distinguish this from a true red detail within the subject. So, false rainbow blobs can propagate on scales much larger than individual pixels:

100% Color Noise Sample

100% Crop of Color Noise

The next problem is that de-noising and sharpening actually tug in opposite directions. So the camera must make an educated guess about what is a true edge, sharpen that, then blur the rest.

This works pretty well when the processor finds a crisp, high-contrast outline. But low-contrast details (which are very important to our subjective sense of texture) can simply be smudged away.

The result can be a very unnatural, “watercolors” effect. Even when somewhat sharp-edged, blobs of color will be nearly featureless inside those outlines.
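
In toy form, the in-camera logic amounts to something like this (purely illustrative, and nothing like real firmware, which also has to run fast on a tiny processor):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def toy_noise_reduction(img, edge_threshold=30.0):
    """Crude 'watercolor' noise reduction: keep strong edges, blur everything else."""
    img = np.asarray(img, dtype=float)
    gradient = np.hypot(sobel(img, axis=0), sobel(img, axis=1))   # local contrast
    smoothed = gaussian_filter(img, sigma=2.0)                    # heavy-handed blur
    is_edge = gradient > edge_threshold                           # the camera's "guess"
    return np.where(is_edge, img, smoothed)
# Anything below the threshold -- including fine, low-contrast texture --
# gets averaged into featureless blobs, while strong outlines stay crisp.
```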

Remember this?

Noise Reduction Watercolors

Impressionistic Noise Reduction

Or, combined with a bit more aggressive sharpening, you might get this,

Sharp Smudge Sample

Pastel Crayons This Time?

Clearly, the camera’s guesses about what is true detail can fail in many real-world situations.

There’s an excellent technical PDF from the DxO Labs researchers, discussing (and attempting to quantify) this degradation. Their research was originally oriented towards cell-phone cameras (where these issues are even more severe); but the principles apply to any small-sensor camera dependent on algorithmic signal recovery.

Remember that image processing done within the camera must trade off sophistication against speed and battery consumption. Otherwise, camera performance becomes unacceptable. And larger files tax the write-speed and picture capacity of memory cards; they also take longer to load and edit in our computers.

So there is still an argument for taking the conservative approach.

We can subdivide sensors into more numerous, smaller pixels. But we should stop at the point which is sufficient for our purposes, in order to minimize reliance on this complex, possibly-flawed software image wrangling.

And when aberrations & diffraction limit the pixel count which is actually useful, the argument becomes even stronger.

January and February are months when the air hangs thick with new-camera introductions.

DP Review went a bit lightheaded keeping track of them all; but now, they’ve updated their camera database to reflect the latest unveilings and announcements.

We’re also approaching this country’s wildest Lost Weekend of photo-equipment marketing, PMA 2010, which starts February 19th.

So, it’s the right moment to look our camera industry straight in the bleary eye, and ask the hard question. Are you on drugs?

Crack Detail

"I can stop adding megapixels any time"

Regular readers of this blog know my arguments well: Overdosing cameras with millions of teensy pixels is risky behavior—in fact, irrational and damaging.

Not unlike drug abuse.

But to survey the full breadth of this scourge, I’ve needed to pore over DP Review’s specs listings—noting the pixel density of every model introduced since January, 2008. (There were almost 400 in total.)

No doubt I’ve missed some models somewhere, or copied some numbers wrong. But I’ve made a sincere attempt to find out: Which brand has the worst megapixel-monkey on its back?

Here’s how it works (a toy version in code follows the lists below):

  • Camera models of 35 Mp/sq. cm or more (but less than 40) earn one crack pipe
  • Models having 40 Mp/sq. cm or above (but below 45) get two crack pipes
  • Any model with 45 Mp/sq. cm or “higher” is awarded the unprecedented three crack pipes.

But a camera maker can redeem themselves, somewhat. All I want to see is evidence they’re entering rehab and doing community service.

That is,

  • Any model with a pixel density of 5 to 15 Mp/sq. cm subtracts one crack pipe
  • A camera having 2.5 to 5 Mp/sq. cm knocks off two pipes
  • A model of less than 2.5 Mp/sq. cm expunges three whole pipes.

[Note: Currently, the last category only includes Nikon’s high-end D700 and D3s. The Micro Four Thirds models included are all a whisker over 5 Mp/sq. cm.]
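
(For the terminally curious, the tally boils down to a function like this; the thresholds are exactly the ones listed above, and anything between 15 and 35 Mp/sq. cm scores nothing either way.)

```python
def crack_pipes(megapixels, sensor_w_mm, sensor_h_mm):
    """Score one camera model by pixel density, in Mp per square centimetre."""
    density = megapixels / (sensor_w_mm * sensor_h_mm / 100.0)   # mm^2 -> cm^2
    if density >= 45:  return 3
    if density >= 40:  return 2
    if density >= 35:  return 1
    if density < 2.5:  return -3
    if density < 5:    return -2
    if density <= 15:  return -1
    return 0

# Example: a 14 Mp compact on a ~6.2 x 4.6 mm sensor lands around 49 Mp/sq. cm
print(crack_pipes(14, 6.2, 4.6))   # 3 crack pipes
```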

First, we must single out Sigma—Boy Scout among camera makers.

Better known for their lenses, they have a small lineup of cameras using the Foveon sensor, which is 20.7 x 13.8mm.

This makes them the only current camera manufacturer to be 100% CRACK FREE. We may find their products a bit geeky and lacking in social graces, but at least they’re leading the clean life.

But for the others, it’s a grim tragedy. In reverse order of crackheadedness, here is the ranking:

  • Ricoh: 2 crack pipes
  • Pentax: 12 crack pipes
  • Tied, with 24 crack pipes each: Sony and Kodak

I must interrupt here to mention that Sony’s crack score should have been 20 points higher—except that, like an agitated street person muttering “I’m getting my life together!”, Sony somehow introduced eleven different DSLR models in the past two years.

But I’m going to let that slide. Sony has at least admitted a problem. Their new (and unproven) detox plan involves a medication named “Exmor R,” and a risky procedure called “back illumination.”

Sadly, we all know how fragile recovery can be.

  • Okay, back to the rankings. Nikon: 25 crack pipes
  • Fujifilm: 31
  • Casio: 34
  • Canon: 37

(Canon does earn a special “we’re getting help” mention—for tapering off their S90 and G11 models to a slightly lower dose of 10 mg. Er… Mp.)

  • Samsung: 39

Samsung! Snap out of it! There’s still time to go home to your family, bringing more NX-mount cameras. I am speaking to you as a concerned friend.

And finally—we get to the two saddest cases in the whole megapixel ward.

Like many addicts, they always seemed able to hold it together in public. But the numbers don’t lie.

  • Yes, Olympus: 54 crack pipes

…and perhaps most shocking,

  • Panasonic: 67 crack pipes

67 Crack Pipes

This is Panasonic's Brain on Drugs

Tell me it’s not true. The two leaders of Micro Four Thirds? The upstanding citizens who gathered us all in the church basement, to spread the good news about large chips in compact cameras—living a lie?

It’s tragic. One day, you’re a respected member of your industry. Then suddenly, you’re passed out in a seedy motel, wearing nothing but a frayed terry-cloth robe, surrounded by crumpled marketing plans.

Camera makers: There is still time to clean up, and save yourselves.
