More Sensor Nonsense

March 23, 2010

I can’t think of any greater achievement in press-release puffery than having your claims uncritically repeated in the New York Times.

As many of you have heard, a company called InVisage has announced a “disruptive” improvement in imager sensitivity, through applying a film of quantum dot material. The story has been picked up by lots of photography blogs and websites, including DP Review.

But don’t throw your current camera in the trash quite yet. The Times quoted InVisage’s Jess Lee as saying, “we expect to start production 18 months from now”—with the first shipping sensors designed for phone-cam use.

InVisage Prototype

Sensitivity: Good. Hype: Bad

I have no way of knowing if InVisage’s claims will pan out in real-world products. But it’s interesting that a few people who work in the industry have skeptical things to say.

I do find it exaggerated to claim that the new technology is “four times” better than conventional sensors (95% versus 25% efficiency). Backside-illuminated sensors shipping today already achieve much higher sensitivity than that 25% baseline, and refinements to microlenses and fill factors are continuing.

However, one true advantage of the quantum-dot film is that incoming photons slam to a stop within a very shallow layer (just half a micron thick). This is in contrast to conventional photodiodes, where longer-wavelength (redder) photons might need to travel through 6-8 microns of silicon before generating an electron.

That difference might enable sensors without microlenses to absorb light efficiently even from very oblique angles. It would permit lens designs with shorter back focus (as with rangefinder cameras), and thus more compact cameras overall.

Kodak’s full-frame KAF-18500 CCD, used in the Leica M9, could only achieve the same trick by using special offset microlenses. (And if we are to believe this week’s DxO Mark report, that sensor may have compromised image quality in other ways.)

But I’m still slapping my head at the most ridiculous part of this whole story:

To give an example of what the “quantum film” technology would enable, Mr. Lee noted that we could have an iPhone camera with a 12-megapixel sensor.

Can I scream now? Is the highest goal of our civilization trying to cram more megapixels into a phone-cam? And WHY? But even if we did want it, the über-iPhone is just nonsense for other reasons.

As near as I can tell, an iPhone sensor is about 2.7 x 3.6 mm (a tiny camera module needs an extremely tiny chip). Space is so tight that a larger module would be unacceptable. So, to squish 12 Mp into the current sensor size, each pixel would need to be 0.9 microns wide!
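If you want to check my arithmetic, here’s the back-of-the-envelope version as a quick Python sketch. Keep in mind the 2.7 x 3.6 mm sensor size is my own estimate, not an official spec.

```python
# Rough pixel-pitch estimate for a hypothetical 12 Mp sensor of iPhone size.
# The 2.7 x 3.6 mm dimensions are my guess, not a published figure.
sensor_w_mm, sensor_h_mm = 3.6, 2.7
pixel_count = 12e6

sensor_area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
pixel_area_um2 = sensor_area_um2 / pixel_count
pixel_pitch_um = pixel_area_um2 ** 0.5      # assuming square pixels

print(f"pixel pitch: {pixel_pitch_um:.2f} microns")   # about 0.9
```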

An iPhone’s lens works at f/2.8. At that aperture, the size of the Airy disk (for green light) is 3.8 microns. This is the theoretical limit to sharpness, even if the lens is absolutely flawless and free from aberrations. At the micron scale, any image will be a mushy, diffraction-limited mess.
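That figure comes from the standard Airy-disk formula, d ≈ 2.44 × wavelength × f-number. Here’s the same calculation as a quick sketch, assuming 550 nm green light:

```python
# Diameter of the Airy disk (out to the first dark ring) for an ideal lens:
#   d = 2.44 * wavelength * f-number
wavelength_um = 0.55    # green light, 550 nm
f_number = 2.8

airy_diameter_um = 2.44 * wavelength_um * f_number
print(f"Airy disk: {airy_diameter_um:.1f} microns")    # about 3.8
# ...several times wider than the ~0.9-micron pixels computed above.
```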

Also, remember that noise is inherent in the randomness of photon arrival. No matter what technology is used, teensy sensors will always struggle against noise. (The current “solution” is processing the image into painterly color smudges.)

And the dynamic range of a sensor is directly related to pixel size. Micron-scale pixels would certainly give blank, blown-out highlights at the drop of a hat.
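To make that concrete, here’s a toy comparison. The full-well density and read-noise figures are ballpark assumptions of mine, purely for illustration: a pixel’s electron-storage capacity scales roughly with its area, and dynamic range is the ratio of that capacity to the noise floor.

```python
import math

# Toy dynamic-range estimate. Assumed figures, for illustration only:
# ~1500 electrons of full-well capacity per square micron of pixel area,
# and a 5-electron read-noise floor.
FULL_WELL_PER_UM2 = 1500
READ_NOISE_E = 5

def dynamic_range_stops(pixel_pitch_um):
    full_well = FULL_WELL_PER_UM2 * pixel_pitch_um ** 2
    return math.log2(full_well / READ_NOISE_E)

for pitch_um in (0.9, 2.0, 6.0):
    print(f"{pitch_um:.1f} micron pixels: ~{dynamic_range_stops(pitch_um):.1f} stops")
```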

But let’s be optimistic that, eventually, this technology will migrate to more sensible pixel sizes. Even if the sensitivity increase only turns out to be one f/stop or so, it would still be welcome. A boost like that could give a µ4/3-sized sensor very usable ISO-1600 performance.

But before we proclaim the revolution has arrived, we’ll need to wait for a few more answers about InVisage’s technology:

  • Is the quantum-dot film lightfast, or does it deteriorate over time?
  • Is there crosstalk/bleeding between adjacent pixels?
  • Is the sensitivity of the quantum-dot film flat across the visible spectrum, or significantly “peaked”? (This would affect color reproduction)
  • How reflective is the surface of the film? (Antireflection coatings could be used—but they affect sharpness and angular response)

So, stay tuned for more answers, maybe in the summer of 2011…

Some Theory About Noise

February 14, 2010

The first thing to understand about picture noise (aka grain, speckles) is that it’s already present in the optical image brought into focus on the sensor.

Even when you photograph something featureless and uniform like a blank sky, the light falling onto the sensor isn’t creamy and smooth, like mayonnaise. At microscopic scales, it’s lumpy & gritty.

This is because light consists of individual photons. They sprinkle across the sensor at somewhat random timings and spacings. And eventually you get down to the scale where one tiny area might receive no photons at all, even as its neighbor receives many.

Perhaps one quarter of the photons striking the sensor release an electron—which is stored, then counted by the camera after the exposure. This creates the brightness value recorded for each pixel. (There is also a bit of circuit noise sneaking in, mostly affecting the darkest parts of the image.)

But no matter how carefully a camera is constructed, it is subject to photon noise—sometimes called “shot noise.” You might also hear some murmurs about Poisson statistics, the math describing how this noise is distributed.
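If you’d like to see the Poisson behavior for yourself, here’s a small simulation sketch. The photon counts are arbitrary; the point is that the signal-to-noise ratio only grows as the square root of the count.

```python
import numpy as np

rng = np.random.default_rng(42)

# A perfectly uniform "blank sky," sampled at three different exposure levels.
# Each pixel's photon count is a Poisson random variable.
for mean_photons in (25, 400, 10_000):
    counts = rng.poisson(mean_photons, size=1_000_000)
    snr = counts.mean() / counts.std()
    print(f"mean = {mean_photons:>6}   measured SNR = {snr:7.1f}"
          f"   sqrt(mean) = {np.sqrt(mean_photons):7.1f}")
```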

When you start from a focused image that’s tiny (as happens with point & shoot cameras), then magnify it by dozens of times, this inherent noise becomes all the more noticeable:

Compact Camera Noise Sample

Small Sensors Struggle With Noise

In fact, the only way to reduce the speckles is to average them out over some larger area. However, the best method for doing this requires some consideration.

The most obvious solution is this: You decide what is the minimum spatial resolution needed for your uses (i.e., what pixel count), then simply make each pixel the largest area permissible. Bigger pixels = more photon averaging.

Easy.

Let’s recall that quite a nice 8 x 10″ print can be made from a 5 Mp image. The largest inkjet printers bought by ordinary citizens print at 13 inches wide; 9 Mp suffices for this, at least at arm’s length. And any viewing on a computer screen requires far fewer pixels still.

The corollary is that when a photographer does require more pixels (and a few do), you increase the sensor size, not shrink the pixels. For a given illumination level (a particular scene brightness and f/stop) the larger sensor will simply collect more photons in total—allowing better averaging of the photon noise.
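To put rough numbers on that, here’s a quick sketch. The sensor dimensions are just representative figures (the phone-cam size is my own estimate), and I’m assuming shot noise dominates, so image-wide signal-to-noise scales as the square root of the total photons collected.

```python
import numpy as np

# Total photon collection scales with sensor area (same scene, same f/stop,
# same exposure time). The sensor dimensions are representative examples only.
sensors_mm = {
    "phone-cam": (2.7, 3.6),
    "micro 4/3": (13.0, 17.3),
    "full frame": (24.0, 36.0),
}

phone_area = 2.7 * 3.6
for name, (h, w) in sensors_mm.items():
    ratio = (h * w) / phone_area
    print(f"{name:>11}: {ratio:6.1f}x the photons, "
          f"{np.sqrt(ratio):5.1f}x the image-wide SNR")
```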

But say we take our original sensor area, then subdivide it into many smaller, but noisier, pixels. Their photon counts are bobbling all over the place now! The hope here is that later down the road, we can blend them in some useful way that reduces noise.

One brute-force method is just applying a small-radius blur to the more pixel-dense image. However, this will certainly destroy detail too. It’s not clear what advantage this offers compared to starting from a crisp, lower-megapixel image (for one thing, the file size will be larger).
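Here’s a little sketch of that brute-force idea, using 2 x 2 block averaging in place of a blur. The relative noise does drop by about half, but only because we’ve thrown away three quarters of the pixels to get there.

```python
import numpy as np

rng = np.random.default_rng(0)

# A uniform gray patch captured by tiny pixels: ~100 photons each, Poisson noise.
small_pixels = rng.poisson(100, size=(1024, 1024)).astype(float)

# 2x2 binning: average each block of four tiny pixels into one bigger one.
binned = small_pixels.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print("relative noise, tiny pixels:", small_pixels.std() / small_pixels.mean())
print("relative noise, 2x2 binned :", binned.std() / binned.mean())
```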

Today, the approach actually taken is to start with the noisier high-megapixel image, then run sophisticated image-processing routines on it. Theoretically, smart algorithms can enhance true detail, like edges, while smoothing shot noise in the areas that are deemed featureless.

This is done on every current digital camera. Yet it must be done much more aggressively when using tiny sensors, like those in compact models or phone-cams.

One argument is that by doing this, we’ve simply turned the matter into a software problem. Throw in Moore’s law, plus ever-more-clever programming, and we may get a better result than the big-pixel solution. I associate the name Nathan Myhrvold with this form of techno-optimism (e.g. here). Assuming the files were saved in raw format, you might even go back and improve photos shot in the past.

But it’s important to note the limits of this image-processing, as they apply to real cameras today.

Most inexpensive cameras do not give the option of saving raw sensor files. So before we actually see the image, the camera’s processor chip puts it through a series of steps:

Bayer demosaicing → Denoising & Sharpening → JPEG compression

Remember that each pixel on the sensor has a different color filter over it. The true color image must be reconstructed using some interpolation.
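For the curious, the simplest version of that interpolation is bilinear demosaicing, sketched below for an RGGB Bayer layout. The layout is an assumption, and real cameras use considerably fancier methods.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Crude bilinear demosaic of a single 2-D RGGB Bayer mosaic."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # red sites
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1   # blue sites
    g_mask = 1 - r_mask - b_mask                        # green checkerboard

    # Interpolation kernels: each missing value becomes the average of its
    # nearest same-color neighbors.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    r = convolve(raw * r_mask, k_rb)
    g = convolve(raw * g_mask, k_g)
    b = convolve(raw * b_mask, k_rb)
    return np.dstack([r, g, b])
```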

The problem is that photon noise affects pixels randomly—without regard to their assigned color. If it happens (and statistically, it will) that several nearby “red” pixels end up too bright (because of random fluctuations), the camera can’t distinguish this from a true red detail within the subject. So, false rainbow blobs can propagate on scales much larger than individual pixels:

100% Color Noise Sample

100% Crop of Color Noise

The next problem is that de-noising and sharpening actually tug in opposite directions. So the camera must make an educated guess about what is a true edge, sharpen that, then blur the rest.
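As a crude illustration of that guess-then-process step, here’s a sketch: estimate edge strength from the gradient, sharpen the pixels that look like edges, and blur the rest. Real in-camera pipelines are far more elaborate and proprietary; the threshold and amounts here are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def guess_and_process(img, sigma=1.5, edge_threshold=20.0, sharpen_amount=1.0):
    """Sharpen where we think there's a true edge; smooth where we think it's flat."""
    img = img.astype(float)

    # Edge strength from the gradient magnitude.
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    edges = grad > edge_threshold

    blurred = gaussian_filter(img, sigma=sigma)
    sharpened = img + sharpen_amount * (img - blurred)   # simple unsharp mask

    return np.where(edges, sharpened, blurred)
```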

This works pretty well when the processor finds a crisp, high-contrast outline. But low-contrast details (which are very important to our subjective sense of texture) can simply be smudged away.

The result can be a very unnatural, “watercolors” effect. Even when somewhat sharp-edged, blobs of color will be nearly featureless inside those outlines.

Remember this?

Noise Reduction Watercolors

Impressionistic Noise Reduction

Or, combined with a bit more aggressive sharpening, you might get this,

Sharp Smudge Sample

Pastel Crayons This Time?

Clearly, the camera’s guesses about what is true detail can fail in many real-world situations.

There’s an excellent technical PDF from the DxO Labs researchers, discussing (and attempting to quantify) this degradation. Their research was originally oriented towards cell-phone cameras (where these issues are even more severe); but the principles apply to any small-sensor camera dependent on algorithmic signal recovery.

Remember that image processing done within the camera must trade off sophistication against speed and battery consumption. Otherwise, camera performance becomes unacceptable. And larger files tax the write-speed and picture capacity of memory cards; they also take longer to load and edit in our computers.

So there is still an argument for taking the conservative approach.

We can subdivide sensors into more numerous, smaller pixels. But we should stop at the point which is sufficient for our purposes, in order to minimize reliance on this complex, possibly-flawed software image wrangling.

And when aberrations & diffraction limit the pixel count which is actually useful, the argument becomes even stronger.

Some Very Nice Pixels

January 24, 2010

Let’s say you want to own the ultimate digital camera. You want the image quality to be absolutely state-of-the-art: something that can see even better than the human eye.

And you have a little money to spend. Like, 132 million dollars. So you’re free to really go crazy with the design. You can have as many pixels as you feel like!

You hire yourself some really smart engineers, and you go to work. So, what kind of camera specs do you end up with?

Well, the little gadget I’m referring to is the “Wide Field Camera 3.” It was installed during the Hubble Space Telescope repair mission last May (after a several-year delay, while the Space Shuttle fleet was grounded).

And the specs are kind of interesting. Does it boast a bazillion megapixels? It does not.

The larger of the two detectors, the wideband UVIS, includes 16.8 megapixels (4102 x 4096). Considering that back down here on earth, you can buy a 56 Mp digital back for a mere $30,000, perhaps that’s surprising.

The secret is that each pixel is really, really big: 15 microns wide. That gives each one quite a large area to absorb light; and it allows each pixel to store lots of electrons (for a wide dynamic range from darkest to brightest).

Each UVIS pixel is about 7 or 8 times the area of the ones in your typical consumer DSLR. But compared to mini point & shoots, it’s even more ridiculous. Their teensy pixels are about 1/100th the area.
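The area ratios are easy to check. The “typical” pixel pitches below are my own rough assumptions, not the specs of any particular model.

```python
# Pixel-area comparison. The 15-micron WFC3 UVIS pitch is as noted above;
# the DSLR and compact-camera pitches are rough, assumed figures.
wfc3_um, dslr_um, compact_um = 15.0, 5.5, 1.5

wfc3_area = wfc3_um ** 2                                    # 225 square microns
print("vs. typical DSLR   :", wfc3_area / dslr_um ** 2)     # roughly 7-8x the area
print("vs. typical compact:", wfc3_area / compact_um ** 2)  # roughly 100x the area
```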

Anyway, the WFC3 does take pretty nice pictures:

Sample from Hubble Wide Field Camera 3

From the Hubble Telescope's Wide Field Camera 3

Even when price is no object, remember it is still fiendishly hard to make a large CCD completely free from flaws. Instead of one big chip, WFC3 needed to use two, pushed side-by-side.

And even the chips carefully selected to fly in the UVIS instrument have some pretty obvious scars (er, “prominent local structures”) that must be subtracted out of the final image.

Anyway, if you decide you can’t live without this ultimate in digital cameras, I do have a couple of cautions:

First, it’s about the size of a piano. Second, the lens will cost extra.

And even though delivery of the camera was a little slow, it wasn’t exactly cheap. You’d better budget a little more for shipping: 1.1 billion dollars.

At that price, you might as well go back to shooting film.