More Sensor Nonsense

March 23, 2010

I can’t think of any greater achievement in press-release puffery than having your claims uncritically repeated in the New York Times.

As many of you have heard, a company called InVisage has announced a “disruptive” improvement in imager sensitivity, achieved by applying a film of quantum-dot material. The story has been picked up by many photography blogs and websites, including DP Review.

But don’t throw your current camera in the trash quite yet. The Times quoted InVisage’s Jess Lee as saying, “we expect to start production 18 months from now”—with the first shipping sensors designed for phone-cam use.

InVisage Prototype

Sensitivity: Good. Hype: Bad

I have no way of knowing if InVisage’s claims will pan out in real-world products. But it’s interesting that a few people who work in the industry have skeptical things to say.

I do find the claim that the new technology is “four times” better than conventional sensors (95% versus 25% quantum efficiency) exaggerated; the comparison rests on a pessimistic 25% baseline. Backside-illuminated sensors shipping today already have much higher sensitivity, and refinements to microlenses and fill factors continue.
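
To put those percentages in photographic terms, here is a quick back-of-the-envelope conversion to f/stops. The 50% figure for a modern backside-illuminated sensor is my own assumption, not a published number:

```python
import math

# The "4x" claim compares quantum efficiencies: 95% vs 25%.
# In f/stops (each stop is a doubling of captured light):
print(math.log2(95 / 25))  # ~1.9 stops, if the 25% baseline is fair
print(math.log2(95 / 50))  # ~0.9 stops vs an assumed 50% BSI sensor,
                           # which is closer to "one f/stop or so"
```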

However, one true advantage of the quantum-dot film is that incoming photons slam to a stop within a very shallow layer (just half a micron thick). This contrasts with conventional photodiodes, where longer-wavelength (redder) photons may need to travel through 6-8 microns of silicon before generating an electron.
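
For the curious, here is a minimal sketch of that absorption argument using the Beer–Lambert law. The absorption coefficients are illustrative values I have assumed, not InVisage’s published figures:

```python
import math

def absorbed_fraction(alpha_per_um, thickness_um):
    # Beer-Lambert law: fraction of incoming photons absorbed in the layer
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

# Assumed, illustrative coefficients: crystalline silicon absorbs red light
# with roughly a 3 um 1/e depth; a strongly absorbing quantum-dot film
# might manage something like a 0.1 um 1/e depth.
ALPHA_SI_RED = 1 / 3.0    # per micron (assumption)
ALPHA_QD_RED = 1 / 0.1    # per micron (assumption)

print(f"Silicon, 6 um deep:   {absorbed_fraction(ALPHA_SI_RED, 6.0):.0%}")
print(f"QD film, 0.5 um deep: {absorbed_fraction(ALPHA_QD_RED, 0.5):.0%}")
```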

That difference might enable sensors without microlenses to absorb light efficiently even at very oblique angles. It would also permit lens designs with a shorter back focus (as in rangefinder cameras), and thus more compact cameras overall.

Kodak’s full-frame KAF-18500 CCD, used in the Leica M9, could only achieve the same trick by using special offset microlenses. (And if we are to believe this week’s DxO Mark report, that sensor may have compromised image quality in other ways.)

But I’m still slapping my head at the most ridiculous part of this whole story:

To give an example of what the “quantum film” technology would enable, Mr. Lee noted that we could have an iPhone camera with a 12-megapixel sensor.

Can I scream now? Is the highest goal of our civilization to cram ever more megapixels into a phone-cam? And WHY? But even if we wanted this über-iPhone, it would be nonsense for other reasons.

As near as I can tell, an iPhone sensor is about 2.7 × 3.6 mm (a tiny camera module needs an extremely tiny chip). Space is so tight that a larger module would be unacceptable. So, to squish 12 Mp into the current sensor size, each pixel would need to be 0.9 microns wide!
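
The arithmetic is easy to check; the sensor dimensions below are my rough estimate from above:

```python
import math

width_mm, height_mm = 3.6, 2.7   # rough iPhone sensor size (assumed)
pixels = 12e6                    # the hypothetical 12-megapixel count

area_um2 = (width_mm * 1e3) * (height_mm * 1e3)  # sensor area in square microns
pitch_um = math.sqrt(area_um2 / pixels)          # side length of one square pixel
print(f"{pitch_um:.2f} um")                      # -> 0.90 um
```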

An iPhone’s lens works at f/2.8. At that aperture, the diameter of the Airy disk (for green light, ~550 nm) is 3.8 microns. That is the theoretical limit on sharpness even if the lens is absolutely flawless and free of aberrations; at a 0.9-micron pixel pitch, any image will be a mushy, diffraction-limited mess.
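
That 3.8-micron figure comes from the standard Airy disk formula, diameter = 2.44 λN. A quick check:

```python
wavelength_um = 0.55  # green light, ~550 nm
f_number = 2.8        # the iPhone lens aperture cited above

airy_diameter_um = 2.44 * wavelength_um * f_number
print(f"{airy_diameter_um:.1f} um")  # -> 3.8 um, over four times the 0.9 um pitch
```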

Also, remember that noise is inherent in the randomness of photon arrival: photon counts follow Poisson statistics, so shot noise is unavoidable. No matter what technology is used, teensy sensors will always struggle against noise. (The current “solution” is processing the image into painterly color smudges.)
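
Because photon arrival is Poisson-distributed, the signal-to-noise ratio can never exceed the square root of the photon count. A rough sketch, with photon counts invented purely for illustration (capture scales with pixel area, all else being equal):

```python
import math

def shot_noise_snr(photons):
    # Poisson statistics: signal N, noise sqrt(N), so SNR = sqrt(N)
    return math.sqrt(photons)

# Photon counts below are assumed for illustration, not measured values.
for pitch_um, photons in [(0.9, 400), (2.0, 2000), (6.8, 23000)]:
    print(f"{pitch_um} um pixel, {photons} photons: SNR ~ {shot_noise_snr(photons):.0f}")
```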

And a sensor’s dynamic range is directly tied to pixel size, because full-well capacity shrinks along with pixel area. Micron-scale pixels would certainly give blank, blown-out highlights at the drop of a hat.
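
A common engineering measure of dynamic range is the ratio of full-well capacity to read noise, expressed in stops. The electron counts below are illustrative assumptions, not measurements of any real sensor:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    # Engineering DR: brightest recordable signal over the noise floor
    return math.log2(full_well_e / read_noise_e)

# Assumed values: full-well capacity roughly tracks pixel area, while
# read noise stays in the same few-electron range regardless of size.
print(f"0.9 um pixel: {dynamic_range_stops(1500, 5):.1f} stops")
print(f"6.8 um pixel: {dynamic_range_stops(60000, 5):.1f} stops")
```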

But let’s be optimistic that, eventually, this technology will migrate to more sensible pixel sizes. Even if the sensitivity increase turns out to be only one f/stop or so, it would still be welcome. A boost like that could give a µ4/3-sized sensor very usable ISO 1600 performance.

But before we proclaim the revolution has arrived, we’ll need to wait for a few more answers about InVisage’s technology:

  • Is the quantum-dot film lightfast, or does it deteriorate over time?
  • Is there crosstalk/bleeding between adjacent pixels?
  • Is the sensitivity of the quantum-dot film flat across the visible spectrum, or significantly “peaked”? (This would affect color reproduction.)
  • How reflective is the surface of the film? (Antireflection coatings could be used, but they affect sharpness and angular response.)

So, stay tuned for more answers, maybe in the summer of 2011…


7 Responses to “More Sensor Nonsense”

  1. Chippets Says:

    I can envisage the marketing vp demanding “but how does this product increase cellphone pixel counts?!” and the engineers thinking “how can I get this guy to leave me alone!”

  2. petavoxel Says:

    Thanks to (iPhone photographer) Chippets for pointing out that NYT article to me!

  3. petavoxel Says:

    Most of the sensor-designer jargon goes over my head… But for anyone interested, Image Sensors World has just posted a bit more analysis of the InVisage technology.


  4. Glad to see your response to this announcement. When I saw this announced a few days ago, my first response was to ask the following question: I wonder what Petavoxel will have to say about this?

  5. Huggs Says:

    “The current ‘solution’ is processing the image into painterly color smudges.”

    So the iPhone is a $200 PS filter when it comes to taking pics? 😀

  6. Jon Rista Says:

    Excellent writeup! Thanks for the logical approach. We need more of this in the tech journalism world.


