If you’re interested in a behind-the-scenes peek into the imaging-chip industry, check out the blog “Image Sensors World.”

Much of this revolves around cell-phone cameras, which today are by far the largest consumer of imaging chips. And that’s a market where the drive for miniaturization is even more extreme than with point & shoot cameras. For a phone-cam to boast 2 megapixels, 4 megapixels, or more, each pixel must be tiny.

And it’s finally happened: A company called OmniVision has introduced the industry’s first 1.1 micron pixel. That’s about 60% of the area of today’s typical point & shoot pixels.

At that scale, the light-gathering area of each pixel is so minuscule that back-side illumination practically becomes mandatory. The reasons are well explained in OmniVision’s “technology backgrounder” PDF.

Back Side Illumination

OmniVision Explains Back Side Illumination

This document’s introduction says,

“Evidently, pixels are getting close to some fundamental physical size limits. With the development of smaller pixels, engineers are asked to pack in as many pixels as possible, often sacrificing image quality.”

Which is an amusingly candid thing to say—considering that they are selling the aforementioned chips packed with “as many pixels as possible.”

What are these “fundamental limits”? Strangely, OmniVision’s document never once mentions the word “diffraction.” But as I’ve sputtered about before, with pixels the size of bacteria, diffraction becomes a serious limitation.

Because of light’s wavelike nature, even an ideal, flawless lens cannot focus light to a perfect point. Instead, you get a microscopic fuzzy blob called the Airy disk.

Now, calling it a “disk” is slightly deceptive: It is significantly brighter in the center than at the edge. Thus, there is still some information to extract by having pixels smaller than the Airy disk. But by the time the Airy disk covers many pixels, no further detail is gained by “packing in” additional ones.

Our eyes are most sensitive to light in the color green. For this wavelength, the Airy disk diameter in microns is the f/ratio times 1.35. (In practice, lens aberrations will make the blur spot larger than this diffraction limit.)
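
(In case you’re wondering where that 1.35 comes from: the standard formula for the first-minimum diameter of the Airy pattern is d ≈ 2.44 × λ × N, where λ is the wavelength of the light and N is the f/number. Plugging in λ ≈ 0.55 microns for green light gives d ≈ 1.34 × N microns, essentially the factor quoted here.)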

But even using a perfect lens that is diffraction-limited at f/2.3, the Airy disk would cover four 1.1 micron pixels.

Airy Disk versus Pixels

Pixels much smaller than the Airy Disk add no detail

A perfect lens working at f/3.5 (which is more realistic for most zooms) will have an Airy disk covering nine pixels of 1.1 micron width. This is one of the “fundamental physical size limits” mentioned in OmniVision’s document.
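
If you want to play with these numbers yourself, here is a minimal Python sketch of the same arithmetic; the green-light approximation and the 1.1 micron pitch are the only inputs, both taken from above.

    # Rough Airy-disk arithmetic for green light, using the approximation above:
    # disk diameter ~ 2.44 * wavelength * f-number, with wavelength ~ 0.55 microns.
    def airy_diameter_um(f_number, wavelength_um=0.55):
        """Approximate diameter of the Airy disk's first minimum, in microns."""
        return 2.44 * wavelength_um * f_number

    pixel_pitch_um = 1.1  # OmniVision's new pixel size

    for f_number in (2.3, 3.5):
        d = airy_diameter_um(f_number)
        print(f"f/{f_number}: Airy disk ~{d:.1f} um, "
              f"~{d / pixel_pitch_um:.1f} pixel widths across")

A disk spanning nearly three pixel widths is wide enough to swallow a 2 x 2 block of pixels (2.2 microns on a side, about 3.1 microns across the diagonal); at more than four widths it swallows a 3 x 3 block. That is where the four- and nine-pixel figures come from.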

Manufacturing a back-illuminated chip is quite complex. And for OmniVision to be able to crank them out in quantity is a technological tour de force. As I wrote earlier, there are still a few tweaks left to make imaging chips more sensitive per unit area; this is one of them.

Perhaps this helps explain another curiously candid statement I saw recently.  Sony executive Masashi “Tiger” Imamura was discussing the “megapixel race” in a PMA interview with Imaging Resource. And he said,

“…making the pixel smaller on the imager, requires a lot of new technology development. […] So, as somebody said, the race was not good for the customers, but on the other hand, good for us to develop the technologies. Do you understand?”


DIY Resolution Target

February 2, 2010

I know you’re all pacing a groove in the carpet, waiting for hot news about this month’s upcoming micro Four Thirds cameras. I am too.

But to give you something to occupy yourself in the meantime, here’s a handy little test target. It’s useful for checking how closely a camera’s resolution approaches the spacing of its sensor pixels:

Resolution Limit Target

Test Target for Resolution Limit

Download the PDF file here, and print it out at full size. It’s a 600 ppi bitmap, and each tile should measure one inch square.

The numbers give the divisions per inch—that includes both the black and white stripes.
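
If you’d rather roll your own stripes (or just see how simple the pattern is), here is a rough sketch for generating one tile; it assumes numpy and Pillow are installed, and the 600 ppi figure comes from the PDF above. The real target also carries labels and other stripe orientations, but the stripes themselves are just this.

    # Generate one 1-inch-square tile of vertical stripes at a given
    # "divisions per inch" (counting both the black and the white stripes).
    import numpy as np
    from PIL import Image

    PPI = 600  # pixels per inch, matching the downloadable target

    def stripe_tile(divisions_per_inch, size_px=PPI):
        stripe_width = size_px / divisions_per_inch        # pixels per stripe
        stripe_index = np.arange(size_px) // stripe_width  # which stripe each column falls in
        row = np.where(stripe_index.astype(int) % 2 == 0, 0, 255).astype(np.uint8)
        return np.tile(row, (size_px, 1))                  # repeat the row down the tile

    # e.g. the 40-divisions-per-inch patch; print it at exactly 600 ppi
    Image.fromarray(stripe_tile(40)).save("stripes_40.png")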

Now, when the target is photographed at a certain magnification, one of the stripe spacings will exactly match the camera’s pixel pitch. Ideally, zooming all the way into the image should then show one row of black pixels, then a row of white pixels, and so on.

Let’s say you have an 8 megapixel camera, which creates 3280 x 2460 pixel images. To match up the 40-lines/inch sample to your pixel spacing, divide 3280 by 40; you get 82 inches.
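
In code, that is just one division (using the hypothetical 8-megapixel numbers above):

    # At what frame width does a given stripe spacing land one stripe per sensor pixel?
    def field_width_inches(image_width_px, divisions_per_inch):
        # One stripe per pixel means image_width_px stripes across the frame,
        # and the target packs divisions_per_inch stripes into each inch of wall.
        return image_width_px / divisions_per_inch

    print(field_width_inches(3280, 40))  # 82.0 inches, as above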

On a nearby wall, put up two marks 82″ apart (using tape, post-its, etc.). Set up the camera and zoom/move until the picture width exactly matches the marks. (Note that many viewfinders show slightly less than the true image size).

Tape up the test target near the center of the frame, where lens aberrations ought to be lowest. The setup looks something like this:

Resolution Test Setup

Field of Measured Width, Target

To make your test more reliable, be sure to:

  • use the highest JPEG quality setting
  • put the camera on a tripod
  • repeat the shot a few times, and select the sharpest
  • use the lowest available ISO setting
  • check that other settings (sharpening, etc.) are typical of your use

When you take a magnified look at the resulting image, you may see something resembling this:

Resolution Check Results

200% View of Resolution-Check Image

The 30-line sample looks a little lumpy; but it is actually resolved (you can count the correct number of lines). Not too surprisingly, the 50-line sample is not resolved—although you do get a hint of the line orientations.

But most interesting is the 40-line sample. Ideally, you’d see black & white lines exactly matching the pixel spacing. But because of the Bayer demosaicing, some decidedly funky things have happened. (Read a good explanation here about how demosaicing works.)

The target lines aren’t resolved—and they’ve actually created some false colors and textures.
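
To see in miniature why that happens, here is a toy Python sketch. This is not any camera’s real demosaicing pipeline, just the crudest possible stand-in: black-and-white stripes at exactly one stripe per pixel, sampled through an assumed RGGB Bayer layout and “reconstructed” by averaging each channel’s own samples.

    import numpy as np

    N = 8
    scene = np.tile([255.0, 0.0], N // 2)  # alternating white/black columns, one stripe per pixel
    scene = np.tile(scene, (N, 1))         # N x N test patch

    # Assumed RGGB Bayer layout: which color each photosite actually records
    rows, cols = np.indices((N, N))
    is_red = (rows % 2 == 0) & (cols % 2 == 0)
    is_blue = (rows % 2 == 1) & (cols % 2 == 1)
    is_green = ~(is_red | is_blue)

    # Crudest possible "demosaic": fill each channel with the mean of its own samples
    r = scene[is_red].mean()    # every red photosite lands on a white stripe -> 255
    g = scene[is_green].mean()  # green photosites see half white, half black -> ~128
    b = scene[is_blue].mean()   # every blue photosite lands on a black stripe -> 0

    print(f"reconstructed color ~ R={r:.0f}, G={g:.0f}, B={b:.0f}")

Every red photosite happens to land on a white stripe and every blue one on a black stripe, so the black-and-white stripes come back as a flat orange-ish tint with no stripe detail at all. A real demosaicing algorithm is far cleverer than this, but the underlying ambiguity is the same, and false color is the result.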

Now, more natural, irregular subjects won’t show artifacts as alarming as these. And notice that the hairline around the number squares remains faintly visible, even though it’s much narrower than one pixel.

But down at the individual-pixel level, sensor resolution can become a bit shaky. Check for yourself and you might be surprised.

How Big Are My Pixels?

January 21, 2010

In the firestorm of comments about the Great Megapixel Swindle,  a couple of questions kept coming up: “Instead of megapixels, what should I be looking at? And how do I even know what chip size a camera has?”

Well, cameramakers designate chip sizes with an almost incomprehensible naming system (inherited from video tubes, if you really must know), using numbers like 1/2.3″. In fact, if we don our conspiracy tinfoil hats, it’s almost as if they’re deliberately making it hard to understand the true size.

Thankfully there’s a table decoding various sensor formats here. (Though that page has grown a little dated: There are no longer any 2/3″-sensor models on the market, sadly.)

But the number I really care about is, what is the size of the pixels? Yes, new technology might still come up with a few sensitivity-enhancing tweaks. But loosely speaking, the bigger each pixel is, the better.

Pixel width is quoted in microns—sometimes abbreviated µm or um. But it’s not often listed directly in camera specs.

But many in-depth review sites like DPReview will give the “pixel density” in their model listings. Note the ridiculous jump from the consumer point & shoots (between 35 and 50 megapixels per square centimeter) to the serious DSLRs (1.4 to 3.3).

Might this tell us something?

So as a handy conversion reference, here’s how to translate some of those density numbers into actual pixel sizes* (the arithmetic is sketched in code just after the list):

  • 50 Mp/sq. cm —> 1.4 micron pixels [e.g. 14-megapixel compacts]
  • 35 Mp/sq. cm —> 1.7 micron pixels [10-megapixel compacts]
  • 24 Mp/sq. cm —> 2.0 micron pixels [“enthusiast” compacts, e.g. Panasonic LX3]
  • 16 Mp/sq. cm —> 2.5 micron pixels [Fujifilm F31fd, circa 2007]
  • 5 Mp/sq. cm —> 4.3 micron pixels [new micro Four Thirds models]
  • 3.3 Mp/sq. cm —> 5.5 micron pixels [typical APS-C sensor DSLR]
  • 1.4 Mp/sq. cm —> 8.5 micron pixels [professional Nikon DSLR]
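
The arithmetic behind those conversions is simple enough to sketch in a few lines of Python; the density figures are the ones quoted above, and the results are approximate.

    # Convert pixel density (megapixels per square centimeter) to pixel pitch.
    # 1 sq. cm = 1e8 square microns, so each pixel occupies 1e8 / (Mp/sq. cm * 1e6) sq. microns.
    import math

    def pitch_um(mp_per_sq_cm):
        """Approximate pixel pitch in microns for a given density in Mp per sq. cm."""
        return math.sqrt(1e8 / (mp_per_sq_cm * 1e6))

    for density in (50, 35, 24, 16, 5, 3.3, 1.4):
        print(f"{density:>4} Mp/sq. cm -> {pitch_um(density):.1f} micron pixels")

(The pure conversion gives about 4.5 microns for 5 Mp/sq. cm; the 4.3 figure listed above presumably comes from actual micro Four Thirds chip dimensions rather than the rounded density number.)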

Now, for the reasons I’ve vented about before, the smallest pixel that makes any sense to me is about 2 microns across. Making cameras pocketable dictates smaller sensor sizes; but unless the chip is under 24 Mp/sq. cm, you’ll definitely compromise low-light capability.

But the real leap in quality comes when you drop to “single digits” in pixel density. Even compared to enthusiast compacts, those DSLR-style pixels have 5 to 8 times the light-gathering surface. That really makes a difference.

What this world needs badly is more little cameras with big pixels. I hope we get some soon.

*Note I’m really quoting the “pixel pitch.” Each actual pixel loses a bit of light-gathering area to its wiring traces. But the microlenses on top give nearly 100% coverage, so the pitch is really the relevant width in terms of light-gathering area.

There is a great article at cambridgeincolour.com about the role diffraction plays in digital-camera resolution.

The issue is that at microscopic scales, the wavelike nature of light makes it act in a slightly “squishy” way. Points of light brought to focus by a lens are smeared out by a certain irreducible amount—even if all lens aberrations are perfectly corrected.

Instead of a sharp pinpoint, light is actually focused into a fuzzy bulls-eye pattern. Its bright center is named the Airy disk, after the British scientist who first described it.

Interestingly, the diameter of the Airy disk is unaffected by lens focal length or image size; it depends only on the f/ratio of the lens. As you stop down the aperture, a bigger fraction of the light fans outward from its intended path, so the Airy disk blur becomes wider.

With lens aberrations, the opposite is true: they create the most blur at the widest apertures. On stopping down, sharpness improves.

So most film-camera lenses give their sharpest images at the middle of the f/stop range—the “sweet spot” where the combined effects from diffraction and lens aberrations are lowest. That’s one way to interpret the old photography rule, “f/8 and be there.”

Anyway, it’s easy to figure out the Airy disk size. The diameter in microns is about 1.35 times the f/number (using the green wavelength our eyes see most brightly). So, for example, the Airy disk at f/4 is 5.4 microns across.

The shocking thing few camera-buyers realize is that these fuzzy blobs are often larger than the individual pixels in a digital camera sensor.

Airy Disk versus Pixels

Pixels much smaller than the Airy Disk add no detail

The problem is most egregious in the world of point & shoots. Everyone seems to want the highest possible megapixels, in a camera the size of a deck of cards. There’s no way to do this without making each pixel extremely tiny. While the pixels in a good DSLR sensor might be 5 microns wide, the latest megapixel-mad point & shoots shrink each one to 1.5 microns or less. You start to see the problem.

We need to be a little careful about relating Airy disk size to pixel size, though. Sensor pixels have a Bayer pattern of color filters over them; and the final RGB image pixels are the result of a demosaicing algorithm. Also, every digital camera applies some amount of sharpening. This can, to some extent, counteract the diffraction blur.

But you can’t generate detail that was never recorded to begin with.

My assumption is simply that when the Airy disk fully covers four sensor pixels (as shown above), you have reached the point where diffraction makes additional pixels useless—no additional detail can be extracted. (This is a more generous criterion than many other folks use.)

Let’s consider a typical point & shoot. Although its lens might open to f/2.8 at the widest zoom setting, at a “normal” focal length the maximum aperture is more like f/3.7. At this f/stop, the Airy disk is 5 microns across; it would fully illuminate four pixels of 1.7 micron width.

So how many megapixels could you get, if a single pixel is 1.7 microns?

Take a typical P&S chip size of 5.9 x 4.4 mm (a size better known by the cryptic designation 1/2.3″). At 3470 x 2603 pixels, you’d have a 9 megapixel camera.
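
If you want to redo that estimate with different assumptions, the back-of-the-envelope version is easy; the chip dimensions and pixel sizes below are the ones quoted in the text.

    # Megapixel count for a given chip size and pixel pitch.
    def megapixels(chip_w_mm, chip_h_mm, pitch_um):
        pixels_wide = chip_w_mm * 1000 / pitch_um
        pixels_high = chip_h_mm * 1000 / pitch_um
        return pixels_wide * pixels_high / 1e6

    # A 1/2.3" chip (5.9 x 4.4 mm) with the diffraction-limited 1.7 micron pixels:
    print(f"{megapixels(5.9, 4.4, 1.7):.1f} Mp")  # about 9 Mp
    # The same chip with 2 micron pixels (see below):
    print(f"{megapixels(5.9, 4.4, 2.0):.1f} Mp")  # about 6.5 Mp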

Adding more pixels will not capture more detail. Neither will improved chip technology—we’ve hit a fundamental limit of optics.

Remember, this is all at the lens’s widest aperture (i.e., the one giving the poorest lens performance). As you stop down from there, the diffraction just gets worse.

Yet today’s models continue their mad race to ever-higher megapixel counts. Ten, twelve—now even 14 Mp are being sold.

This is where I start using the word “fraud.” Customers are being sold on these higher numbers with the implication it will make their photos better. This is simply a lie. All the higher megapixels deliver is needlessly bloated file sizes.

People forget that “full” HDTV is only 2 megapixels (1920 x 1080). Or that a 6 Mp camera can make a fine 8″ x 10″ print. A camera with 2 micron pixels is just about the limit if you want to be able to stop down the lens at all. That means staying under 7 Mp, given typical point & shoot chip dimensions.

And the more important point is this: Shoppers shouldn’t give their money to companies who lie to them.