If you put any faith in DxO Mark scores (and I do understand their limitations), today’s best $500 DSLRs make better images than all of the sub-$4000 cameras sold until 2008. Largely that’s thanks to some good recent APS-C sensors from Sony.

So I’m not sympathetic towards certain Pentax users, who bleat that they can’t make adequate photographs—not until the company delivers a DSLR with a 24×36 mm sensor. (With pre-Photokina Pentax rumors swirling, the issue is once again a hot topic for discussion.)

Full-Frame Pentax?

Yes, many long-time Pentaxians do have investments in full-frame “FA” lenses, ones that could cover the format.

But keep in mind, over the history of photography there have been many camera mounts which died out entirely—obsoleting numerous fine lenses. How about Contax/Yashica mount? Minolta MD, or Canon FD? (Or even earlier systems, such as Exakta or the superb Zeiss Contarex?)

By contrast, any Pentax lens made since 1983 can at least be used on a modern DSLR—metering automatically, and with the benefit of built-in shake reduction. So the handwringing that some great lenses have been “orphaned” by Pentax can get a bit exaggerated.

Some point out hopefully that not long ago, Pentax introduced new telephoto lenses bearing the FA designation. Ergo, full-frame is coming!

But truthfully, no decent lens design even breaks a sweat in covering an image circle much smaller than its focal length. With a telephoto, full-frame coverage is basically “free”—so why not go ahead and use the FA labeling? It allowed Pentax to finesse the format question for a while longer; and maybe even sold a couple of extra lenses to Pentax film-body users.

I do agree with the complaint that the viewfinders of most consumer DSLRs are puny and unsatisfying. The view through the eyepiece of a full-frame body is noticeably more generous.

However, the electronic viewfinder of an Olympus E-P2 neatly solves this problem too—at a price savings of about $1400 over today’s cheapest full-frame DSLR. And the EVIL wave is only gathering momentum.

The perpetual cry from full-frame enthusiasts is that Moore’s Law will eventually slash 24x36mm sensor pricing. To me, this seems like a wishful misunderstanding of the facts.

The shrinking of circuit sizes which permits faster processors and denser memory chips is irrelevant to sensor design—the overall chip dimensions are fixed, and the circuitry features are already as small as they need to be.

Also, figuring the costs of CMOS chip production is not entirely straightforward. It costs money to research and develop significant new improvements over prior sensor generations; it’s expensive to create the photolithography masks for all the circuit layers. Then, remember that all these overhead costs must be recouped over only two, perhaps three years of sales. After that, your sensor will be eclipsed by later and even whizzier designs.

Thus, there is more to a sensor price than just the production-line cost; it also depends on the chip quantities sold. And full-frame DSLRs have never been huge sellers.

If APS-C sensors prove entirely satisfactory for 95% of typical photographs (counting both enthusiasts and professionals), a vicious circle results. With no mass-market camera using a full-frame sensor, volumes stay low, prices high. But with sensor prices high, it’s hard to create an appealing camera at a mass-market price point.

Furthermore, let’s consider the few users who would actually benefit from greater sensor area.

For the few who find 12 megapixels inadequate, nudging the number up to 24 Mp is not the dramatic difference you might imagine. To double your linear resolution, you must quadruple the number of pixels. Resolution-hounds would do better to look to medium format cameras, with sensors of 40 Mp and up—which, conveniently, seem to be dropping towards the $10,000 mark with the release of Pentax’s 645D.
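To put a number on that doubling-versus-quadrupling point, here’s a trivial back-of-envelope sketch in Python:

```python
import math

# Doubling the pixel count only scales linear resolution by sqrt(2).
for megapixels in (24, 48):
    gain = math.sqrt(megapixels / 12)
    print(f"12 -> {megapixels} Mp: {gain:.2f}x more linear resolution")
```

Going from 12 to 24 Mp buys you about 41% more linear detail; you need 48 Mp to truly double it.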

The last full-frame holdouts are those who need extreme high-ISO potential. There’s no doubt that the 12Mp full-frame Nikon D3s makes an astounding showing here, with high-ISO performance that’s a solid 2 f/stops better than any APS-C camera. This is a legitimate advantage of class-leading 24×36 mm sensors.

Yet aside from bragging rights, we have to ask how many great photographs truly require ISO 6,400 and above. ISO 1600 already lets us shoot f/1.7 at 1/30th of a second under some pretty gloomy lighting—like the illumination a computer screen casts in a darkened room.
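Here’s a rough sanity check of that claim, using standard exposure-value arithmetic and the usual rule-of-thumb conversion to lux—my own back-of-envelope, so take the exact figure loosely:

```python
import math

# How dim is "f/1.7, 1/30 s at ISO 1600"?
# EV = log2(N^2 / t); referring it to ISO 100 and using the common
# approximation of ~2.5 lux per unit of 2^EV100 gives a ballpark illuminance.

N, t, iso = 1.7, 1/30, 1600

ev_at_iso = math.log2(N**2 / t)           # exposure value at ISO 1600
ev100 = ev_at_iso - math.log2(iso / 100)  # ~2.4 EV referred to ISO 100
approx_lux = 2.5 * 2**ev100               # roughly 14 lux: a dim, screen-lit room

print(f"EV100 ~ {ev100:.1f}, about {approx_lux:.0f} lux")
```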

A day may come when sensor technology has fully matured, and every possible sensitivity tweak has been found. At that point, a particular sensor generation might hang around for a decade or more. So might those long production runs permit much lower unit costs for a full-frame sensor? There will still be a full-frame surcharge simply from the greater surface area and lower chip yields.

But who knows, perhaps it’s possible? By then we may have 40 Mp, 24×36 mm chips that are our new “medium format.”

More Sensor Nonsense

March 23, 2010

I can’t think of any greater achievement in press-release puffery than having your claims uncritically repeated in the New York Times.

As many of you have heard, a company called InVisage has announced a “disruptive” improvement in imager sensitivity, through applying a film of quantum dot material. The story has been picked up by lots of photography blogs and websites, including DP Review.

But don’t throw your current camera in the trash quite yet. The Times quoted InVisage’s Jess Lee as saying, “we expect to start production 18 months from now”—with the first shipping sensors designed for phone-cam use.

InVisage Prototype

Sensitivity: Good. Hype: Bad

I have no way of knowing if InVisage’s claims will pan out in real-world products. But it’s interesting that a few people who work in the industry have skeptical things to say.

I do find it exaggerated to claim that the new technology is “four times” better than conventional sensors (95% versus 25% efficiency). Backside-illuminated sensors are shipping today which have much higher sensitivity; and refinements to microlenses and fill-factors are continuing.

However one true advantage of the quantum-dot film is that incoming photons slam to a stop within a very shallow layer (just half a micron thick). This is in contrast to conventional photodiodes, where longer-wavelength (redder) photons might need to travel through 6-8 microns of silicon before generating an electron.

That difference might enable sensors without microlenses to absorb light efficiently even from very oblique angles. It would permit lens designs with shorter back focus (as with rangefinder cameras); and thus we could get more compact cameras overall.

Kodak’s full-frame KAF-18500 CCD, used in the Leica M9, could only achieve the same trick by using special offset microlenses. (And if we are to believe this week’s DxO Mark report, that sensor may have compromised image quality in other ways.)

But I’m still slapping my head at the most ridiculous part of this whole story:

To give an example of what the “quantum film” technology would enable, Mr. Lee noted that we could have an iPhone camera with a 12-megapixel sensor.

Can I scream now? Is cramming more megapixels into a phone-cam really the highest goal of our civilization? And WHY? But even if we wanted it, the über-iPhone is nonsense for other reasons.

As near as I can tell, an iPhone sensor is about 2.7 x 3.6 mm (a tiny camera module needs an extremely tiny chip). Space is so tight that a larger module would be unacceptable. So, to squish 12 Mp into the current sensor size, each pixel would need to be 0.9 microns wide!

An iPhone’s lens works at f/2.8. At that aperture, the size of the Airy disk (for green light) is 3.8 microns. This is the theoretical limit to sharpness, even if the lens is absolutely flawless and free from aberrations. At the micron scale, any image will be a mushy, diffraction-limited mess.
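If you want to check that arithmetic, here’s the back-of-envelope version. The sensor dimensions are my estimate from above, and 550 nm stands in for green light:

```python
import math

# Rough diffraction check for the hypothetical 12 Mp iPhone sensor.
# Airy-disk diameter (to the first minimum): d = 2.44 * wavelength * f-number.

wavelength_um = 0.55          # green light, in microns
f_number = 2.8
sensor_w_mm, sensor_h_mm = 3.6, 2.7   # estimated active area
pixels = 12e6

airy_um = 2.44 * wavelength_um * f_number                        # ~3.8 microns
pitch_um = math.sqrt(sensor_w_mm * sensor_h_mm / pixels) * 1000  # ~0.9 microns

print(f"Airy disk ~ {airy_um:.1f} um vs. pixel pitch ~ {pitch_um:.2f} um")
```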

Also, remember that noise is inherent in the randomness of photon arrival. No matter what technology is used, teensy sensors will always struggle against noise. (The current “solution” is processing the image into painterly color smudges.)

And the dynamic range of a sensor is directly related to pixel size. Micron-scale pixels would certainly give blank, blown-out highlights at the drop of a hat.

But let’s be optimistic that, eventually, this technology will migrate to more sensible pixel sizes. Even if the sensitivity increase only turns out to be one f/stop or so, it would still be welcome. A boost like that could give a µ4/3-sized sensor very usable ISO-1600 performance.

But before we proclaim the revolution has arrived, we’ll need to wait for a few more answers about InVisage’s technology:

  • Is the quantum-dot film lightfast, or does it deteriorate over time?
  • Is there crosstalk/bleeding between adjacent pixels?
  • Is the sensitivity of the quantum-dot film flat across the visible spectrum, or significantly “peaked”? (This would affect color reproduction)
  • How reflective is the surface of the film? (Antireflection coatings could be used—but they affect sharpness and angular response)

So, stay tuned for more answers, maybe in the summer of 2011…

Making Peace With APS-C

March 19, 2010

It’s shocking to realize that the first digital SLRs costing under $1000 arrived only six years ago.

The original Canon Digital Rebel (300D) and Nikon D70 were the first to smash that psychologically-important price barrier. These 6-megapixel pioneers effectively launched the amateur DSLR market we know today.

Sub-$1000 DSLRs

New for 2004: DSLRs for Regular People

But one argument that raged at the time—and which has never completely gone away—was about the “crop format” APS-C sensor size. (The name derives from a doomed film system; but now just means a sensor of about 23 x 15 mm.)

Was APS-C just a temporary, transitional phase? Would DSLRs eventually “grow up,” and return to the 24 x 36 mm dimensions of 135 format?

This was a significant question, at a time when virtually all available lenses were holdovers from manufacturers’ 35mm film-camera systems.

The smaller APS-C sensor meant that every lens got a (frequently unwanted) increase in its effective focal length. Furthermore, an APS-C sensor’s diagonal spans only about 65% of the image circle a full-frame lens projects. Why pay extra for larger, heavier, costlier—and possibly dimmer—lenses than you need?

Also, after switching lens focal lengths to get an equivalent angle of view, the APS-C camera will give deeper depth of field at a given distance and aperture. (The difference is a bit over one f/stop’s worth.)
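For the record, here’s where “a bit over one f/stop” comes from, assuming a 1.5x crop factor:

```python
import math

# Matching both angle of view and depth of field across formats means scaling
# the f-number by the crop factor; expressed in stops, that's 2 * log2(crop).
crop = 1.5  # APS-C vs. 24 x 36 mm (approximately)
print(f"DoF-equivalence difference: {2 * math.log2(crop):.2f} stops")  # ~1.17
```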

But a more basic question was simply: Do APS-C sensors compromise image quality?

Well, sensor technology (and camera design) have gone through a few generations since 2004.

Microlenses and “fill factors” improved, so that even higher-pixel-count sensors improved sensitivity. (For one illustration, notice how Kodak practically doubled the quantum efficiency of their medium-format sensors between a 2007 sensor and the current one selected for the Pentax 645D.)

Today’s class-leading APS-C models can shoot with decent quality even at ISO 1600. And the typical APS-C 12 megapixel resolution is quite sufficient for any reasonable print size; exceeding what most 35mm film shooters ever achieved.

So it’s clear that APS-C has indeed reached the point of sufficiency for the great majority of amateur photographers.

Still, Canon and Nikon did eventually introduce DSLRs with “full frame” 24 x 36 mm sensors. They were followed by Sony, then Leica. In part, this reflects the needs of professional shooters, whose investments in full-frame lenses can be enormous.

And for a few rare users, files of over 20 megapixels are sometimes needed. In those cases, a full-frame sensor maintains the large pixel size essential for good performance in noise and dynamic range.

But these are not inexpensive cameras.

So the question has never completely gone away: Is APS-C “the answer” for consumer DSLRs? Or will full-frame sensors eventually trickle down to the affordable end of the market?

Canon Full-Frame Sensor

I have mentioned before an interesting Canon PDF Whitepaper which discusses the economics of manufacturing full-frame sensors (start reading at pg. 11).

It’s not simply the 2.5x larger area of the sensor to consider; it’s that unavoidable chip flaws drastically reduce yields as areas get larger. Canon’s discussion concludes,

[…] a full-frame sensor costs not three or four times, but ten, twenty or more times as much as an APS-C sensor.
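The arithmetic behind a multiplier like that is easy to sketch with the textbook Poisson yield model—purely an illustration with made-up numbers, not Canon’s or Chipworks’ actual model or defect rates:

```python
import math

# Toy illustration of why sensor cost grows much faster than sensor area.
# Uses the simple Poisson yield model Y = exp(-D * A); the defect density D
# and die sizes are assumptions for the sketch, not Canon or Chipworks figures.

D = 0.4                        # defects per cm^2 (assumed)
aps_c_cm2 = 2.35 * 1.56        # ~3.7 cm^2
full_frame_cm2 = 2.4 * 3.6     # ~8.6 cm^2

def relative_cost(area_cm2):
    """Cost per *good* die, in arbitrary units: silicon area / yield fraction."""
    return area_cm2 / math.exp(-D * area_cm2)

area_ratio = full_frame_cm2 / aps_c_cm2
cost_ratio = relative_cost(full_frame_cm2) / relative_cost(aps_c_cm2)
print(f"Area ratio: {area_ratio:.1f}x  Cost-per-good-die ratio: {cost_ratio:.0f}x")
```

With that (assumed) defect density, a die with about 2.4 times the area costs roughly 17 times as much per good part—right in the range Canon describes.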

However, since that 2006 document was written, I have been curious whether the situation has changed.

More recently I found a blog post from 2008 which sheds more light on the subject. (Chipworks is an interesting company, whose business is tearing down and minutely analyzing semiconductor products.)

Note that one cost challenge for full-frame sensors is the rarity of chip fabs who can produce masks of the needed size in one piece. “Stitching” together smaller masks adds to the complexity and cost of producing full-frame sensors—Chipworks was doubtful that the yield of usable full-frame parts was even 50%.

Thus, Chipworks’ best estimate was that a single full-frame sensor costs a camera manufacturer $300 to $400! (This compares to $70-80 for an APS-C sensor.)

And that’s the wholesale cost. What full-frame adds to the price the consumer pays must be higher still.

Thus, it seems unlikely the price of a full-frame DSLR will ever drop under $1000 (that magic number again)—at least, not anytime soon.

And actually, APS-C is pretty darned decent.

So let’s embrace APS-C. That means it’s time to reject overweight, overpriced lenses that were designed for the older, larger format. We need to demand sensible, native lens options.

It would have been nice if APS-C had somehow acquired a snappier, less geeky name—maybe that’s still possible?

But it seems time to declare: It’s the standard.

Follow the Sensors…

March 15, 2010

After the release of the Pentax 645D, Kodak’s “PluggedIn” blog confirmed what many had suspected: Their KAF-40000 sensor was the one Pentax used to create the new camera.

Inside Kodak's Sensor Fab

Kodak has something of a tradition of announcing when their sensors are being used in high-profile cameras, such as Leica’s M8, M9, and S2—even giving the Kodak catalog numbers.

But their openness on this subject is a bit unusual.

As you may know, creating a modern semiconductor chip fab is staggeringly expensive—up to several billion dollars. So it’s understandable that behind the scenes, sensor chips are mainly manufactured by a few electronics giants.

And selling those chips has become a cut-throat, commodity business; so camera makers sometimes obtain sensors from surprising sources.

But it’s hard to trumpet your own brand superiority, while admitting your camera’s vital guts were built by some competitor. So many of these relationships are not public knowledge.

But if we pay careful attention… We might be able to make some interesting deductions!

Panasonic Image Sensors

Some of the big names in CMOS image sensors (“CIS” in industry jargon) are familiar brands like Sony, Samsung, and Canon. But cell-phone modules and other utilitarian applications lead the overall sales numbers; and in this market, the leader is a company called Aptina.

No, I didn’t know the name either. But that’s not surprising, since they were only recently spun off from Micron Technology.

Yes, that’s the same Micron who makes computer memory. As it turns out, many of the fab techniques used to produce DRAM apply directly to CMOS sensor manufacture.

Another of the world’s powerhouses in semiconductor manufacturing is Samsung. And it’s widely known that Samsung made the imaging chips used in many Pentax DSLRs. (It would have been hard to keep their relationship a secret: Samsung’s own DSLRs were identical twins of Pentax’s.)

Samsung currently builds a 14.6 megapixel APS-C chip, called the S5K1N1F. Not only was this used in Pentax’s K-7, but also in Samsung’s GX-20. And it’s assumed that Samsung’s new mirrorless NX10 uses essentially the same sensor.

Panasonic’s semiconductor division does offer some smaller CCD sensors for sale, up to 12 megapixels. But with MOS sensors, it is only their partner Olympus who gets to share the Four-Thirds format, 12 Mp “Live MOS” chip used in the Lumix G series.

Meanwhile, it remains mystifying to me that the apparently significant refinements in the GH1 sensor don’t seem to have benefited any other Four Thirds camera yet. (Why?)

As I discussed last week, Sony’s APS-C chips apparently make their way into several Nikon models, as well as the Pentax K-x, the Ricoh GXR A12 modules, and probably the Leica X1.

But Sony has also brought out a new chip for “serious” compact cameras—intended to offer better low-light sensitivity despite its 2 micron pixels. It’s the 1/1.7 format, 10 Mp model ICX685CQZ. You can download a PDF with its specs here.

On the second page of that PDF, note the “number of recommended recording pixels” is 3648 × 2736.

Isn’t it rather remarkable that the top resolution of the Canon G11, and also their S90, matches this to the exact pixel? And how about the Ricoh GRD III, too?

Crypto-Sony?

And even Samsung’s newly-announced TL500—it’s the same 3648 x 2736!

None of these cameras provide 720p video—an omission that many reviewers have gnashed their teeth about. However, you’ll note in the Sony specs that 720p at 30 frames/sec is not supported by that sensor.

Sony has also made a splash with a smaller 10 Mp sensor, using backside-illumination to compensate for the reduced pixel area. This is the IMX050CQK (with a specs PDF here).

In Table 3 of that PDF, again notice the 3648 x 2736 “recommended” number of pixels. And sure enough, this matches the first Sony cameras released using the chip, the WX1 and TX1.

But if you start poking around, the Ricoh CX3 is another recent camera boasting a backside-illuminated, 10 Mp sensor, which has the same top resolution.

So are the FujiFilm HS10 & HS11. And the Nikon P100.

Now, it seems startling that even Canon and Samsung (who surely can manufacture their own chips) might go to Sony as an outside supplier.

But when you compare CMOS sensors to older CCD technology, their economics are slightly different. CMOS requires much more investment up front, for designing more complex on-chip circuitry, and creating all the layer masks. After that though, the price to produce each part is lower.

After creating a good CMOS chip, there is a strong incentive to sell the heck out of it, to ramp up volumes. Even to a competitor. So we may see more of this backroom horse-trading in sensor chips as time goes on.

In fact, Kodak’s point & shoots introduced in the summer of 2009 actually didn’t use Kodak sensor chips.

But that’s something they didn’t blog about.

Cycles Per Degree?

February 26, 2010

In reading about visual acuity recently, I noticed that different sources state different figures as the limit of detail that our eyes can resolve.

We’re talking here about the finest line spacings we can see in the center of our vision—the retina’s “fovea”—where our cone cells are most tightly packed. Focusing on that spot is reasonable, since our eyes are constantly scanning across anything we see, picking out significant details to build up a complete impression.

The acuity unit often mentioned is “cycles per degree.” Each “cycle” is the same as a “line pair”—namely, one black/white pair taken together. You may find human acuity limits quoted as anywhere from 40 to 50 cycles per degree. (But 20/20 vision only corresponds to 30 cycles per degree.)

One reason for this uncertainty is simply that the human contrast sensitivity function means that finer spacings are perceived with much lower clarity. So the limit is not black and white; rather it is literally “in a gray area.”

But if you’re curious, it’s pretty simple to test yourself.

Cycles Per Degree Test

These Precision Optical Instruments Are Used

All you really need is a nice long tape measure (50 feet is probably enough).

Draw two heavy lines with a Sharpie, with a gap between them the same width as the lines. Tack the paper to a wall, and hook your tape measure onto a convenient nearby door frame, etc.

Then start walking backwards. At some point you’ll find it becomes very difficult—then impossible—to see the white gap between the lines. Write down your tape-measure distance from the target.

Next, you need to measure the width of your “cycle” (one black line and the white gap).  Convert this width into the same units as your tape-measure distance (feet, inches, meters, etc.).

In my case, I’d measured 36 feet on the tape measure; and my “cycle” was 0.0139 feet wide.

Divide the cycle width by the tape-measure distance, and you’ll get some tiny number. Now, to convert this to degrees, you need to take the “inverse tangent” (if you’ve lost your scientific calculator, try the bottom section of this online one).

That gives you the degrees per cycle. To get cycles per degree, divide one by that number.

I didn’t estimate any numbers beforehand; so I was pleasantly surprised that my measured distance translated perfectly into 45 cycles per degree. That was good news about my eyesight (and my eyeglass prescription), and it also neatly split the range of acuity numbers I’d seen quoted.
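If you’d rather let Python do the trigonometry, here’s the whole calculation using my measurements (substitute your own distance and cycle width):

```python
import math

# Walk-back acuity test: the distance at which the white gap disappears,
# and the width of one "cycle" (black line + white gap), in the same units.
distance_ft = 36.0
cycle_ft = 0.0139

degrees_per_cycle = math.degrees(math.atan(cycle_ft / distance_ft))
cycles_per_degree = 1 / degrees_per_cycle
print(f"{cycles_per_degree:.0f} cycles per degree")  # ~45
```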

But note that I did this test outdoors, on an overcast day. Moderately bright illumination like this gives the maximum visual acuity.

So I did a retest indoors, at an illumination level that might be more relevant to typical print-viewing conditions. Here, the lighting was 7 stops dimmer (1/128th as bright).

Perhaps not surprisingly, it became harder to judge the cut-off point where the white gap disappeared. But it definitely happened at a closer distance—roughly 28 feet.

Crunching those numbers, my indoor acuity dropped to 35 cycles per degree. This illumination level is probably more representative of the conditions used in eye-doctor tests; so being in the ballpark of 30 cycles per degree (20/20 vision) seems pretty plausible.

Now remember from my earlier discussion that detectable detail doesn’t equate very well with subjectively significant detail.

But, sitting typing this, I have unconsciously adjusted my distance from my computer screen so that its height occupies about 32° of my field of vision.

If you think that’s a reasonable distance to view a photo print from, you can do a little math. Even allowing for some resolution lost to Bayer demosaicing, a digital camera of 12.5 megapixels would capture all the detail my acuity could discern.
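For the curious, here’s one way to arrive at a figure in that ballpark—assuming a 3:2 frame, my outdoor 45 cycles per degree, and the Nyquist minimum of two pixels per cycle:

```python
# Pixel budget for an image whose height fills ~32 degrees of my view.
view_deg = 32       # vertical angle the screen/print occupies
acuity_cpd = 45     # cycles per degree (my outdoor measurement)
aspect = 3 / 2      # 3:2 frame

height_px = view_deg * acuity_cpd * 2   # 2 pixels per cycle (Nyquist) -> 2880
width_px = height_px * aspect           # 4320
print(f"{height_px} x {width_px:.0f} pixels = {height_px * width_px / 1e6:.1f} Mp")
```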

What is the Frequency?

February 23, 2010

Last week’s post, comparing excessive bits in music recording with megapixel overkill in cameras, drew more comments than usual.

So today, I will recklessly continue my strained analogy between audio and optics. I hope you’ll bear with me.

Anyone who has researched stereo gear or sound-recording equipment will have come across graphs of frequency response, like this one:

Shure SM-58 Microphone Response Curve

Frequency Response Curve for a Microphone

The horizontal axis shows the pitch of the sound, increasing from low notes on the left to high ones on the right. The wavy curve shows how the output signal rises and falls with these different sound frequencies.

In a mic like this, air pressure needs to physically shove a diaphragm and a coil of wire back and forth. When you get to the right end of the graph, with air vibrating 20,000 times per second, mechanical motion has trouble keeping up; the response curve nosedives.

Now, other brands of microphones exist where the frequency response is almost ruler-flat across the spectrum. So is this just a poor one?

In fact, you have to look at this response curve in the context of human hearing ability. As it happens, our ears don’t have a very flat frequency response either:

Equal Loudness Curves

Our Ears Have a Sensitivity Peak

Note that the orientation of this graph is “upside down” compared to the first one. These curves show that our hearing is most sensitive to pitches around 3 or 4 kHz; tones of other frequencies must be cranked up higher to have subjectively equal loudness.

(Fletcher & Munson from Bell Labs published the first comprehensive measurements of this effect in 1933. So you’ll still hear mention of “Fletcher-Munson curves.” But the measurements have been redone several times since. Click the graph for a PDF of a recent, and hopefully definitive study.)

The upshot is, the rolled-off ends of this microphone’s frequency response may not be that obvious to our ears. And actually, the SM58 has become the standard stage mic for vocalists. Its ability to tolerate nightly abuse in a bar room or on a music tour offers a real-world advantage outweighing any lack of extended treble.

If you’ve ever seen live music, you’ve undoubtedly heard vocals through an SM58. Did the roll-off above 10 kHz change how you felt about the singer’s performance? I doubt it.

Shure SM58 Side

You Recognize This

Last week I made a post about lens sharpness, referring to MTF graphs. While the little test target I posted on this blog has solid black & white bars (which was easiest for me to create), formal MTF testing presents a lens with stripes varying in brightness according to a sine function—just like audio test tones:

MTF Sine Target

Increasing Spatial Frequencies

The brightness difference between the white and black stripes at a very low frequency defines “100% contrast.” Then, as you photograph more and more closely-spaced lines, you measure how the contrast ratio drops off, due to diffraction and aberrations.

Ultimately, at some frequency, the contrast percentage plummets into the single digits. At that point details become effectively undetectable.
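In code, the measurement itself is nothing exotic—just Michelson contrast compared between target and photograph, one spatial frequency at a time. (A schematic sketch, not any lab’s actual procedure.)

```python
import numpy as np

def michelson_contrast(slice_1d):
    """(Imax - Imin) / (Imax + Imin) across one slice of the stripes."""
    return (slice_1d.max() - slice_1d.min()) / (slice_1d.max() + slice_1d.min())

def mtf(target_slice, image_slice):
    """Measured contrast as a fraction of the original target contrast."""
    return michelson_contrast(image_slice) / michelson_contrast(target_slice)

# Toy example: a full-contrast sine target, and the lens's blurred rendering of it.
x = np.linspace(0, 20 * np.pi, 1000)
target = 0.5 + 0.5 * np.sin(x)   # 100% modulation
image = 0.5 + 0.2 * np.sin(x)    # what actually reached the sensor
print(f"MTF at this frequency: {mtf(target, image):.0%}")  # 40%
```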

In fact you can draw a “spatial frequency response” curve for a lens—very much like our audio frequency response graph for the microphone:

Lens Spatial Frequency Response Curves

(This chart comes from a deeply techie PDF from Carl Zeiss, authored by Dr. Hubert Nasse. It discusses understanding and interpreting MTF in great detail, if you’re curious.)

Unlike the earlier MTF graph I showed, here the horizontal axis doesn’t show distance from the center of the frame. Instead, it graphs increasing spatial frequencies (that is, stripes in the target getting more closely spaced) at a single location in the image.

The lower dotted curve shows how lens aberrations at f/2 cause contrast to drop off rapidly for fine details. The heavy solid line shows how contrast at f/16 would be limited even for a flawless lens, simply due to diffraction.

The lens performance at f/5.6 is much better. It approaches, but does not quite reach, the diffraction limit for that aperture. Results like this are representative of many real lenses.

Now, our natural assumption is that greater lens sharpness is always useful. But as I mentioned in my earlier post, our eyes have flawed lenses too (squishy, organic ones), limited by diffraction and aberrations. They have their own MTF curve, which also falls off as details become more closely spaced.

Might we say that our visual acuity has its own “Fletcher-Munson curves”?

To answer that I’m going to ask you to open a new tab, for this link at Imatest (a company who writes optical-testing software). This is their discussion of the human “Contrast Sensitivity Function.”

The top graph shows the approximate spatial frequency response of human vision. It shows that, subjectively, we are most sensitive to details at a spacing of about 6–8 cycles per degree of our visual field.

Even more startling is the gray test image below. Notice how the darker stripes seem to fill a “U” shape?

In fact, at any given vertical height across that figure, every stripe has the same contrast. You are seeing your own spatial sensitivity function at work. It’s the middle spacings that we perceive most clearly.

You can have a bit of fun moving your head closer and further from your computer screen (changing the cycles/degree in your vision). Notice how your greatest sensitivity to the contrast moves from the left to the center of the image? (Ignore the right edge, where the target starts to alias against our computer-screen pixel spacing.)

It might surprise you that there is a low-frequency roll-off to our vision’s contrast detection. After all, imagine filling your field of vision with ONE giant white stripe, and one black one—surely you’d notice that?

But, MTF measurements don’t use stripes with hard edges. Instead MTF uses smoothly-varying sinusoidal brightness changes, as I showed above.

Our eyes are great at finding edges. But smooth gradients over a large fraction of your visual field are much less obvious. Our retinas locally adjust their sensitivity—which is why we get afterimages from camera flashes or from staring at one thing for too long—and tend to subtract out those large-scale patterns.

A Contrast Sensitivity Function

Matching a Sensitivity Curve to Test-Subject Data (Red Dots)

Defining the whole human contrast sensitivity function precisely can get extremely complex. However, we can make a few points:

The ultimate limit for human acuity might be at 45 or 50 cycles per degree (sources differ). But that spacing is right at the edge of perceivability.

It’s only under bright light that we achieve our greatest acuity; but we generally view photographs under dimmer indoor illumination. There, even spacings of 20 cycles per degree might show seriously degraded contrast.

In a second PDF, Dr. Nasse discusses a “Subjective Quality Factor” for lenses (see page 8). This is an empirical figure of merit, which integrates lens MTF over the spatial frequencies that our eyes find most significant. The convention is to use 3 to 12 cycles per degree—near our vision’s peak sensitivity.

My prior post on MTF mentioned that 10 and 30 lp/mm (on a full 24 x 36mm frame) were lens-test spacings chosen for their relevance to human vision. Actually, that was a slight oversimplification: we didn’t specify how the photographs would be viewed.

In fact, those criteria correspond to “closely examining” an image. If you put your eye very near to a print (a distance half the image diagonal) the photo practically fills your view. It’s about the limit before viewing becomes uncomfortable. In those conditions, the 10 and 30 lp/mm of lens tests translate to 4 and 12 cycles per degree of your vision.
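Here’s that conversion spelled out, assuming a 24 x 36 mm frame viewed (or its enlargement examined) from half the image diagonal:

```python
import math

# Converting lens-test spatial frequencies (lp/mm on a 24 x 36 mm frame) into
# cycles per degree of vision, for "close examination": viewing from half the
# image diagonal. The same ratio holds for any print size, since the viewing
# distance scales with the enlargement.

frame_h_mm = 24.0
diag_mm = math.hypot(24, 36)        # ~43.3 mm
view_dist_mm = diag_mm / 2          # ~21.6 mm (or the scaled-up print equivalent)

# Angle subtended by the frame height at that distance:
height_deg = 2 * math.degrees(math.atan((frame_h_mm / 2) / view_dist_mm))  # ~58 deg

for lp_per_mm in (10, 30):
    cycles_across_height = lp_per_mm * frame_h_mm
    print(f"{lp_per_mm} lp/mm -> ~{cycles_across_height / height_deg:.0f} cycles/degree")
```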

The conditions of “regular” viewing (at a distance maybe twice the image diagonal) relax the detail requirements significantly.

So how much detail do our photos really, truly need?

Ultimately, that’s a personal question, which photographers must answer for themselves. It depends how large you print, and how obsessively you examine.

But the eye’s own limited “frequency response” suggests we may not always need to worry about it.

Something About Sharpness

February 19, 2010

Once upon a time, the way photo-geeks evaluated lens quality was in terms of pure resolution. What was the finest line spacing which a lens could make detectable at all?

Excellent lenses are able to resolve spacings in excess of 100 lines per millimeter at the image plane.

But unfortunately, this measure didn’t correlate very well with how crisp or “snappy” a lens looked in real photographs.

The problem is that our eyes themselves are a flawed optical system. We can do tests to determine the minimum spacing between details that it’s possible for our eyes to discern. But as those details become more and more finely spaced, they become less clear, less obvious—even when theoretically detectable.

The aspects of sharpness which are subjectively most apparent actually happen at a slightly larger scale than you’d expect, given the eye’s pure resolution limit.

This is the reason why most lens testing has turned to a more relevant—but unfortunately much less intuitive—way to quantify sharpness, namely MTF at specified frequencies.

MTF Chart

Thinner Curves: Finer Details

An MTF graph for a typical lens shows contrast on the vertical axis, and distance from the center of the frame on the horizontal one. The black curves represent the lens with its aperture wide open. Color means the lens has been stopped down to minimize aberrations, usually to f/8 or so. (I’ll leave it to Luminous Landscape to explain the dashed/solid line distinction.)

For the moment, all I want to point out is that there’s a thicker set of curves and a thinner set.

The thinner curves show the amount of contrast the lens retains at very fine subject line spacings. The thicker ones represent the contrast at a somewhat coarser line spacing. (That’s mnemonically helpful, at least.)

The thick curves correspond well to our subjective sense of the “snap,” or overall contrast that a lens gives. Good lenses can retain most of the original subject contrast right across the frame. Here, this lens is managing almost 80% contrast over a large fraction of the field, even wide open. Very respectable.

The thin curves correspond to a much finer scale—i.e. in your photo subject, can you read tiny lettering, or detect subtle textures?

You can see that preserving contrast at this scale becomes more challenging for an optical design. Wide open, this lens is giving only 50 or 60% of the original subject contrast. After stopping down (thin blue curves), the contrast improves significantly.

When lenses are designed for the full 35mm frame (as this one was), it’s typical to use a spacing of 30 line pairs per millimeter to draw this “detail” MTF curve.

And having the industry choose this convention wasn’t entirely arbitrary. It’s the scale of fine resolution that seems most visually significant to our eyes.

So if that’s true… let’s consider this number, 30 lp/mm, and see where it takes us.

A full-frame sensor (or 35mm film frame) is 24mm high. So, a 30 lp/mm level of detail corresponds to 720 line pairs over the entire frame height.

The number “720” might jog some HDTV associations here. Remember the dispute about whether people can see a difference between 720 and 1080 TV resolutions, when they’re at a sensible viewing distance? (“Jude’s Law,” that we’re comfortable viewing from a distance twice the image diagonal, might be a plausible assumption for photographic prints as well.)

But keep in mind that 30 line pairs/mm (or cycles/mm in some references) means a black stripe and a white stripe per pair. So if a digital camera sensor is going to resolve those 720 line pairs, it must have a minimum of 1440 pixels in height (at the Nyquist limit).

In practice, Bayer demosaicing degrades the sensor resolution a bit from the Nyquist limit. (You can see that clearly in my test-chart post, where “40” is at Nyquist.)

So we would probably need an extra 1/3 more pixels to get clean resolution: 1920 pixels high, then.

In a 3:2 format, a sensor 1920 pixels high would be 2880 pixels wide. Do you see where this is going?

Multiply those two numbers and you get roughly 5.5 megapixels.
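Here’s the whole chain of arithmetic in one place, for anyone who wants to fiddle with the assumptions:

```python
# The 30 lp/mm bookkeeping from the paragraphs above.
lp_per_mm = 30
frame_height_mm = 24
aspect = 3 / 2

line_pairs = lp_per_mm * frame_height_mm   # 720 line pairs over the frame height
nyquist_px = 2 * line_pairs                # 1440 pixels: the bare Nyquist minimum
bayer_px = round(nyquist_px * 4 / 3)       # ~1920 pixels, with ~1/3 extra for demosaicing
width_px = bayer_px * aspect               # 2880 pixels across a 3:2 frame

print(f"{bayer_px} x {width_px:.0f} = {bayer_px * width_px / 1e6:.1f} Mp")  # ~5.5 Mp
```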

Now, please understand: I am not saying there is NO useful or perceivable detail beyond this scale. I am saying that 5 or 6 Mp captures a substantial fraction of the visually relevant detail.

There are certainly subjects, and styles of photography, where finer detail than this is essential to convey the artistic intention. Anselesque landscapes are one obvious example. You might actually press your nose against an exhibition-sized print in that case.

But if you want to make a substantial resolution improvement—for example, capturing what a lens can resolve at the 60 lp/mm level—remember that you must quadruple, not double the pixel count.

And that tends to cost a bit of money.
