Something About Sharpness

February 19, 2010

Once upon a time, the way photo-geeks evaluated lens quality was in terms of pure resolution. What was the finest line spacing which a lens could make detectable at all?

Excellent lenses are able to resolve spacings in excess of 100 lines per millimeter at the image plane.

But unfortunately, this measure didn’t correlate very well with how crisp or “snappy” a lens looked to us in real photos.

The problem is that our eyes themselves are a flawed optical system. We can do tests to determine the minimum spacing between details that it’s possible for our eyes to discern. But as those details become more and more finely spaced, they become less clear, less obvious—even when theoretically detectable.

The aspects of sharpness which are subjectively most apparent actually happen at a slightly larger scale than you’d expect, given the eye’s pure resolution limit.

This is the reason why most lens testing has turned to a more relevant—but unfortunately much less intuitive—way to quantify sharpness, namely MTF at specified frequencies.

MTF Chart

Thinner Curves: Finer Details

An MTF graph for a typical lens shows contrast on the vertical axis, and distance from the center of the frame on the horizontal one. The black curves represent the lens with its aperture wide open. Color means the lens has been stopped down to minimize aberrations, usually to f/8 or so. (I’ll leave it to Luminous Landscape to explain the dashed/solid line distinction.)

For the moment, all I want to point out is that there’s a thicker set of curves and a thinner set.

The thinner curves show the amount of contrast the lens retains at very fine subject line spacings. The thicker ones represent the contrast at a somewhat coarser line spacing. (That’s mnemonically helpful, at least.)

The thick curves correspond well to our subjective sense of the “snap,” or overall contrast that a lens gives. Good lenses can retain most of the original subject contrast right across the frame. Here, this lens is managing almost 80% contrast over a large fraction of the field, even wide open. Very respectable.

The thin curves correspond to a much finer scale—i.e. in your photo subject, can you read tiny lettering, or detect subtle textures?

You can see that preserving contrast at this scale becomes more challenging for an optical design. Wide open, this lens is giving only 50 or 60% of the original subject contrast. After stopping down (thin blue curves), the contrast improves significantly.
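To make the percentages above concrete: contrast (or “modulation”) of a line pattern is conventionally measured as (Imax − Imin)/(Imax + Imin), and MTF is the fraction of the subject’s modulation that survives in the image. A minimal sketch (the intensity values are made up for illustration):

```python
def modulation(i_max, i_min):
    """Michelson contrast of a line pattern: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

def mtf(subject_mod, image_mod):
    """Fraction of the subject's contrast that the lens preserves."""
    return image_mod / subject_mod

# A pure black/white test target has modulation 1.0. If the lens renders
# a fine pattern with intensities 0.8 and 0.2, it has retained 60% contrast:
print(mtf(1.0, modulation(0.8, 0.2)))  # 0.6
```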

When lenses are designed for the full 35mm frame (as this one was) it’s typical to use a spacing of 30 line-pairs per millimeter to draw this “detail” MTF curve.

The industry’s choice of this convention wasn’t entirely arbitrary. It’s the scale of fine resolution that seems most visually significant to our eyes.

So if that’s true… let’s consider this number, 30 lp/mm, and see where it takes us.

A full-frame sensor (or 35mm film frame) is 24mm high. So, a 30 lp/mm level of detail corresponds to 720 line pairs over the entire frame height.

The number “720” might jog some HDTV associations here. Remember the dispute about whether people can see a difference between 720 and 1080 TV resolutions, when they’re at a sensible viewing distance? (“Jude’s Law,” that we’re comfortable viewing from a distance twice the image diagonal, might be a plausible assumption for photographic prints as well.)

But keep in mind that 30 line pairs/mm (or cycles/mm in some references) means that each pair consists of one black stripe and one white stripe. So if a digital camera sensor is going to resolve those 720 line pairs, it must have a minimum of 1440 pixels in height (at the Nyquist limit).

In practice, Bayer demosaicing degrades the sensor resolution a bit from the Nyquist limit. (You can see that clearly in my test-chart post, where “40” is at Nyquist.)

So we would probably need about 1/3 more pixels to get clean resolution: 1920 pixels high, then.

In a 3:2 format, 1920 pixels high makes the sensor 2880 pixels wide. Do you see where this is going?

Multiply those two numbers and you get roughly 5.5 megapixels.
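The whole chain of arithmetic above can be sketched in a few lines:

```python
LP_PER_MM = 30          # "detail" MTF frequency used for full-frame lenses
FRAME_HEIGHT_MM = 24    # height of a full-frame / 35mm frame

line_pairs = LP_PER_MM * FRAME_HEIGHT_MM      # 720 line pairs over the frame height
nyquist_px = 2 * line_pairs                   # 1440 px: two pixels per line pair
height_px = round(nyquist_px * 4 / 3)         # ~1/3 headroom for Bayer demosaicing: 1920
width_px = round(height_px * 3 / 2)           # 3:2 aspect ratio: 2880
megapixels = height_px * width_px / 1e6

print(height_px, width_px, round(megapixels, 1))  # 1920 2880 5.5
```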

Now, please understand: I am not saying there is NO useful or perceivable detail beyond this scale. I am saying that 5 or 6 Mp captures a substantial fraction of the visually relevant detail.

There are certainly subjects, and styles of photography, where finer detail than this is essential to convey the artistic intention. Anselesque landscapes are one obvious example. You might actually press your nose against an exhibition-sized print in that case.

But if you want to make a substantial resolution improvement—for example, capturing what a lens can resolve at the 60 lp/mm level—remember that you must quadruple, not double, the pixel count.
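That quadrupling follows directly from the geometry: doubling the linear resolution doubles the pixel count in each dimension, so the total multiplies by four.

```python
base_mp = 1920 * 2880 / 1e6              # ~5.5 MP at 30 lp/mm
doubled_mp = (2 * 1920) * (2 * 2880) / 1e6  # 60 lp/mm doubles each dimension

print(round(doubled_mp / base_mp))  # 4 -- four times the pixels, not two
print(round(doubled_mp, 1))         # 22.1 MP
```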

And that tends to cost a bit of money.


4 Responses to “Something About Sharpness”

  1. rj74210 Says:

    Very compelling arguments, as usual. But one thing is not clear to me: you speak of b/w line pairs, fine, but how does this relate to pixels ? It seems to me that you need four pixels (sensor thingies) two green, one red, one blue, to resolve the width or height of one line (be it white or black). Doesn’t this mean that we in fact need a lot more rgb sensor thingies to achieve equivalent line/pair resolving power ? (the Foveon sensor excepted, of course, since it stacks all three color sensors vertically in one space)

    • petavoxel Says:

      If you scroll down a bit at this link, you’ll see why after Bayer demosaicing you do end up with approximately a 1:1 relationship between sensor photosites and RGB pixels in the photo image.

      The luminance information is more or less real. The color locations do have to be interpolated, and thus they have lower spatial resolution.

      But it has been found in a lot of contexts (from television to astrophotography) that our eyes aren’t very sensitive to this lower color resolution. Actually, lossy compression that takes advantage of this is the cornerstone of all our current video delivery formats.

  2. petavoxel Says:

    A footnote: After finishing this post, I came across two startlingly dense discussions of MTF, from Zeiss optician Hubert Nasse, here and here.

    It will be a while before I can digest them fully, beyond just saying “MTF gets real complicated.” But if you’d be interested in a more technical discussion, please take a look.

    His articles did point out to me that a camera adds its own MTF to that of the lens (due to the sensor’s antialias filter, or any sharpening applied in-camera).

    When you approach Nyquist, the AA filter’s degradation of contrast must be factored into the combined system MTF. So ultimately, he concludes that on a full-frame sensor, 24 Mp can improve detail (due to the weaker AA filter needed) even when the lens is challenged to resolve at that scale.
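As a rough illustration of that point: for independent components in the imaging chain (lens, AA filter, sensor), the MTFs at a given spatial frequency multiply, so each extra component pulls the system contrast down. The numbers here are hypothetical, just to show the effect:

```python
def system_mtf(*component_mtfs):
    """MTFs of cascaded imaging components multiply at each spatial frequency."""
    result = 1.0
    for m in component_mtfs:
        result *= m
    return result

# Hypothetical contrast values at one fine spatial frequency:
lens, aa_filter, sensor = 0.6, 0.7, 0.9
print(round(system_mtf(lens, aa_filter, sensor), 3))  # 0.378
```

A weaker AA filter (MTF closer to 1.0 at that frequency) raises the product, which is the mechanism behind Nasse’s conclusion.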

  3. […] week I made a post about lens sharpness, referring to MTF graphs. While the little test target I posted on this blog has solid black & white bars (which was […]

Comments are closed.