February 19, 2010
The stats show that my post, “The Great Megapixel Swindle,” continues to get quite a lot of traffic. That’s a little unnerving, given that it was written as a quick, off-the-cuff tantrum. If I’d known how many folks would read it, I would have said several things more precisely.
Around the internet, the “Swindle” spawned many, many discussion threads—I can’t keep track of them all. I did try to respond to many of the questions and misunderstandings that I was seeing, in a followup post here. And I’ve expanded on the same issues in many other posts as well.
Today I’ve noticed a thread over at Rangefinder Forum which raises the question, “isn’t it unfair to use a crop from the background? Naturally that looks bad, since it’s out of focus.”
First, the real point this example makes is this: Cameras with tiny pixels must use aggressive post-processing to reduce noise; and this can cause strange, unnatural-looking artifacts. (There’s more on the subject here.)
But on the question of depth of field, I should clarify a bit.
The EXIF data for this shot shows the Olympus FE-26 was set at the wide end of its zoom range—namely 6.3mm. Such a short focal length implies extreme depth of field. The f/stop used was f/4.6.
The H. Lee White is a 700-foot-long Great Lakes freighter. I’m not exactly sure how far the camera was from the subject; but it’s doubtful that it was closer than 15 feet.
Now, we can’t blindly apply standard depth-of-field tables here. The standard calculation (e.g. if you scroll down to Olympus FE-26 here) uses a circle of confusion of 0.005 mm for this sensor format.
But when you are looking at such an extreme enlargement, the assumptions behind that break down (CoC is generally referenced to viewing an 8×10 print at a moderate distance).
But consider that this camera has a sensor size of about 6 x 4.5mm, and so each individual pixel is 1.53 microns wide. In other words, 0.00153 mm.
Clearly, the meaningful circle of confusion can’t be smaller than one pixel. Given the resolution loss that happens with Bayer interpolation, 0.002 mm seems a realistic CoC.
So the out-of-focus blur is actually negligible compared to the pixel size in this case.
You can use an alternate depth of field calculator which lets you input arbitrary values if you’d like to explore this further yourself.
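If you’d rather check the arithmetic directly, here’s a quick Python sketch. The assumptions are mine: a 6.0 mm-wide sensor, a 3,968-pixel-wide image (12 Mp), and a 15-foot focus distance, per the discussion above.

```python
# Rough check of the numbers above. Assumptions (mine, not gospel):
# 6.0 mm-wide sensor, 3,968-pixel-wide image, 6.3 mm lens at f/4.6,
# focused 15 feet away.
sensor_w_mm = 6.0
image_w_px = 3968
pitch_um = sensor_w_mm / image_w_px * 1000
print(f"pixel pitch: {pitch_um:.2f} microns")        # ~1.5

f_mm, n_stop = 6.3, 4.6
s_mm = 15 * 304.8                                    # focus distance in mm

# Blur-circle diameter for a background at infinity, when the lens is
# focused at distance s (thin-lens approximation):
blur_um = f_mm**2 / (n_stop * (s_mm - f_mm)) * 1000
print(f"background blur: {blur_um:.1f} microns")     # ~1.9
```

So even wide open, the background blur works out to roughly one pixel wide—tiny compared to the 0.005 mm circle of confusion assumed by the standard tables.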
February 16, 2010
Today, some musings that are a bit more (har har) abstract.
Sometimes we become numbed by the perpetual escalation of tech specs. The mindset of the computer industry, where each new generation promises more, faster & bigger, seems to be the new normal.
And it can become a self-fulfilling prophecy. If it is technically possible to ratchet up some spec, inevitably we’ll choose to. That’s what keeps people buying!
But there’s one nice example of an industry that dangled ever-increasing numbers in front of consumers—who then yawned and said “no thanks.”
How many DVD-Audio or “Super Audio CD” titles do you own? (Okay, I’m sure someone out there is an enthusiastic adopter. But I mean, the average person.)
The original standard for music CDs uses 44,100 samples per second. Each sound sample has 16 bits; meaning it encodes the full range from soft to loud with about 65,000 discrete levels.
The CD standard was adopted around 1980; so at that time there was pressure to keep the bit rate low enough so that the player electronics would not be prohibitively expensive.
No one knew how data-handling ability would explode over the following decades. When you see a speed “60x” or “133x” on a camera memory card, those are referenced against the original CD-player (or, 1x CD-ROM) bit rate.
So in the audiophile world there were grumbles from the start that the CD bit rate was insufficient.
With only 16 bits, the quietest elements of music (like note decays and room ambience) must be recorded with somewhat coarse resolution. It’s similar to how shadow areas in digital photos can look noisier than highlights. A standard that used 20 bits or so would have preserved more fine “texture” in low-amplitude sounds.
And before digitizing sound for CD, any frequency of 22,050 cycles/second or higher must get chopped off. A high-pitched rising chirp that went past that frequency would make the A/D converter miss the true peaks and troughs of the wave; it would misrepresent it as a falling note, back down in the audible spectrum.
For CD audio, 22 kHz is the “Nyquist Limit,” and you need an “antialias filter” to block higher frequencies (yes, these exact same principles apply to digital-camera sensors). But there were complaints that the antialias filters degraded sound quality.
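If you’re curious, the fold-down effect is easy to demonstrate in a few lines of Python. (A toy sketch; the 30 kHz tone is just an arbitrary example of a frequency above the Nyquist limit.)

```python
import numpy as np

fs = 44_100                      # CD sample rate
t = np.arange(200) / fs          # 200 sample instants

tone_hi = np.cos(2 * np.pi * 30_000 * t)   # above the 22.05 kHz Nyquist limit
tone_lo = np.cos(2 * np.pi * 14_100 * t)   # its alias: 44,100 - 30,000 Hz

# At the sample instants the two tones are indistinguishable -- the
# converter cannot tell them apart, so 30 kHz "folds down" to 14.1 kHz.
print(np.allclose(tone_hi, tone_lo))   # True
```

This is exactly why the antialias filter has to block those frequencies before the converter ever sees them.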
And while most adults can’t hear steady pitches much above 16 kHz, very brief clicks at a higher theoretical frequency might contribute some “edge” to percussion and note attacks. A lot of professionals preferred to record at 48 kHz instead. (You can go higher today, though bandwidth limitations in the analog realm become significant.)
Folks who produce music may have reasons for 24-bit sampling (tracks are often put through computation-intensive effects; you don’t want rounding errors), but 20-bit delivery covers an excellent dynamic range. Even taking a 20-bit master and dithering it down to a 16-bit release version can work well.
I apologize to all of you whose eyes glazed over during those past few paragraphs. The truth is, most of us found CD sound quality perfectly adequate.
If you do the math, the bit rate used for CD sound is about 1.4 Mbit/sec (in stereo). A reasonable standard that would have handled any outstanding quality issues might have been 20 bits at 48 kHz. That works out to 1.9 Mbit/sec.
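The arithmetic, if you want to play with other combinations (a quick sketch):

```python
def stereo_rate_mbit(sample_rate_hz, bits):
    """Uncompressed stereo bit rate in Mbit/sec (two channels)."""
    return sample_rate_hz * bits * 2 / 1e6

print(stereo_rate_mbit(44_100, 16))   # CD: ~1.41
print(stereo_rate_mbit(48_000, 20))   # the hypothetical "fixed" CD: ~1.92
print(stereo_rate_mbit(192_000, 24))  # DVD-A's maximum: ~9.2
```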
But the arrival of DVD technology offered a huge increase in disc capacity. It was a great opportunity to sell a newer, zingier, whizzier-spec music format too.
So a “format war” broke out, but based on bit rates that were sort of crazy. Stereo in Sony’s SACD standard burns up 5.6 Mbit/sec—quadrupling the CD rate. DVD-A is a “family” of standards (a standard that doesn’t standardize); but its highest supported stereo rate is 9.2 Mbit/sec!
A leading writer in the digital-audio field once told me in an email that the reasons for these bit rates had more to do with “quieting the lunatic fringe” than with any technical justification.
But the public treated these new audio formats with indifference. SACD has found a niche in classical music, but most folks are completely unaware of it. (Even though, soon enough, millions did go out to buy a new disc format: Blu-ray.)
You all know what actually happened with music: Buying it online, and being able to take it everywhere in your pocket, totally changed the game.
So with downloadable music, the bit rate actually plunged instead. Today we’re buying music that uses 1/5th or even 1/8th the old CD standard! (Psychoacoustically intelligent data compression makes it possible.)
So… does this have anything to do with photography?
Camera manufacturers today (even those making enthusiast models) continue to use megapixels as the spec that defines “improvement.” Every year, more bits!
This shows a depressing lack of creativity. Past the point where this offers any real value, it’s just mindlessly chasing a number.
What we need is a serious rethink—to create something so novel and desirable that any talk about pixel count becomes irrelevant.
My feeling is that the game-changer for digital cameras would be a radical improvement in low-light capability. (This parallels the opinions of that recent Gizmodo article.)
We’ve suffered through decades of terrible point & shoots—whose slow zooms and limited sensitivity demanded nuclear-blast electronic flash for every indoor shot.
Flash is blinding, conspicuous, annoying to bystanders, and quite rightly prohibited at most museums and concerts. It drags out the lag time before shots.
It’s also a form of lighting which makes people look like shit.
Consider the fraction of our days we spend indoors, often under marginal illumination. But living rooms, restaurants, etc.—isn’t this where our real lives happen? Wouldn’t it be amazing to record those moments realistically, accurately, but without blinding and ugly flash?
What if you had a camera that could shoot at ISO 1600, cleanly? What about a camera where anti-shake let you trust shooting at 1/15th sec.? Plus a lens of f/1.7 or f/1.4—scooping up four times as much light?
Then, people could take photos freaking anywhere. Without flash. The light of a single candle is enough! (That’s LV 2, if you’re wondering.)
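Skeptical about the candle? Here’s the exposure math, using the standard light-value relation. (A sketch; LV 2, ISO 1600, and the f/1.7 lens are the hypothetical camera described above.)

```python
import math

def shutter_time(lv, f_number, iso):
    """Shutter time (seconds) for a correct exposure at a given light value.

    Uses the standard relation: log2(N^2 / t) = LV + log2(ISO / 100).
    """
    ev = lv + math.log2(iso / 100)
    return f_number**2 / 2**ev

t = shutter_time(lv=2, f_number=1.7, iso=1600)
print(f"1/{1 / t:.0f} sec")   # about 1/22 sec
```

About 1/22 of a second—comfortably within reach of a steady hand plus anti-shake.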
Now, to get a lens that fast, we’d probably need to lose the zoom.
Oh noes! Cameramakers’ second most-flogged spec number is zoom range. Our 12x is better than their 10x! I can hear the howls already: “Who would buy a camera without a zoom?”
Well, how many people use cell phones as their main camera today? Those have no optical zoom.
Some used to ask, “who would pay for a compressed MP3 when you can own the real physical disc?”
People who appreciate convenience. People who want technology to fit into their real lives. Us.
February 12, 2010
January and February are months when the air hangs thick with new-camera introductions.
We’re also approaching this country’s wildest Lost Weekend of photo-equipment marketing, PMA 2010, which starts February 19th.
So, it’s the right moment to look our camera industry straight in the bleary eye, and ask the hard question. Are you on drugs?
Regular readers of this blog know my arguments well: Overdosing cameras with millions of teensy pixels is risky behavior—in fact, irrational and damaging.
Not unlike drug abuse.
But to survey the full breadth of this scourge, I’ve needed to pore over DP Review’s specs listings—noting the pixel density of every model introduced since January, 2008. (There were almost 400 in total.)
No doubt I’ve missed some models somewhere, or copied some numbers wrong. But I’ve made a sincere attempt to find out: Which brand has the worst megapixel-monkey on its back?
Here’s how it works:
- Camera models of 35 Mp/sq. cm or more (but less than 40) earn one crack pipe
- Models having 40 Mp/sq. cm or above (but below 45) get two crack pipes
- Any model with 45 Mp/sq. cm or “higher” is awarded the unprecedented three crack pipes.
But a camera maker can redeem themselves, somewhat. All I want to see is evidence they’re entering rehab and doing community service.
- Any model with a pixel density of 5 to 15 Mp/sq. cm subtracts one crack pipe
- A camera having 2.5 to 5 Mp/sq. cm knocks off two pipes
- A model of less than 2.5 Mp/sq. cm expunges three whole pipes.
[Note: Currently, the last category only includes Nikon’s high-end D700 and D3s. The Micro Four Thirds models included are all a whisker over 5 Mp/sq. cm.]
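For the record, here’s the scoring rule as a literal function—handy if you want to tally up a brand yourself. (The sample lineup at the end is hypothetical.)

```python
def crack_pipes(mp_per_sq_cm):
    """Score one camera model by the pixel-density brackets listed above."""
    d = mp_per_sq_cm
    if d >= 45:
        return 3    # unprecedented three pipes
    if d >= 40:
        return 2
    if d >= 35:
        return 1
    if d < 2.5:
        return -3   # rehab: expunges three whole pipes
    if d < 5:
        return -2
    if d <= 15:
        return -1
    return 0        # 15-35 Mp/sq. cm: neither crime nor virtue

# A brand's total is just the sum over its models (hypothetical lineup):
models = [43, 36, 12, 4.8]
print(sum(crack_pipes(m) for m in models))   # 2 + 1 - 1 - 2 = 0
```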
First, we must single out Sigma—Boy Scout among camera makers.
Better known for their lenses, they have a small lineup of cameras using the Foveon sensor, which is 20.7 x 13.8mm.
This makes them the only current camera manufacturer to be 100% CRACK FREE. We may find their products a bit geeky and lacking in social graces, but at least they’re leading the clean life.
But for the others, it’s a grim tragedy. In reverse order of crackheadedness, here is the ranking:
- Ricoh: 2 crack pipes
- Pentax: 12 crack pipes
- Tied, with 24 crack pipes each: Sony and Kodak
I must interrupt here to mention that Sony’s crack score should have been 20 points higher—except that, like an agitated street person muttering “I’m getting my life together!”, Sony somehow introduced eleven different DSLR models in the past two years.
But I’m going to let that slide. Sony has at least admitted a problem. Their new (and unproven) detox plan involves a medication named “Exmor R,” and a risky procedure called “back illumination.”
Sadly, we all know how fragile recovery can be.
- Okay, back to the list. Nikon: 25 crack pipes
- Fujifilm: 31
- Casio: 34
- Canon: 37
(Canon does earn a special “we’re getting help” mention—for tapering off their S90 and G11 models to a slightly lower dose of 10 mg. Er… Mp.)
- Samsung: 39
Samsung! Snap out of it! There’s still time to go home to your family, bringing more NX-mount cameras. I am speaking to you as a concerned friend.
And finally—we get to the two saddest cases in the whole megapixel ward.
Like many addicts, they always seemed able to hold it together in public. But the numbers don’t lie.
- Yes, Olympus: 54 crack pipes
…and perhaps most shocking,
- Panasonic: 67 crack pipes
Tell me it’s not true. The two leaders of Micro Four Thirds? The upstanding citizens who gathered us all in the church basement, to spread the good news about large chips in compact cameras—living a lie?
It’s tragic. One day, you’re a respected member of your industry. Then suddenly, you’re passed out in a seedy motel, wearing nothing but a frayed terry-cloth robe, surrounded by crumpled marketing plans.
Camera makers: There is still time to clean up, and save yourselves.
February 9, 2010
David Pogue has long been a skeptic of megapixel inflation. For example, he once ran a demonstration showing that random viewers couldn’t see much difference in a row of enormous, 16 x 24″ prints, even when the pixel counts varied wildly.
But Pogue made an odd aside last week, at the conclusion of his compact-camera buyer’s guide:
“As the ridiculous megapixel race winds down at last, …”
…a comment which left me scratching my head in confusion.
Perhaps he’s been busy—avalanched under press releases for all those new tablet e-readers. Or maybe he’s aggravated that the megapixel race didn’t stop at 7 Mp, as he hoped in 2006?
Believe me, I understand the frustration; the desire to throw up your hands, declare victory, and retreat.
But in reality, the megapixel war still rages—most obviously among point & shoot cameras. (And it’s the buyers of these mass-market models who are most likely to take advice from newspaper articles, rather than from some specialist geek website.)
Now, Pogue begins his compact-model roundup by noting some limitations inherent to all small cameras: shutter lag, grain, and blown highlights. But he hasn’t much followed his own oft-stated advice: Choose a camera based on its sensor size, not pixel count.
Seven of Pogue’s nine selections have pixels smaller than 1.54 microns. (The Nikon’s are a ludicrous 1.43 microns.) His Panasonic pick does a smidge better, at 1.56 µm.
But compare these to the 2.0 microns of the (still fairly compact) Canon S90—each of its pixels can collect about 70% more light.
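That 70% figure is just the ratio of pixel areas. A one-liner, if you want to compare other cameras:

```python
def extra_light(pitch_a_um, pitch_b_um):
    """Percent more light-gathering area pixel A has than pixel B."""
    return ((pitch_a_um / pitch_b_um) ** 2 - 1) * 100

print(f"{extra_light(2.0, 1.54):.0f}%")   # S90 vs. a 1.54-micron compact: ~69%
```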
His one choice I might grudgingly accept is the 10 Mp Fujifilm F70EXR. Besides having 1.77 micron pixels, this model offers a special low-light mode. Ironically, it works by pairing up pixels, turning it into a 5 megapixel camera! Hey, it’s a Pyrrhic victory, but I’ll take it.
But Pogue’s other picks simply have pixels that are too small, by any reasonable criterion.
I do admit that anyone forced to buy a compact digicam today—let’s say your old one just died, and you’re leaving on a trip tomorrow—faces very limited choices.
If need be, you might hunt for a model using one of the new generation of 10 Mp back-illuminated CMOS sensors. For example, Sony’s “Exmor R” chip (versus the regular, non-R kind) works some special tricks to wring the most out of its 1.7 micron pixels.
This is actually rather worrying. Aren’t “enthusiast” photographers supposed to know better? That smaller pixels compromise other aspects of performance, like dynamic range and noise?
Consider Canon’s new Rebel T2i: stuffing 18 million pixels into its 22.3 x 14.9mm sensor area makes each pixel just 4.3 microns wide. This is the same pixel pitch that causes Micro Four Thirds cameras to struggle with noise when pushed up to ISO 800.
Consider the 12 Mp Pentax K-x, praised for its high-ISO performance. It uses 5.5 micron pixels instead. This gives each pixel 63% more light-gathering area.
Also remember that on the T2i’s sensor, each millimeter of sensor width contains 232 pixels.
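Those pitch figures come straight from the published image dimensions—5,184 pixels across for the T2i, 4,288 for the K-x (numbers worth double-checking against the spec sheets):

```python
def pitch_um(sensor_w_mm, pixels_wide):
    """Pixel pitch in microns, from sensor width and image width."""
    return sensor_w_mm / pixels_wide * 1000

t2i = pitch_um(22.3, 5184)   # Rebel T2i
kx = pitch_um(23.6, 4288)    # Pentax K-x

print(f"T2i: {t2i:.1f} um, {1000 / t2i:.0f} pixels per mm")  # 4.3 um, 232 per mm
print(f"K-x area advantage: {((kx / t2i)**2 - 1) * 100:.0f}%")
```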
But it is very rare for a real-world lens to resolve detail at that scale with reasonable contrast. If one can do so, it will only be at a single, optimum, middle f/stop. That’s not especially practical.
(Aberrations limit sharpness at wide f/stops; diffraction creates blur at smaller ones—in APS-C cameras, typically f/8 or smaller. For a more technical discussion, start here.)
I wish we could say that megapixel marketing madness had finally ended.
But I’m not seeing any evidence this is true.
February 8, 2010
Why jam extra megapixels into a compact camera, if its lens can’t resolve enough detail to use them?
Sampling a fuzzy image with an ever more finely-spaced pixel grid eventually stops adding information. After that, it merely balloons file sizes needlessly.
So it’s useful to check whether all of a camera’s pixels are capturing something real. Or do they simply hit a wall of lens aberrations, diffraction, and sensor noise?
I’ve had a chance to take some sample shots with the 8-megapixel Nikon Coolpix P60, using the resolution test target I posted last week. (Open a tab to remind yourself how the target is supposed to look.)
The P60 is assembled in China—perhaps even in a factory that doesn’t say “Nikon” over the door. Nonetheless, Nikon’s lens designers have an enviable reputation. And using a 9-element, 7-group design, its lens aberrations ought to be reasonably well controlled. So how well did the little Coolpix do?
As with my earlier post, I set up the target so that its squares with 40 divisions per inch match the pixel pitch of the sensor. At that magnification, ideally the camera should form an image of the 40-line target as a row of black pixels, then a row of white pixels, then black, etc.
The P60 shoots images that are 3,264 pixels wide. Dividing this by 40 tells us the subject field needs to be 81.6 inches wide overall—6 feet 9-5/8 inches. Two reference marks at the proper spacing (black electrical tape on a sheet of plywood) helped me frame each shot with the right magnification.
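The framing math, for anyone repeating the test with a different camera (just divide the image width in pixels by the target’s line frequency):

```python
image_px_wide = 3264          # P60 image width in pixels
target_lines_per_inch = 40    # the finest grid being matched to pixel pitch

field_in = image_px_wide / target_lines_per_inch
feet, inches = divmod(field_in, 12)
print(f"{field_in:.1f} in = {feet:.0f} ft {inches:.1f} in")   # 81.6 in = 6 ft 9.6 in
```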
Here’s a full-rez sample of what one complete test frame looks like. I turned off as many automatic settings as possible, to improve consistency (see notes* at end).
We’ll start with the best-case scenario: The lens is set at its sharpest focal length (at the wide end of the zoom range) and the target is in the center of the frame. The ISO is 80 (its lowest setting), for minimum noise. The aperture is wide open, for lowest diffraction:
Yes, this is the sharpest image I got in my tests.
While it’s startling to see the rainbow patterning in the 40-line sample, this is actually the “good” news. It means that enough resolution is being focused on the sensor for the test pattern to completely confuse the demosaicing algorithm.
We also see vertical and horizontal texture in the 50-line squares; but I believe this is “false texture” (aliasing), rather than true resolution.
(And please remember that most real-world subjects lack the kind of repeating patterns which make demosaicing totally freak out like this.)
The sharpness is not quite as good at longer focal lengths. Zooming to 14.3mm (an 81mm-equivalent view) and backing away to maintain field size, things look like this:
The 50- and 40-line samples have lost most of their detail; also, the 30-line sample has begun to look a bit rougher. Note that the hairline border around the number boxes is virtually gone here—unlike the first shot which showed a hint of it.
We can also look at what happens towards the edges of the frame (where lens aberrations are generally not as well controlled). At a longer zoom setting of 23.3mm, a target at the photo’s right edge looked like this:
Well, there’s some rather troubling green fringing here. And even the 10-line sample has lost contrast noticeably.
But the other thing to notice is how soft the vertical 30-line square has gotten. It’s hard to avoid the conclusion that 8 megapixels is plenty at this point; more finely-spaced pixels would not capture any additional detail.
Now, traditionally photographers have enjoyed the creative control of trading off shutter speed against aperture; e.g. using longer exposures at smaller f/stops, to yield a deeper zone of sharp focus.
And the P60 is theoretically aimed at the enthusiast end of the point & shoot market—folks who would appreciate manual controls like this.
But, in fact, its aperture is only “sort of” adjustable.
Nikon’s manual notes (somewhat cryptically),
- Aperture: Electronically-controlled preset aperture and ND filter (–0.9 AV) selections
- Range: 2 steps (f/3.6 and f/8.5 [W])
What happens when you “stop down” this lens is that an arm swings into place with a smaller hole in it. And after inspecting this with a magnifier, the hole does appear to be covered by a rectangle of neutral-density filter material.
Combined, the filter and the hole cut out about 2.4 f/stops worth of light. But diameter-wise, the aperture is seemingly just f/6.0 or so—not the f/8.5 stated (at the zoom wide end).
Why on earth did Nikon do this? Because truly stopping down the lens would increase diffraction, that’s why. (And given compact cameras’ teeny focal lengths, you rarely need more depth of field.)
Despite this throttled aperture range, we can still see diffraction having a blurring effect:
First, notice the overall drop in contrast. The 50- and 40-line samples are completely featureless. And the 30-line sample has slipped past the limit of resolution—you can no longer count all of the lines.
At this zoom setting, the (physical) aperture might measure f/7.0; this means an Airy disk more than 9 microns wide. With the P60’s sensor size, those blur disks spill across many pixels.
While sharpening by the camera’s processor can accentuate the bright peak at the center of the Airy disk, it can’t pull back detail that never existed. So, if we want the ability to close down the lens even by two stops, then a sensor with larger pixels, not smaller ones, is needed.
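The Airy-disk figure comes from the standard formula: diameter ≈ 2.44 × wavelength × f-number. A sketch—note that the ~1.8-micron pixel pitch is my own estimate for the P60, not a published spec:

```python
def airy_diameter_um(f_number, wavelength_um=0.55):
    """Airy disk diameter (to the first dark ring); green light by default."""
    return 2.44 * wavelength_um * f_number

d = airy_diameter_um(7.0)               # the estimated physical aperture above
print(f"Airy disk: {d:.1f} microns")    # ~9.4
print(f"spans {d / 1.8:.1f} pixels")    # assuming ~1.8-micron pixels (my estimate)
```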
Note that we’ve been looking exclusively at ISO 80 here—the camera’s lowest sensitivity setting. But that’s not very realistic, considering how people actually use their cameras.
With a shirt-pocket compact, we would rarely feel like lugging around a tripod! So under anything but bright daylight, we’ll often need to use a higher ISO setting.
Fifteen years ago, films of ISO 400 were the most commonly purchased speed. So how does ISO 400 look here?
The 40-line sample does show some color tint from demosaicing; but the noise (and noise-reduction processing) are severe here. Neither the 40- nor the 50-line sample gives any hint which direction the lines run.
The 30-line squares have once again passed the point where lines cannot be resolved completely. And there’s no sign of the hairlines bordering the number boxes.
At this level of resolution, we would hardly lose any detail if we substituted a 5 megapixel sensor of the same size. Plus in that case, each pixel would have 60% more light-gathering area—helping tame the noise.
In conclusion, the P60’s lens somewhat out-resolves the sensor under the most favorable circumstances. This is seen in the form of colored, “gritty” demosaicing artifacts.
But it doesn’t take long before real-world complications undercut the sensor’s inherent resolving power. And while we’ve treated aberrations, diffraction, and noise as separate, in practice several of these handicaps often come together in the same photograph (along with other factors such as camera shake).
This test is not definitive; it merely represents the performance of one very average compact camera. However, if we are seeing such flaws at “only” 8 megapixels, what sense does it make to drive up pixel counts even higher—to today’s 10, 12 or 14 Mp?
You can download a PDF of the target here. If you own a compact camera, I encourage you to try this test for yourself.
* Test setup: The camera was mounted on a tripod, with VR turned off. Unless noted otherwise, enlarged details are from the center of the frame, with the camera set at ISO 80 (best-case conditions).
I set ISO, aperture, and shutter speed manually. The white balance was set to “cloudy,” and the contrast setting was turned up to +1. I left sharpening and saturation at their mid setting, 0. A 2-second self-timer allowed vibrations to die out after I pressed the shutter release.
Autofocus used the central spot only; this included several of the target’s black squares. Two shots were taken at each setting, allowing the camera to re-focus each time (I never noticed any inconsistency between pairs of photos).
The 200% samples shown here were upsized using Photoshop’s “nearest neighbor” method, to avoid any additional artifacts. Any sharpening halos are from the camera’s own processing.
January 29, 2010
My Petavoxel post about the “Great Megapixel Swindle,” from January 19th, has now been viewed more than 60,000 times. (Which I find kind of scary.)
And there were lots of noisy comments elsewhere, complaining that I had stacked the deck unfairly. I made a teensy crop from a cheap camera—and no surprise, it looked bad!
Fair enough. So let’s start over, stacking the deck as much as possible in favor of the photo looking good. Please look at a couple of new images:
(I am grateful to both gentlemen for posting these images under Creative Commons licensing. Paulo was also kind enough to email me his straight-from-camera JPEG; it’s the source of the crops below.)
I chose these because the camera used to take both is the Panasonic Lumix TZ5. This was among the final high-end point & shoot models to receive a full profile from DP Review; so you can get an independent opinion of its quality.
A 9 megapixel model from 2008, the TZ5 has now been replaced by a zoomier 12 Mp version. But the earlier TZ5 remains the second most popular Lumix on Flickr. Its pixels measure about 1.7 microns wide—that’s actually on the large side, compared to today’s typical compacts.
DP Review were rather enthusiastic about this camera, praising the lens in particular. It’s a Leica-branded 10x zoom, with 4 aspheric surfaces and 11 elements, including one of ED glass. This is some rather high-end stuff.
I’d be the first to admit our sample photos look crisp and vibrant at any normal viewing size. And both Paulo and Simon tell me they’re quite pleased with their TZ5s.
Note that these well-lit shots show off the TZ5 under the best possible circumstances. The sensitivity is at ISO 100 (the lowest setting), which gives the least noise. The shutter speeds are easily high enough to freeze any hand shake at the focal lengths used. And the apertures are at their widest, which minimizes diffraction.
Even with all these advantages, we begin to see some artifacts that any compact-sensor camera is prone to.
Within the same photo, a white surface in sunlight can be 500 times brighter than a dark surface in shade. That’s about 9 f/stops of difference.
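(That stops figure is just a base-2 logarithm, since each f/stop represents a doubling of the light:)

```python
import math

contrast_ratio = 500                   # sunlit white vs. dark surface in shade
stops = math.log2(contrast_ratio)
print(f"{stops:.1f} f/stops")          # ~9.0
```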
And smaller pixels inherently have lower dynamic range. So even in well-exposed photos like these two, the highlight areas can blow out to a textureless white:
Examining a highlight area in Photoshop, we see that the brightness levels in the selection have completely “hit the ceiling,” so that no detail can be recovered:
So small-sensor cameras will always struggle with high-contrast scenes (unless some tricky HDR processing is used). Yes, shooting in RAW could rescue a bit more from the highlights; but I don’t know of any camera under $350 which offers this option.
What about graininess in the image? (Digital-camera fans use the word “noise,” since we’re discussing electronic signals.)
Even under uniform illumination, photon counts can vary randomly between neighboring pixels. The larger the pixels, the more this averages out. Small pixels inherently have more random brightness variations—that is, higher noise.
So any compact camera must devote some fraction of its processor power to noise reduction, attempting to smooth out those speckles.
Selecting higher ISO sensitivity settings means the pixel noise is amplified even more; and so, noise-reduction must get even more aggressive. Ultimately, this causes a loss of detail, as DP Review’s TZ5 test clearly showed (scroll down to the hair sample, especially).
But our ISO 100 samples should be best-case scenario for noise. Yet even so, the NR processing is leaving some “false detail” in areas that should look smooth:
The effect is stronger in the darker parts of an image. And as you raise ISO, it will just become more obvious.
Might these simply be JPEG artifacts?
These samples are not highly compressed files—the JPEGs are about 4 MB each. Compression with JPEG can add visible “sparklies” around high-contrast edges; but flat areas of color should compress very cleanly. I say this texture comes from noise (or, noise reduction).
A camera of 9 megapixels would seem to promise rather high resolution. But this camera’s sensor is about 1/3rd the area of an aspirin tablet. So is there truly any detail for all those pixels to resolve?
Let’s look at some 100% crops near the center of the photos, where lens performance ought to be at its best.
The high-contrast edges here show a lot of “snap.” But the camera’s own sharpening algorithm may take some of the credit for that.
Looking at the lower-contrast details in the shadows (like the hanging tags), the impression is noticeably softer. I feel that the optical resolution is starting to fall apart before we reach individual pixels.
The TZ5 lens gains a whole f/stop when zoomed to the wide end. At this setting, detail is a bit softer still—notice the grille in the archway. At a 4.7mm focal length, depth of field is enormous; so I don’t believe we’re seeing focus error here.
Note that the Airy disk at f/3.3 is about 4.4 microns (for green light). As I discussed before, diffraction means that no lens, no matter how flawless, can focus a pinpoint of light any smaller than this.
But a 4.4 micron-wide blur covers almost four of the TZ5’s pixels. And at any smaller aperture, the Airy disk expands even further. Thus, I remain skeptical that packing in pixels more densely than in the TZ5 would extract more true, optical detail over what we can see here.
Now, let me be clear.
Paulo tells me, “I love my TZ5. It’s my everyday use camera (being much lighter and more compact than the 500D).”
There’s a need for cameras like this. Among pocketable models, it’s probably among the nicest available.
But even under these best-case circumstances, we start to see some limits imposed by its 1.7 micron pixels. And since the TZ5’s release in 2008, pixels in point & shoots have only shrunken more.
It’s not an accident that DP Review named Panasonic’s own LX3 the winner of their “enthusiast” roundup—a small camera, but with a larger sensor. Its pixel density of 24 Mp per sq. cm is half that of today’s worst-case models. The result is that the LX3’s “high ISO performance puts most competitors to shame.”
I’m not against small cameras. I’m not even against high megapixels, if you have a genuine need for them (assuming the sensor is large enough).
But today, pixels have shrunk too far. It’s time to stop.
January 26, 2010
Even as I try to warn people away from high-megapixel point & shoots, some commenters have despaired that my advice isn’t very practical: Virtually every model sold today is 10 Mp or higher.
There’s a reason the last “sensible” point & shoot, the beloved Fujifilm F30/F31, actually went up in price after being discontinued. Even three years later, a used one can change hands for over $200—and this despite the fact that, aside from their superior 6 Mp sensors, the F30/F31 models were totally ordinary.
But all of us do need small, take-everywhere pocket cameras. Your cell phone cam can only do so much, with its grainy images and lack of controls.
So what specs would a point & shoot need to have, before I could recommend it? Well, here’s some thoughts—or perhaps just a poignant yearning for the impossible.
The recent Canon S90 proves it’s physically possible to put an f/2.0 lens and a 1/1.7″ sensor into a pocket-sized camera. It’s not quite as slim as Canon’s old Elphs. But it’s still smaller than the venerable 1979 Olympus XA, the breakthrough model which first showed the world how fine a shirt-pocket camera could be.
The S90 includes All Mod Cons: A jumbo 461,100-dot LCD; face detection; RAW option; and a movie mode (though not in HD, weirdly).
The so-called 1/1.7″ sensor size is actually 7.6 x 5.7 mm. The S90’s pixel density of 23 Mp/sq. cm means each pixel is about 2.1 microns wide.
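These figures are easy to verify from the sensor dimensions alone. Here's a quick Python sketch (my own arithmetic, assuming square pixels that tile the full sensor area):

```python
# Derive the pixel figures quoted above from the sensor dimensions.
# Values are approximations; real sensors have some non-imaging area.

SENSOR_W_MM = 7.6   # 1/1.7" sensor width
SENSOR_H_MM = 5.7   # 1/1.7" sensor height
MEGAPIXELS = 10.0   # Canon S90

# Pixel density in Mp per square centimeter.
area_cm2 = (SENSOR_W_MM / 10) * (SENSOR_H_MM / 10)
density_mp_per_cm2 = MEGAPIXELS / area_cm2

# Square-pixel pitch in microns, assuming full sensor coverage.
pixel_pitch_um = ((SENSOR_W_MM * SENSOR_H_MM) / (MEGAPIXELS * 1e6)) ** 0.5 * 1000

print(round(density_mp_per_cm2))   # ~23 Mp per sq. cm
print(round(pixel_pitch_um, 1))    # ~2.1 microns
```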
Canon gets credit for scaling back the megapixel count in some recent cameras, compared to earlier models (the S90 is 10 Mp). But to my thinking, diffraction and noise still make 2.1 micron pixels pretty borderline. Is that really a problem for image quality, though?
Well, DPReview hasn’t tested the S90 yet. However, its big sister, the Canon G11, recently got the full DP Review workup. And since both cameras apparently share the same sensor and Digic 4 processor chip, the results should be similar.
Unfortunately when you look at the tests at different ISOs (scroll down to the green feathers), you see that by ISO 800, the noise-suppressing algorithm is also blurring away lots of fine detail. And Canon is actually overstating its “800” speed slightly—in reality it’s closer to 640.
For our daily snapshots, we rarely need a bazillion megapixels for any real-world use. You can make crisp 8×10″ prints, or get a nice magnified view on your computer, with only 6 Mp. (And even then, you’d still be ahead of James Cameron!)
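A quick back-of-envelope check of the print claim (the 3000 x 2000 frame is just an illustrative 6 Mp shape, not any particular camera's):

```python
# Print resolution of a 6 Mp image on an 8 x 10 inch sheet.
# Anything around 250-300 dpi is generally considered a crisp print.

W_PX, H_PX = 3000, 2000  # an illustrative 6 Mp frame

dpi_long = W_PX / 10   # dots per inch along the 10" edge
dpi_short = H_PX / 8   # dots per inch along the 8" edge

print(dpi_long, dpi_short)  # 300.0 250.0
```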
Let’s say you took the chip dimensions of the S90, but held it down to 2828 x 2121 pixels (6 Mp total). Each pixel would be 2.7 microns wide, with about 67% more area than those in the S90. That’s a significant difference. High ISOs wouldn’t need such aggressive anti-noise smoothing then.
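Since the chip area stays fixed, the per-pixel area gain follows directly from the megapixel ratio. A small Python sketch (assuming square pixels covering the same 7.6 x 5.7 mm chip):

```python
# Thought experiment: same 7.6 x 5.7 mm chip, but 6 Mp instead of 10 Mp.

SENSOR_W_MM, SENSOR_H_MM = 7.6, 5.7

def pitch_um(megapixels):
    """Square-pixel pitch in microns, assuming full sensor coverage."""
    return ((SENSOR_W_MM * SENSOR_H_MM) / (megapixels * 1e6)) ** 0.5 * 1000

s90_pitch = pitch_um(10.0)    # ~2.1 um
six_mp_pitch = pitch_um(6.0)  # ~2.7 um

# Per-pixel area gain; this equals the megapixel ratio 10/6 exactly.
area_gain = (six_mp_pitch / s90_pitch) ** 2 - 1
print(f"{area_gain:.0%} more area per pixel")  # 67% more area per pixel
```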
But-but-but… Fewer pixels! Wouldn’t you lose detail doing that? No—at least not at any smaller lens opening than f/4.5. By that point, diffraction blur is much larger than 2.1-micron pixels.
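To put a rough number on that diffraction claim: the Airy-disk diameter is about 2.44 × wavelength × f-number. Here's a sketch assuming 550 nm green light (the exact point where blur becomes objectionable is a judgment call):

```python
# Diffraction blur-spot (Airy disk) diameter at a few apertures,
# for 550 nm green light. Compare against a 2.1 micron pixel.

WAVELENGTH_UM = 0.55  # mid-spectrum green

def airy_diameter_um(f_number):
    """Approximate Airy-disk diameter in microns."""
    return 2.44 * WAVELENGTH_UM * f_number

for f in (2.0, 2.8, 4.5):
    print(f"f/{f}: {airy_diameter_um(f):.1f} um blur spot")
# By f/4.5 the blur spot is ~6 um, roughly three 2.1-micron pixels wide.
```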
Could we also hope that dropping to 6 Mp would also knock some bucks off the S90’s $400 price? That’s awfully steep, considering how inexpensive an entry-level DSLR is today.
Okay, here’s my next crackpot request: No zoom lens.
I realize from a marketing point of view, this sounds insane. Isn’t a 4x zoom better than a 3x zoom, and a 12x zoom best of all?
The problem with a zoom is, as jack-of-all-trades, it is master of none. A zoom is inevitably larger than a single-focal-length lens, and not always as sharp.
But the biggest problem is that zooms cripple the maximum aperture. For many typical ones, f/3.5 is the brightest f/stop. (Yes, they might be a little better at the widest zoom setting. But when a lens is labeled something like f=8–24, 1:2.8–5.9, the latter numbers tell how the widest f/stop dims as you zoom in.)
You can make a single-focal-length lens (often known as a “prime”) much faster. Like two stops brighter. I think a smart marketer might get some mileage out of the promise, “gather four times as much light!”
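The "four times as much light" figure is just the inverse-square relation between f-numbers. A quick check in Python (f/1.7 and f/3.5 are my illustrative prime and zoom apertures, not any specific lens):

```python
import math

# Light gathered scales as 1/N^2, so the advantage of a fast prime
# over a typical zoom's maximum aperture is easy to quantify.

def stops_faster(n_fast, n_slow):
    """Full stops gained going from f/n_slow to f/n_fast."""
    return math.log2((n_slow / n_fast) ** 2)

light_ratio = (3.5 / 1.7) ** 2
print(round(light_ratio, 1))               # ~4.2x the light
print(round(stops_faster(1.7, 3.5), 1))    # ~2.1 stops brighter
```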
A small image format actually makes it easier to design a fast lens, compared to, say, one for a DSLR. If you look into closed-circuit television cameras, you’ll discover oodles of f/1.4 or even f/1.2 lenses that cost less than 90 bucks.
If you compare how much sharper a point & shoot image looks at ISO 200 versus 800, it raises an interesting thought. Your extra two stops of lens brightness might let you crop the image quite a bit harder, while still getting adequate detail. In effect, you could “zoom” after the fact, at home on the computer.
A wide maximum f/stop would also let you throw backgrounds a bit out of focus, if desired—a tool creative photographers appreciate. (Admittedly, any DSLR will be better at this.)
And lenses starting from f/1.7 would permit a greater range of possible apertures, before you hit the diffraction limit (about f/5.6 or smaller, with 2.7 µm pixels) and begin to lose sharpness.
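In stop terms, the difference in usable aperture range is substantial. Another quick sketch (taking f/5.6 as the assumed diffraction limit, per the estimate above, and f/1.7 vs. f/3.5 as illustrative maximum apertures):

```python
import math

# How many full stops of aperture range lie between the widest
# f-stop and an assumed f/5.6 diffraction limit?

def stops_between(n_wide, n_limit):
    """Full stops from f/n_wide down to f/n_limit."""
    return math.log2((n_limit / n_wide) ** 2)

print(round(stops_between(1.7, 5.6), 1))  # fast prime: ~3.4 stops of range
print(round(stops_between(3.5, 5.6), 1))  # typical zoom: ~1.4 stops of range
```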
But I admit, the no-zoom option is probably something grandma wouldn’t go for.
So that can be the special “Petavoxel” edition. And believe me, I’d pay for it.