February 11, 2010
During the past decade, the world of digital cameras has obviously gone through numerous changes.
Now, the aspect I’ve written about most here is the endless (and problematic) escalation of pixel counts. But we should remember that many other facets of camera evolution have been going on in parallel.
Today we can only shake our heads at the postage-stamp LCD screens which were once the norm on digital cameras. And autofocus technology continues to improve (although cameras can still frustrate us, making us miss shots of moving subjects).
Moore’s Law has raced onwards. The result is that the proprietary image-processing chips used in each camera get increased “horsepower” with each new generation.
Besides keeping up with the growing image-file sizes, this allows more elaborate sharpening and noise-reduction methods to be applied to each photo. (Whether this noise suppression creates weird and unnatural artifacts is still a question.)
And there are other changes which have helped offset the impact of megapixel escalation. Chip design has improved, reducing the surface area lost to wiring connections. Sensors today are also usually topped with a grid of microlenses, helping funnel most of the light striking the chip onto the active photodetectors.
At the beginning of the digital-camera revolution, CMOS sensors were a bit less developed than CCDs (which had been used in scientific applications for some time). But eventually the new challenges of CMOS technology got ironed out. Today, the DSLRs which lead their market segments all use CMOS sensors.
Not every camera maker is on the same footing, technologically. Companies control different patent portfolios. And many lack in-house chip fabs, which help move innovations to market faster.
So within a given class of cameras (e.g. a particular pixel size), you can still discover performance differences.
But the sum total of all this technology change has been that the better-designed cameras have been able to maintain and even improve image quality, even as pixel pitch continued to shrink.
Can technology keep saving us? Will progress continue forever?
I dispute that it’s even desirable to decrease pixel size further still. But one question is whether there is still some headroom left in sensor technology—allowing sensitivity per unit area to keep increasing. That could compensate for the shrinking area of each pixel.
Well, there are some important things to remember.
The first is that every pixel in a camera sensor is covered by a filter in one of three colors (the Bayer array). And these exist for the purpose of blocking roughly two-thirds of the visible light spectrum.
(There was a Kodak proposal from 2007 for sensors including some “clear” pixels, which would avoid this. But that creates other problems, and I’m not aware of any shipping product based on it.)
The other issue is that there’s a hard theoretical ceiling on how sensitive any photoelectric element can be, no matter its technology. How closely a chip approaches that limit is called its quantum efficiency. And real sensors today come surprisingly close to that theoretically perfect 100%.
Considering monochrome scientific chips (i.e., no Bayer array), the best conventional microlensed models can average roughly 60% QE in the visible spectrum.
Astrophotographers worry quite a lot about QE. And a well-known one based in France, Christian Buil, actually tested and graphed the QE of various Canon DSLRs. Note, for example, that the Canon 5D did improve its green-channel sensitivity quite a bit in the Mk II version.
In Buil’s bottom graphs notice how much the Bayer color filters limit Canon’s QE compared to one top-of-the-line Kodak monochrome sensor. (The KAF-3200ME has microlenses and 6.8 micron pixels.)
So, seemingly, one avenue for improvement could be the color filters used in the Bayer array.
But tri-color filters are a mature technology, having had numerous uses in photography in the many decades before digital. To ensure accurate color response, you must design a dye which attenuates little of the desired band, but blocks very effectively outside it. Dyes must also remain colorfast as they age or are exposed to light. Basically, it’s a chemistry problem, and a surprisingly difficult one.
Considering all this, the ability to reach 35% QE (on a pixel basis) in a color-filtered chip is a pretty decent showing already.
Now for years, scientific imagers have used a special trick of back-illuminating a CCD. This can push QE up to 90% in the photographic spectrum (roughly 400-700 nm) on an unfiltered chip.
And suddenly, camera makers have “invented” the same idea for photography applications. Sony is talking about tiny, point & shoot pixels here, which lose significant area to their opaque structures. So a “nearly twofold” efficiency boost might be feasible in that case.
But we saw that when pixels are larger, back illumination can only improve QE from about 60% to 90% (before filtering).
And it’s much more expensive to fabricate a chip right-side up, flip it over, and carefully thin away the substrate to the exact dimension required. Yields are lower; so when you try to scale it up to larger chips, costs are high. It’s not clear whether this will really be an economical option for DSLR-sized sensors.
But wouldn’t it be a massive breakthrough to add 50% more light-gathering ability?
Actually, it helps less than you might think. Remember, that’s only about half an f/stop. You get a bigger improvement just from the area increase when switching from a Four Thirds sensor to an APS-C model.
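For anyone who wants to check that arithmetic, here’s a quick sketch in Python (the sensor dimensions are typical published figures, not measurements of any particular camera):

```python
from math import log2

# Back-illumination can lift unfiltered QE from roughly 60% to 90%:
qe_gain_stops = log2(90 / 60)   # ~0.58, i.e. about half an f/stop

# Compare the simple area jump from Four Thirds (17.3 x 13.0 mm)
# to a typical APS-C sensor (23.6 x 15.7 mm):
area_gain_stops = log2((23.6 * 15.7) / (17.3 * 13.0))   # ~0.72 stop
```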
So back-illumination is an improvement worth pursuing—especially for cameras using the teeniest chips, which are the most handicapped by undersized pixels.
But beyond that, we start to hit serious limits.
Pure sensor size remains the most important factor in digital-photo image quality. And no geeky wizardry is likely to change that soon.
February 9, 2010
For example, he once ran a demonstration showing that random viewers couldn’t see much difference in a row of enormous, 16 x 24″ prints, even when the pixel counts varied wildly.
But Pogue made an odd aside last week, at the conclusion of his compact-camera buyer’s guide:
“As the ridiculous megapixel race winds down at last, …”
…a comment which left me scratching my head in confusion.
Perhaps he’s been busy—avalanched under press releases for all those new tablet e-readers. Or maybe he’s aggravated that the megapixel race didn’t stop at 7 Mp, as he hoped in 2006?
Believe me, I understand the frustration; the desire to throw up your hands, declare victory, and retreat.
But in reality, the megapixel war still rages—most obviously among point & shoot cameras. (And it’s the buyers of these mass-market models who are most likely to take advice from newspaper articles, rather than from some specialist geek website.)
Now, Pogue begins his compact-model roundup by noting some limitations inherent to all small cameras: shutter lag, grain, and blown highlights. But he hasn’t much followed his own oft-stated advice: Choose a camera based on its sensor size, not pixel count.
Seven of Pogue’s nine selections have pixels smaller than 1.54 microns. (The Nikon’s are a ludicrous 1.43 microns.) His Panasonic pick does a smidge better, at 1.56 µm.
But compare these to the 2.0 microns of the (still fairly compact) Canon S90—each of its pixels can collect about 70% more light.
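If you’d like to check these light-gathering comparisons yourself, the arithmetic is just a ratio of pixel areas; a quick sketch in Python:

```python
def light_ratio(pitch_a_um, pitch_b_um):
    """How much more light a pixel of pitch A gathers than one of pitch B
    (assuming light gathered scales with pixel area)."""
    return (pitch_a_um / pitch_b_um) ** 2

print(light_ratio(2.0, 1.54))   # ~1.69: the S90's pixels collect ~70% more light
print(light_ratio(2.0, 1.43))   # ~1.96 vs. the Nikon's 1.43 micron pixels
```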
His one choice I might grudgingly accept is the 10 Mp Fujifilm F70EXR. Besides having 1.77 micron pixels, this model offers a special low-light mode. Ironically, it works by pairing up pixels, turning it into a 5 megapixel camera! Hey, it’s a Pyrrhic victory, but I’ll take it.
But Pogue’s other picks simply have pixels that are too small, by any reasonable criterion.
I do admit that anyone forced to buy a compact digicam today (let’s say your old one just died, and you’re leaving on a trip tomorrow) faces very limited choices.
If need be, you might hunt for a model using one of the new generation of 10 Mp back-illuminated CMOS sensors. For example, Sony’s “Exmor R” chip (versus the regular, non-R kind) works some special tricks to wring the most out of its 1.7 micron pixels.
This is actually rather worrying. Aren’t “enthusiast” photographers supposed to know better? That smaller pixels compromise other aspects of performance, like dynamic range and noise?
Stuffing the T2i’s 18 million pixels into the same 22.3 x 14.9mm sensor area makes each pixel 4.3 microns wide. This is the same pixel pitch that causes Micro Four Thirds cameras to struggle with noise when pushed up to ISO 800.
Consider the 12 Mp Pentax K-x, praised for its high-ISO performance. It uses 5.5 micron pixels instead. This gives each pixel 63% more light-gathering area.
Also remember that on the T2i’s sensor, each millimeter of sensor width contains 232 pixels.
But it is very rare for a real-world lens to resolve detail at that scale with reasonable contrast. If one can do so, it will only be at a single, optimum, middle f/stop. That’s not especially practical.
(Aberrations limit sharpness at wide f/stops; diffraction creates blur at smaller ones—in APS-C cameras, typically f/8 or smaller. For a more technical discussion, start here.)
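For the record, here’s the pixel-pitch arithmetic behind those figures, sketched in Python (the 5,184-pixel image width is my assumption for an 18 Mp frame at 3:2 proportions):

```python
# Canon T2i figures (5,184 px image width assumed for 18 Mp at 3:2)
sensor_width_mm, image_width_px = 22.3, 5184

pitch_um = sensor_width_mm * 1000 / image_width_px   # ~4.3 microns per pixel
px_per_mm = image_width_px / sensor_width_mm         # ~232 pixels per millimeter

# Pentax K-x comparison: ratio of pixel areas, 5.5 vs. 4.3 micron pitch
kx_advantage = (5.5 / 4.3) ** 2                      # ~1.64, i.e. 63% more area
```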
I wish we could say that megapixel marketing madness had finally ended.
But I’m not seeing any evidence this is true.
February 8, 2010
Why jam extra megapixels into a compact camera, if its lens can’t resolve enough detail to use them?
Sampling a fuzzy image with an ever more finely-spaced pixel grid eventually stops adding information. After that, it merely balloons file sizes needlessly.
So it’s useful to check whether all of a camera’s pixels are capturing something real. Or do they simply hit a wall of lens aberrations, diffraction, and sensor noise?
I’ve had a chance to take some sample shots with the 8-megapixel Nikon Coolpix P60, using the resolution test target I posted last week. (Open a tab to remind yourself how the target is supposed to look.)
The P60 is assembled in China—perhaps even in a factory that doesn’t say “Nikon” over the door. Nonetheless, Nikon’s lens designers have an enviable reputation. And using a 9-element, 7-group design, its lens aberrations ought to be reasonably well controlled. So how well did the little Coolpix do?
As with my earlier post, I set up the target so that its squares with 40 divisions per inch match the pixel pitch of the sensor. At that magnification, ideally the camera should form an image of the 40-line target as a row of black pixels, then a row of white pixels, then black, etc.
The P60 shoots images that are 3,264 pixels wide. Dividing this by 40 tells us the subject field needs to be 81.6 inches wide overall: 6 feet, 9-5/8 inches. Two reference marks at the proper spacing (black electrical tape on a sheet of plywood) helped me frame each shot with the right magnification.
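If you’d like to repeat this test with your own camera, the framing calculation is one line:

```python
def field_width_inches(image_width_px, divisions_per_inch=40):
    """Subject-field width needed so one target division lands on one pixel."""
    return image_width_px / divisions_per_inch

print(field_width_inches(3264))   # 81.6 inches for the 8 Mp Nikon P60
```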
Here’s a full-rez sample of what one complete test frame looks like. I turned off as many automatic settings as possible, to improve consistency (see notes* at end).
We’ll start with the best-case scenario: The lens is set at its sharpest focal length (at the wide end of the zoom range) and the target is in the center of the frame. The ISO is 80 (its lowest setting), for minimum noise. The aperture is wide open, for lowest diffraction:
Yes, this is the sharpest image I got in my tests.
While it’s startling to see the rainbow patterning in the 40-line sample, this is actually the “good” news. It means that enough resolution is being focused on the sensor for the test pattern to completely confuse the demosaicing algorithm.
We also see vertical and horizontal texture in the 50-line squares; but I believe this is “false texture” (aliasing), rather than true resolution.
(And please remember that most real-world subjects lack the kind of repeating patterns which make demosaicing totally freak out like this.)
The sharpness is not quite as good at longer focal lengths. Zooming to 14.3mm (corresponding to 81e) and backing away to maintain field size, things look like this:
The 50- and 40-line samples have lost most of their detail; also, the 30-line sample has begun to look a bit rougher. Note that the hairline border around the number boxes is virtually gone here—unlike the first shot which showed a hint of it.
We can also look at what happens towards the edges of the frame (where lens aberrations are generally not as well controlled). At a longer zoom setting of 23.3mm, a target at the photo’s right edge looked like this:
Well, there’s some rather troubling green fringing here. And even the 10-line sample has lost contrast noticeably.
But the other thing to notice is how soft the vertical 30-line square has gotten. It’s hard to avoid the conclusion that 8 megapixels is plenty at this point; more finely-spaced pixels would not capture any additional detail.
Now, traditionally photographers have enjoyed the creative control of trading off shutter speed against aperture; e.g. using longer exposures at smaller f/stops, to yield a deeper zone of sharp focus.
And the P60 is theoretically aimed at the enthusiast end of the point & shoot market—folks who would appreciate manual controls like this.
But, in fact, its aperture is only “sort of” adjustable.
Nikon’s manual notes (somewhat cryptically),
- Aperture: Electronically-controlled preset aperture and ND filter (–0.9 AV) selections
- Range: 2 steps (f/3.6 and f/8.5 [W])
What happens when you “stop down” this lens is that an arm swings into place with a smaller hole in it. And after inspecting this with a magnifier, the hole does appear to be covered by a rectangle of neutral-density filter material.
Combined, the filter and the hole cut out about 2.4 f/stops worth of light. But diameter-wise, the aperture is seemingly just f/6.0 or so—not the f/8.5 stated (at the zoom wide end).
Why on earth did Nikon do this? Well, it’s because stopping down the lens increases diffraction, that’s why. (And given compact cameras’ teeny focal lengths, you rarely need more depth of field.)
Despite this throttled aperture range, we can still see diffraction having a blurring effect:
First, notice the overall drop in contrast. The 50- and 40-line samples are completely featureless. And the 30-line sample has slipped past the limit of resolution—you can no longer count all of the lines.
At this zoom setting, the (physical) aperture might measure f/7.0; this means an Airy disk more than 9 microns wide. With the P60’s sensor size, those blur disks spill across many pixels.
While sharpening by the camera’s processor can accentuate the bright peak at the center of the Airy disk, it can’t pull back detail that never existed. So, if we want the ability to close down the lens even by two stops, then a sensor with larger pixels, not smaller ones, is needed.
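The diffraction numbers above come from the standard Airy-disk formula; here is a quick check in Python:

```python
def airy_diameter_um(f_number, wavelength_um=0.55):
    """Airy disk diameter (to the first minimum), d = 2.44 * wavelength * N,
    using 0.55 microns for green light."""
    return 2.44 * wavelength_um * f_number

d = airy_diameter_um(7.0)   # ~9.4 microns at the estimated f/7.0
print(d / 1.9)              # the blur spans ~5 of the P60's 1.9 micron pixels
```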
Note that we’ve been looking exclusively at ISO 80 here—the camera’s lowest sensitivity setting. But that’s not very realistic, considering how people actually use their cameras.
With a shirt-pocket compact, we would rarely feel like lugging around a tripod! So under anything but bright daylight, we’ll often need to use a higher ISO setting.
Fifteen years ago, films of ISO 400 were the most commonly purchased speed. So how does ISO 400 look here?
The 40-line sample does show some color tint from demosaicing; but the noise (and noise-reduction processing) are severe here. Neither the 40- nor the 50-line samples give any hint which direction the lines run.
The 30-line squares have once again passed the point where lines cannot be resolved completely. And there’s no sign of the hairlines bordering the number boxes.
At this level of resolution, we would hardly lose any detail if we substituted a 5 megapixel sensor of the same size. Plus in that case, each pixel would have 60% more light-gathering area—helping tame the noise.
In conclusion, the P60’s lens somewhat out-resolves the sensor under the most favorable circumstances. This is seen in the form of colored, “gritty” demosaicing artifacts.
But it doesn’t take long before real-world complications undercut the sensor’s inherent resolving power. And while we’ve treated aberrations, diffraction, and noise as separate, in practice several of these handicaps often come together in the same photograph (along with other factors such as camera shake).
This test is not definitive; it merely represents the performance of one very average compact camera. However, if we are seeing such flaws at “only” 8 megapixels, what sense does it make to drive up pixel counts even higher, to today’s 10, 12 or 14 Mp?
You can download a PDF of the target here. If you own a compact camera, I encourage you to try this test for yourself.
* Test setup: The camera was mounted on a tripod, with VR turned off. Unless noted otherwise, enlarged details are from the center of the frame, with the camera set at ISO 80 (best-case conditions).
I set ISO, aperture, and shutter speed manually. The white balance was set to “cloudy,” and the contrast setting was turned up to +1. I left sharpening and saturation at their mid setting, 0. A 2-second self-timer allowed vibrations to die out after I pressed the shutter release.
Autofocus used the central spot only; this included several of the target’s black squares. Two shots were taken at each setting, allowing the camera to re-focus each time (I never noticed any inconsistency between pairs of photos).
The 200% samples shown here were upsized using Photoshop’s “nearest neighbor” method, to avoid any additional artifacts. Any sharpening halos are from the camera’s own processing.
February 7, 2010
As I noted in an earlier post, camera makers quote sensor sizes in mystifying “fractional inch” designations. They’re much less forthcoming in giving us the actual, active dimensions of the chip.
Is this because they’re embarrassed? Even a throwaway Kodak Fun Saver uses the generous dimensions of 35mm film, while today’s $300 digital compacts might use a chip with only 3% of that area.
The common 1/2.33″ or 1/2.5″ sensors used in current point & shoots measure roughly 6mm across. That’s, you know… not big:
Now, even when you don’t have any “official” specs about the chip used in a camera, it’s usually possible to work out the sensor dimensions indirectly.
All you need is the actual focal length(s) of the camera lens; plus the manufacturer’s stated “35mm equivalents.”
Here’s a camera marked with its true, optical focal lengths. (When the smaller number is under 10mm, you’re seeing true, not “equivalent” focal lengths.)
The first thing we need to know is that “equivalency” is usually based on the diagonal angle of view of the lens. The next point is that (true) focal lengths scale directly in proportion to the dimensions of the image format.
A frame of 35mm film has these dimensions:
Notice that film’s 43.3mm diagonal is a smaller number than the 70mm “equivalent” f.l. that was quoted for the long end of the zoom range. Telephoto focal lengths will always be longer than the image diagonal.
So, the digital sensor’s diagonal must also be smaller than the lens’s true focal length when zoomed in: 10.8mm.
Divide 43.3 by 70 and you get 0.62; multiply 10.8mm by that and you get 6.7mm as the diagonal of the sensor chip.
Likewise: 43.3 divided by 35 = 1.24; multiply 5.4mm by that and you also get 6.7mm for the diagonal.
But wait, that’s not so useful—didn’t we want to know the chip’s width and height?
Well, compact cameras almost always use 4:3 image proportions (the old “television” aspect ratio). And so, conveniently, the sides and diagonal form a classic 3:4:5 right triangle, giving the diagonal an easy-to-remember relationship to the sides.
In other words, the chip is 60% as tall as the diagonal; and it’s 80% as wide.
So for the sensor we’re talking about, a 6.7mm diagonal means it’s about 5.3mm wide and 4.0mm tall. This is what the industry calls a 1/2.7″ chip size.
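The whole derivation condenses into a few lines of Python; both focal-length pairs should (and do) give the same chip:

```python
FILM_DIAG_MM = 43.3   # diagonal of a 24 x 36 mm film frame

def sensor_dims_mm(true_fl_mm, equiv_fl_mm):
    """Estimate a 4:3 chip's size from a true/'35mm equivalent' focal-length pair."""
    diag = true_fl_mm * FILM_DIAG_MM / equiv_fl_mm
    return diag, 0.8 * diag, 0.6 * diag   # diagonal, width, height (3:4:5)

print(sensor_dims_mm(10.8, 70))   # ~ (6.7, 5.3, 4.0) mm, from the telephoto end
print(sensor_dims_mm(5.4, 35))    # the same ~6.7 mm diagonal, from the wide end
```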
And that’s a lot smaller than Lincoln’s head.
February 5, 2010
It’s not the greatest camera in its class, and it’s not the worst. For a pocketable 8 megapixel model, the P60 is probably about average.
Its pixels are about 1.9 microns across; today, 1.5 or 1.4 microns has become the norm. In other words, the P60’s pixels have 60-85% more light-gathering area than the ones used in a typical 2010 compact.
So let’s take a quick look at how well it handles noise.
We’ll start with an image where the camera is set to ISO 80, the lowest available. And I’ve zoomed to 135e, for a close view of my (charming) model:
This is the high-quality version, to show us what the textures and details ought to look like. (Although notice that the P60, like any small-pixel camera, is struggling to keep the highlights from blowing out.)
Now we zoom out the lens, and look at some detail crops to see how well the image quality holds up. Here’s the same view of the subject, at ISO 80, ISO 200, and ISO 800 (these are now crops using about 40% of the frame width).
It’s not a huge surprise that ISO 800 looks very grainy:
The top of the camera no longer shows its original texture; any apparent detail is just the noise itself.
While at ISO 80 you could still read “München Germany” below the lens, that detail is gone now.
But let’s give Nikon some credit: Chroma noise is very well controlled here, so the speckles do not have distracting “rainbow confetti” colors. Aesthetically, this noise is fairly inoffensive.
However, what may be more worrying is that even at a moderate ISO 200, we still see some anomalies:
Instead of obvious noise, the issue is more subtle here: Rather than looking entirely photographic, the image almost begins to look painted. Noise-reduction processing has kicked in, even at a fairly low ISO—and it’s adding some of its own odd artifacts.
Remember—ideally, it’s supposed to look like this:
Of course, this “painterly” impression is much less noticeable at any reasonable viewing size. We’re pixel-peeping here.
Yet there’s something troubling about a camera that re-draws your photographs—even if it does so very tastefully.
February 3, 2010
I was just skimming a press release for a brand new (and silly) Nikon superzoom model, when I came across this tidbit:
Additional features of the Nikon COOLPIX P100 digital camera include:
10.3-megapixels and Backside Illumination CMOS Sensor for stunning prints as large as 16x 20 inches, while retaining fine detail
Ignoring the odd syntax, let’s think about that for a second.
The Coolpix P100 creates files of 3648 x 2736 pixels, at its highest resolution.
Traditional print formats are in a 4:5 ratio; so what matters is the short dimension of the frame. That is, it’s the 2736-pixel dimension being enlarged to 16 inches.
So, Nikon feels that 171 pixels per inch are sufficient to make a print “while retaining fine detail.”
Okay, let’s take Nikon’s word on that.
In that case, let’s say we have some worthless old 6 megapixel camera. It produces 2848 x 2136 pixel images.
According to Nikon, it would be fine to print up to 12″ x 15″—with pixels to spare. (And how frequently do you make a print that large?)
Or take some 3 Mp model, from the Paleozoic era (a.k.a. seven years ago). Its images are 2048 x 1536 pixels.
Still good for 8″ x 10″ prints—with wiggle room to crop.
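Taking Nikon’s 171 ppi standard at face value, the print-size arithmetic for any camera is trivial:

```python
NIKON_PPI = 171   # implied by the P100's "16 x 20 inch" claim (2736 px / 16 in)

def print_short_side_inches(short_dimension_px, ppi=NIKON_PPI):
    """Largest print (short side) this pixel count supports at the given ppi."""
    return short_dimension_px / ppi

print(print_short_side_inches(2736))   # 16.0 -> the P100's claimed 16 x 20
print(print_short_side_inches(2136))   # ~12.5 -> a 6 Mp camera covers 12 x 15
print(print_short_side_inches(1536))   # ~9.0 -> a 3 Mp camera covers 8 x 10
```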
Is Nikon really convincing us to buy a new camera here?
January 29, 2010
My Petavoxel post about the “Great Megapixel Swindle,” from January 19th, has now been viewed more than 60,000 times. (Which I find kind of scary.)
And there were lots of noisy comments elsewhere, complaining that I had stacked the deck unfairly. I made a teensy crop from a cheap camera—and no surprise, it looked bad!
Fair enough. So let’s start over, stacking the deck as much as possible in favor of the photo looking good. Please look at a couple of new images:
(I am grateful to both gentlemen for posting these images under Creative Commons licensing. Paulo was also kind enough to email me his straight-from-camera JPEG; it’s the source of the crops below.)
I choose these because the camera used to take both is the Panasonic Lumix TZ5. This was among the final high-end point & shoot models to receive a full profile from DP Review; so you can have an independent opinion of its quality.
A 9 megapixel model from 2008, the TZ5 has now been replaced by a zoomier 12 Mp version. But the earlier TZ5 remains the second most popular Lumix on Flickr. Its pixels measure about 1.7 microns wide—that’s actually on the large side, compared to today’s typical compacts.
DP Review were rather enthusiastic about this camera, praising the lens in particular. It’s a Leica-branded 10x zoom, with 4 aspheric surfaces and 11 elements, including one of ED glass. This is some rather high-end stuff.
I’d be the first to admit our sample photos look crisp and vibrant at any normal viewing size. And both Paulo and Simon tell me they’re quite pleased with their TZ5s.
Note that these well-lit shots show off the TZ5 under the best possible circumstances. The sensitivity is at ISO 100 (the lowest setting), which gives the least noise. The shutter speeds are easily high enough to freeze any hand shake at the focal lengths used. And the apertures have been held to the widest possible, which minimizes diffraction.
Even with all these advantages, we begin to see some artifacts that any compact-sensor camera is prone to.
Within the same photo, a white surface in sunlight can be 500 times brighter than a dark surface in shade. That’s about 9 f/stops of difference.
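That stops figure is just a base-2 logarithm; for the record:

```python
from math import log2

scene_contrast = 500            # sunlit white vs. deeply shaded surface
stops = log2(scene_contrast)    # ~8.97, i.e. about 9 f/stops
```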
And smaller pixels inherently have lower dynamic range. So even in well-exposed photos like these two, the highlight areas can blow out to a textureless white:
Examining a highlight area in Photoshop, we see that the brightness levels in the selection have completely “hit the ceiling,” so that no detail can be recovered:
So small-sensor cameras will always struggle with high-contrast scenes (unless some tricky HDR processing is used). Yes, shooting in RAW could rescue a bit more from the highlights; but I don’t know of any camera under $350 which offers this option.
What about graininess in the image? (Digital-camera fans use the word “noise,” since we’re discussing electronic signals.)
Even under uniform illumination, photon counts can vary randomly between neighboring pixels. The larger the pixels, the more this averages out. Small pixels inherently have more random brightness variations—that is, higher noise.
So any compact camera must devote some fraction of its processor power to noise reduction, attempting to smooth out those speckles.
Selecting higher ISO sensitivity settings means the pixel noise is amplified even more; and so, noise-reduction must get even more aggressive. Ultimately, this causes a loss of detail, as DP Review’s TZ5 test clearly showed (scroll down to the hair sample, especially).
But our ISO 100 samples should be best-case scenario for noise. Yet even so, the NR processing is leaving some “false detail” in areas that should look smooth:
The effect is stronger in the darker parts of an image. And as you raise ISO, it will just become more obvious.
Might these simply be JPEG artifacts?
These samples are not highly compressed files—the JPEGs are about 4 MB each. Compression with JPEG can add visible “sparklies” around high-contrast edges; but flat areas of color should compress very cleanly. I say this texture comes from noise (or, noise reduction).
A camera of 9 megapixels would seem to promise rather high resolution. But this camera’s sensor is about 1/3rd the area of an aspirin tablet. So is there truly any detail for all those pixels to resolve?
Let’s look at some 100% crops near the center of the photos, where lens performance ought to be at its best.
The high-contrast edges here show a lot of “snap.” But the camera’s own sharpening algorithm may take some of the credit for that.
Looking at the lower-contrast details in the shadows (like the hanging tags), the impression is noticeably softer. I feel that the optical resolution is starting to fall apart before we reach individual pixels.
The TZ5 lens gains a whole f/stop when zoomed to the wide end. At this setting, detail is a bit softer still—notice the grille in the archway. At a 4.7mm focal length, depth of field is enormous; so I don’t believe we’re seeing focus error here.
Note that the Airy disk at f/3.3 is about 4.4 microns (for green light). As I discussed before, diffraction means that no lens, no matter how flawless, can focus a pinpoint of light any smaller than this.
But a 4.4 micron-wide blur covers almost four of the TZ5’s pixels. And at any smaller aperture, the Airy disk expands even further. Thus, I remain skeptical that packing in pixels more densely than the TZ5’s would extract any more true, optical detail than we see here.
Now, let me be clear.
Paulo tells me, “I love my TZ5. It’s my everyday use camera (being much lighter and more compact than the 500D).”
There’s a need for cameras like this. Among pocketable models, it’s probably among the nicest available.
But even under these best-case circumstances, we start to see some limits imposed by its 1.7 micron pixels. And since the TZ5’s release in 2008, pixels in point & shoots have only shrunken more.
It’s not an accident that DP Review made the final winner of their “enthusiast” roundup Panasonic’s own LX3—a small camera, but with a larger sensor. Its pixel density of 24 Mp per sq. cm is half of today’s worst-case models. The result is that the LX3’s “high ISO performance puts most competitors to shame.”
I’m not against small cameras. I’m not even against high megapixels, if you have a genuine need for them (assuming the sensor is large enough).
But today, pixels have shrunk too far. It’s time to stop.