
Myth busted: small pixels have worse performance than large pixels

My analysis of ISO

http://forums.dpreview.com/forums/read.asp?forum=1018&message=23344542

Summary:

If microlenses are not used, then the ISO depends only on the silicon and circuit-design technology used, and is independent of pixel size.

If microlenses ARE used, then smaller pixels could have higher ISO than larger pixels.

But larger pixels will have better dynamic range.
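The per-pixel dynamic-range claim can be made concrete with a minimal sketch. The numbers below are purely illustrative assumptions (not measurements from any real sensor): full-well capacity is assumed to scale with pixel area, while read noise is assumed constant per pixel.

```python
import math

read_noise_e = 5.0        # electrons, assumed equal for both pixel designs
full_well_small = 20_000  # electrons, assumed for the small pixel
full_well_large = 80_000  # 4x the area -> 4x the full well (assumed scaling)

def dynamic_range_stops(full_well, read_noise):
    """Engineering dynamic range: full well over read noise, in stops."""
    return math.log2(full_well / read_noise)

print(round(dynamic_range_stops(full_well_small, read_noise_e), 1))  # ~12.0
print(round(dynamic_range_stops(full_well_large, read_noise_e), 1))  # ~14.0
```

Under these assumptions the larger pixel gains about two stops of per-pixel dynamic range, which is the sense in which "larger pixels have better dynamic range" is usually meant.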

I'm trying to track down an animated GIF I made of 'photon rain' onto a small and big sensor.

Here you go :
http://www.farfoto.com/alan-rain-a-forever.gif

This was done specifically to refute Clark's picture, which shows a small pixel as having a shallow well.
http://www.clarkvision.com/imagedetail/does.pixel.size.matter/
http://www.clarkvision.com/imagedetail/does.pixel.size.matter/photon-rain.gif
 
Thanks for the responses and contributions, everyone.

You're getting the wrong end of the stick. I don't think I have contradicted any part of Daniel's post; I am re-stating the fact that big pixels are less noisy than small pixels.

I think that is a contradiction with my post, or at least a contradiction in vocabulary.

Why? I personally believe this is a more important issue for DPs than the benefits of smaller pixels, given the high compression in camera and in distribution.

Agreed. Noise is an important issue for DPs.

This puzzle could be described as "noise versus resolving power".

Small pixels only have higher noise at levels of detail unattainable by big pixels. But at the same level of detail, they have the same level of noise. Therefore I think a more correct description is "noise at Nyquist versus resolving power", which is a way of saying "noise per detail stays the same" or "smaller details have higher noise and large details have the same noise".
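The "same noise per detail" argument can be checked with a quick shot-noise simulation. This is a sketch assuming pure Poisson photon statistics and an arbitrary photon count: binning a 2x2 block of small pixels yields the same SNR as one large pixel covering the same area.

```python
import numpy as np

rng = np.random.default_rng(0)
photons_per_large_pixel = 10_000   # assumed mean photon count per large pixel
trials = 200_000

# One large pixel vs. a 2x2 block of small pixels covering the same area.
large = rng.poisson(photons_per_large_pixel, trials)
small_binned = rng.poisson(photons_per_large_pixel / 4, (trials, 4)).sum(axis=1)

snr_large = large.mean() / large.std()
snr_binned = small_binned.mean() / small_binned.std()
print(round(snr_large), round(snr_binned))   # both ~ sqrt(10000) = 100
```

At the large pixel's level of detail, the binned small pixels match it; the small pixels only show higher per-pixel noise at the finer level of detail the large pixel cannot resolve at all.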

What matters is the noise that the actual viewer would see (i.e. for a given acceptably sharp CoC). In that case, large pixels do not actually have lower noise.

It's only in "noise at Nyquist" where large pixels have lower noise, and that's only because they have a lower Nyquist (lower level of detail).

From Emil Martinec, professor of physics at the University of Chicago

Not just any professor of physics, but one of the guys who got String Theory started. And just like String Theory, he has again found that the model everyone uses for understanding something (100% crop image analysis) can be improved upon greatly (spatial frequency referred image analysis). He's the one that explained it to me.

His idea is not quite as big as String Theory, but pretty good for someone who only does bird photography as a hobby.

Emil said:
...the read noise [at high ISO] of the larger pixels is smaller on a per area basis.

John Beale posted a similar quote above, to which I already responded, but I'll repeat it here. Emil is correct, and I will explain how it applies to the discussion. First, very low read noise at high analog gain only applies to certain pixel designs (mostly still cameras); many cameras, such as the RED ONE, MFDBs, and most non-CMOS cameras, don't have that type of design. Second, it's not applicable in all, or even most, situations, because most people only do very low light shots (ISO 1600+) once in a while, and some never do at all. Third, there is plenty of evidence that such designs will continue to apply to smaller and smaller pixel sizes, and none to the contrary.

DXO Labs
"For a given exposure time...

Your quote is from this page:

Contrary to conventional wisdom, higher resolution actually compensates for noise.

And it is referring to noise at Nyquist (100% crop). Two paragraphs below your quote:

DXO Labs said:
However, the four high-resolution neighboring pixels can be averaged out to form a low-resolution pixel. [...] The loss of resolution produces a better SNR.

That is how they choose to account for spatial frequency.

Lars Kjellberg
QPcard - Dynamic range and pixel size
"Big pixels will give you cleaner, smoother (less noisy) shadow areas."

Looks like yet another example of image analysis that fails to account for spatial frequency. This is one of the few mistakes frequently made by experts in the field. They account for different sensor sizes (usually), they aim for equal technology, they bring equal (unbiased) expectations, and they apply the same raw preconditioning, raw conversion, and post processing. They even account for spatial frequency when comparing optics. But when it comes to sensors, it is forgotten.

It's failing to see the forest for the trees. It's the same as claiming that lens A with 80% MTF at 10 lp/mm is superior to lens B with 50% MTF at 100 lp/mm. One can't even say which lens is superior until the difference in spatial frequency is accounted for.

R N Clark

Roger has lots of great data and good analysis, but has also made many mistakes. Principally, he fails to account for differences in spatial frequency as well as sensor size. He is describing what the noise level is "at 100% crop", but that leads to the wrong conclusion about the noise level visible to the actual viewer in real life.

David Frosini said:
The larger the pixel area the larger the well capacity, and the larger the charge capacity and the higher the signal level. A pixel is saturated when further increases in illumination do not produce a corresponding increase in signal voltage. Larger pixel areas help improve dynamic range because they hold more charge, so brighter objects do not saturate the pixel.

This is all true when comparing a single large pixel vs a single small pixel, such as the comparison of a small sensor and a large sensor. But it does not apply to sensors of the *same* size, because in that case one must compare multiple small pixels with one large pixel (i.e. account for differences in spatial frequency). When that is done, one can see that brighter objects do not saturate smaller pixels; in fact, they have *more* detail in the saturated areas. This is demonstrated in the FZ50 image I posted above.
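The highlight behavior can be shown with a toy example, assuming hypothetical full-well numbers (the large pixel holding 4x the charge of one small pixel): the large pixel clips to a single featureless value, while the 2x2 block of small pixels still records where the real signal variation was.

```python
import numpy as np

# Assumed (hypothetical) full-well capacities, in electrons.
full_well_small = 10_000
full_well_large = 40_000

# Photon-generated charge landing on the four quadrants of one
# large-pixel area, with a brightness gradient across it.
quadrants = np.array([7_000, 10_500, 12_000, 15_000])   # total 44,500 e-

large_reading = min(quadrants.sum(), full_well_large)    # clips: gradient lost
small_readings = np.minimum(quadrants, full_well_small)  # per-quadrant clipping

print(large_reading)    # 40000 -> one featureless white patch
print(small_readings)   # [ 7000 10000 10000 10000 ] -> detail survives
```

The large pixel reports only "saturated"; the small pixels report which sub-areas clipped and preserve the unclipped quadrant, which is the extra detail in saturated areas the FZ50 image demonstrates.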

David Frosini said:
Because camera system noise is generally independent of pixel area...

That's not supported by actual cameras on the market, which show much lower read noise for smaller pixels, such as the LX3 I pointed out above.

Ting Chen et al said:
A small pixel size is desirable because it results in a smaller die size and/or higher spatial resolution; a large pixel size is desirable because it results in higher dynamic range and signal-to-noise ratio.

Their analysis is based on the assumption that images must always be used and compared at Nyquist. But in reality, images are not used and compared at arbitrary resolutions based on the sensor, but at fixed spatial frequencies based on the resolution of the theater projector, the media (DVD, Blu-ray, etc.), or the size of the print. In those real-world situations, the small pixels are resampled to the same size as large pixels and have the same dynamic range and SNR.

You can easily find a dozen more statements about the superiority of large pixel sensors, even by experts, that fail to account for spatial frequency. But once you restrict your search to those that correctly resample the small pixels for comparison with large pixels, all that is left is just corner cases (e.g. high ISO read noise) and exceptions.

All that matters are the real world results, which is what the viewer sees for a given display size, not arbitrary intermediate measurements like noise per pixel.

Just for fun... how do you explain the fact that one sensor vastly outperforms another when they have exactly the same pixel size and are on the same size sensor? Pixel size alone tells you very little. Pixel design along with pixel size tells you a lot more. For example, the Mysterium sensor and Mysterium-X sensor both have 5.4 micron pixels, yet the X sensor is significantly better in performance. Why? Pixel design. The biggest breakthroughs in sensor design are coming from pixel design advancements.

Agreed.

I ran tests on both to see if there was any difference in the files, and I found that the 39MP back displayed higher noise in the shadow areas of a moody light scene, when compared to the 22MP.

Let me guess: you compared them on your computer, right? You would have found a much different result if you had compared actual prints. Many photographers, such as the Luminous Landscape editors, will write an entire article about the supposed "fact" that camera A has much less noise than camera B, then make one tiny mention about the fact that in actual prints (i.e. real world results), the noise is the same. They have the importance reversed. (FWIW, I've noticed that most good DPs, such as David Mullen, place more emphasis on how it looks in the actual theater rather than a 100% crop on a computer.)

To compare cameras for noise on the computer, one must resample the 39 MP image down to the same spatial frequency, such as 22 MP (setting aside the effect of the OLPF). One should use optimal resampling software as well, such as the Lanczos filter in IrfanView or ImageMagick, and not the junky alias-prone bicubic derivatives in Photoshop. (The same thing applies to the resampling done before printing or within the printer driver itself. QImage has a good algorithm.)
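Here is a sketch of why resampling to a common resolution changes the comparison, using a flat gray patch with assumed additive Gaussian noise. A 2x2 box average stands in for a proper Lanczos resample (the noise arithmetic is similar, though a real Lanczos kernel preserves detail better and suppresses aliasing).

```python
import numpy as np

rng = np.random.default_rng(1)

# A flat gray patch with additive noise, standing in for a 100% crop.
high_res = 0.5 + rng.normal(0.0, 0.05, (512, 512))

# 2x2 box average: resample to the common (lower) resolution.
low_res = high_res.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(round(high_res.std(), 3))  # ~0.050 : noise judged "at 100% crop"
print(round(low_res.std(), 3))   # ~0.025 : same image at the common resolution
```

The per-pixel noise roughly halves after resampling, so judging the higher-resolution file at 100% crop overstates the noise the viewer would see at a common output size.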

I think to be sure, maybe someone can grab a Canon 1DS MKII, and a 1DS, and possibly a 1DS MKIII, and let's have a real test.

There's no contest. As Jim mentioned, technology improved greatly with the newer cameras, and their superior pixel design easily beats the older cameras. I don't have the cameras mentioned, but I did post some other comparisons above (e.g. 50D vs 40D).
 
Yes, my comparison was done on a computer, at the time a Mac G5 with two calibrated Eizo CG 210 monitors, and with those two images side by side, there was a marked difference. Now I'm no scientist, but I see with my eyes, and even in a print on a color-managed (profiled) Epson 9800, I would have seen some of the noise I viewed on screen. In print at viewing distance (depending on print size), I might not see it as much, and I guess that is your point, but I was pixel peeping, and in doing so, whomp, there it is.

The two digital backs in question were produced at least a year and a half apart, so the advantage should have gone to the newer device.
 
with these two images side by side, there was a marked difference.

Side by side on the monitor is fine as long as both images have the same content. Viewing at 100% causes one to be a headshot and the other an extreme close-up; that makes the comparison unequal. They should both be cropped by the same amount, so that both are headshots or both are ECUs.

Now I'm no scientist, but I see with my eyes, and even in a print on a color-managed (profiled) Epson 9800, I would have seen some of the noise I viewed on screen.

I think you're wrong. 100% crop only gives you an accurate idea of what the print will be like if you always assume that higher MP = larger print. To simulate what it looks like at the *same* size requires that both images have the same content.

39 MP is a 33% increase in linear resolution over 22 MP, so 100% crop of both cameras simulates the following:

Make a 40x30 print from the 22 MP image and crop out 3 inches from the center.
Make a 53.3x40 print from the 39 MP image and crop out 3 inches from the center.

On the surface of it, 100% crop seems like a fair comparison because in either case you're looking at the same number of pixels on your screen. But when you compare how that relates to an actual print, you realize that you're simulating a much larger print size on the 39 MP image.

If you print them at the same size, and/or crop out the same portion of the image, it becomes clear that noise, too, is the same.
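The print-size arithmetic above can be verified directly (using the same 40x30" baseline for the 22 MP file as in the example):

```python
import math

mp_small, mp_large = 22, 39
linear_ratio = math.sqrt(mp_large / mp_small)
print(round(linear_ratio, 2))   # 1.33 -> ~33% more linear resolution

# A 100% crop of the 22 MP file simulates a 40x30" print;
# the same 100% crop of the 39 MP file simulates a print this size:
print(round(40 * linear_ratio, 1), round(30 * linear_ratio, 1))  # ~53.3 x 40
```

So pixel-for-pixel viewing silently asks the 39 MP file to stand in for a much larger print, which is why it looks noisier at 100%.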

Take a peek at this comparison (linked in my OP):

http://www.pbase.com/jps_photo/image/100092629/original.jpg

The image on the right has less noise. To me, it's very plain even when used without any downsampling, but if you have trouble seeing it that way, you can look at the inset, where it was downsampled to the same (low) resolution as the image on the left.

In the same way, I bet the 39 MP image was superior even without downsampling, but for folks who disagree, it's always possible to resample the 39 MP down to 22 MP, just like the inset, to get the results they desire.

I was pixel peeping, and in doing so, whomp there it is.

Pixel peeping means cropping more out of the 39 MP image, so that it's 33% smaller (i.e. cropping a headshot to an extreme close-up). So it would be correct to say that one cannot crop 33% more from the 39 MP camera and still have the same noise. It only has the same noise if it is cropped the same as the 22 MP camera.
 
Daniel,

What are you REALLY telling me? OK, so what was I seeing, and why did it not look as good as the higher MP back? The backs were rated at 400 ISO under strobe and HMI lighting (I wish I had the files). I did increase the size of the 22 MP back, but I used proprietary conversion software to upres the file from RAW. The client wanted to see it on the same scale as the 39 MP to make a comparison. In any case, I appreciate this thread, Daniel.
 
OK, so what was I seeing?

You saw that a 39 MP image cropped to 22 MP has worse noise than an uncropped 22 MP image.

And why did it not look as good as the higher MP back?

Because when you crop one camera more than another, you are placing much higher expectations on it. An equal comparison must crop both cameras by the same amount and display them at the same size. If you had done that, you would have seen that noise was the same in both.

EDIT: I posted before I saw your edit:

I did increase the size of the 22 MP back, but I used proprietary conversion software to upres the file from RAW. The client wanted to see it on the same scale as the 39 MP to make a comparison.

That's an important way to compare them, but you should also compare the 39 MP downsampled to 22 MP.
 
I actually did not crop the 39MP, I outputted the 22MP file in the capture software to come close to the 39MP.

Thanks for clarifying that. I had assumed you used the typical 100% crop method; therefore those comments don't apply.

But what do you think about the comparison image I posted above? Some people look at that and think the image on the right has more noise. Perhaps that's what went wrong with your 39 MP vs 22 MP comparison. For people who have trouble seeing that it actually has less noise, it helps to downsample both to the same resolution (as done in the inset), where no one can possibly claim the right image still has more noise.

EDIT: After all, the 22 MP image was upsampled to 39 MP. You could also downsample the 39 MP to 22 MP so that it has the same noise, then upsample it back to 39 MP. It's just a crude method of noise reduction. Similarly, large pixels (e.g. 22 MP) are just a crude method of noise reduction. The noise is there at fine frequencies. Whether you hide the noise by using large pixels, noise reduction, or resampling, it's still there. The 39 MP camera just gives you the choice of seeing that noise or not.
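The "crude noise reduction" round trip can be sketched with a synthetic noisy patch (assumed flat gray with Gaussian noise): downsample 2x, then upsample back to the nominal resolution. The pixel count is restored, but roughly half the noise, along with the fine detail, is gone.

```python
import numpy as np

rng = np.random.default_rng(2)
img = 0.5 + rng.normal(0.0, 0.04, (512, 512))   # noisy high-res stand-in

# Crude noise reduction: average down 2x2, then blow back up to full size.
down = img.reshape(256, 2, 256, 2).mean(axis=(1, 3))
up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)

print(up.shape == img.shape)                     # same nominal resolution
print(round(img.std(), 3), round(up.std(), 3))   # noise std roughly halved
```

Large pixels bake this trade-off into the hardware; the high-MP file lets you choose after the fact whether to keep the fine detail (and its noise) or average it away.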
 
Which two cameras did your sample come from? I totally understand what you are saying, and you could be a little right; it's just that when looking into the shadow areas of these images (the ones I produced), the validity of the larger MP file lost ground with me.
 
Which two cameras did your sample come from?

The 400D and FZ50. They have very different pixel sizes (4 square microns vs. 32.5 square microns). The sensor size difference was accounted for by selection of focal length and crop. Raw processing was completely equal.

I totally understand what you are saying, and you could be a little right; it's just that when looking into the shadow areas of these images (the ones I produced), the validity of the larger MP file lost ground with me.

As long as the "performance per detail" (noise power per spatial frequency) is the same, higher resolutions don't look any noisier in actual prints or displays. Some people look at upsampled comparisons (such as the one I posted and the one you did for your client) on a monitor and think the small pixels have worse noise. I don't see it that way at all and I firmly believe those same folks wouldn't see it that way in an actual print/display, so I think it's just some kind of issue related to monitor viewing. In any case, one can always downsample to get the same noise level for display on a monitor. So while the higher-MP camera may not be better (especially if one must downsample), at the very least it is not worse.
 
To be fair, Daniel, we need to see results from two pro cameras. I think the two low-end consumer cameras you used, while showing your point, don't have the quality to start with that cameras on our level have.
 
To be fair, Daniel, we need to see results from two pro cameras.

That position is fine with me. It's difficult to do such comparisons because professional cameras from the same year tend to have about the same pixel size. The reason is that there is a very limited amount of processing power / storage space in the camera and computer. MFDB are very slow even at just 20-60 MP, imagine how slow they would be if they had 2-micron pixels (1000 MP+). (RED ONE allows a very small pixel size by using smart compression and things like half-high demosaic to get around the computer limitations).

I think the two low-end consumer cameras you used, while showing your point, don't have the quality to start with that cameras on our level have.

I kindly disagree. If anything, it's unfair to the pro gear, because modern consumer cameras like the LX3 are so much better and use far more advanced technology. The pro gear is only better because it has sensors with 10 to 50 times more area. If you take away the size advantage, the low-end sensors have higher sensitivity (QE), higher dynamic range, less read noise, more color depth, etc.
 