        white   black
iso     max     min    max    avg     std     max     blk lvl
80      3969    0      48     0.940   1.14
200     3967    0      49     1.60    2.26    4092    126
400     3964    0      103    2.78    4.03    4091    127
800     3960    0      119    5.16    7.38    4089    129
1600    3954    0      175    9.83    14.2    4084    131
3200    3941    0      409    18.3    27.1    4074    134
6400    3954    0      746    36.7    54.1    4095    143
What this table means
ISO - The ISO of the picture. Tv was 1/50s.
White - the max decoded white pixel, under a bright lamp. This is the max raw value minus the black level.
Min black - the darkest pixel found in the active area. When it reaches 0, the deviation from the black level is so extreme that the pixel tries to go below the valid range. Be careful estimating any statistics in this condition, as the distribution is cut off.
Max - the max pixel value with the lens cap on. We call this a dark frame.
Avg - the measured mean of the active area. When the minimum value is zero, this is not the same as the mean of the underlying distribution, because the lower tail is cut off.
STD - the standard deviation. Mathematically this is a measure of noise, but not of how your vision perceives noise. For a roughly normal distribution, about 68% of the pixels fall within +/- one standard deviation of the mean.
Max - for the second max I added the white max to the black level to get the raw sensor maximum. This should never exceed what the sensor's bit depth allows, so 4095 for a 12-bit sensor.
Blk Lvl - the black level as reported by RawDigger. Probably just the mean of the masked area.
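As a rough sketch of how these columns could be reproduced from the raw data (the function and array names here are my own, hypothetical ones; RawDigger reports these numbers directly):

```python
import numpy as np

def dark_frame_stats(active, masked):
    """Mirror the table columns for a dark frame.

    `active` - 2D NumPy array of the active (image) area
    `masked` - 2D NumPy array of the optically masked border
    Both are hypothetical inputs; load them from the raw file however you like.
    """
    blk_lvl = masked.mean()            # "Blk Lvl": mean of the masked area
    return {
        "black min": active.min(),     # darkest pixel in the active area
        "black max": active.max(),     # brightest pixel (hot pixels land here)
        "avg":       active.mean(),    # measured mean of the active area
        "std":       active.std(),     # standard deviation = the noise figure
        "blk lvl":   blk_lvl,
    }

def raw_sensor_max(white_max, blk_lvl):
    # Second "max" column: decoded white max plus the black level.
    # Should never exceed 4095 on a 12-bit sensor.
    return white_max + blk_lvl
```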
How to use this table
For example, averaging pixels through binning reduces noise as std/sqrt(n). So if my std is 54.1 and I average 16 pixels, the new noise level is 54.1/4 = 13.5. Looking that up in the table, it's about the same level as ISO 1600, so you can say more simply that binning 16 pixels into one superpixel reduces (mathematical) noise by about 2 stops.
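A minimal sketch of that lookup, assuming uncorrelated pixel noise so that averaging n pixels divides the std by sqrt(n) (the dictionary below is just the std column from the table):

```python
import math

# Measured dark-frame std per ISO, taken from the table above.
std_by_iso = {80: 1.14, 200: 2.26, 400: 4.03, 800: 7.38,
              1600: 14.2, 3200: 27.1, 6400: 54.1}

def binned_std(std, n):
    """Std after averaging n uncorrelated pixels: std / sqrt(n)."""
    return std / math.sqrt(n)

# Averaging 16 pixels at ISO 6400:
new_std = binned_std(std_by_iso[6400], 16)   # 54.1 / 4 = 13.5
# Find the table entry with the closest std: ISO 1600, i.e. roughly 2 stops less noise.
closest_iso = min(std_by_iso, key=lambda iso: abs(std_by_iso[iso] - new_std))
print(new_std, closest_iso)   # -> 13.525 1600
```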
There is also a slight effect where increasing the ISO is a little less noisy than expected (the std in the table grows by slightly less than 2x per stop).
I'd rather suggest that we talk about this over at https://chdk.setepontos.com/index.php?topic=12926.0 because I've answered the question about binning there.
About binning: my impression is that it's something of a myth. People hear about it, and it's a feature in some cameras that is supposed to do amazing things, so they look for a way to replicate that benefit. What I'm hoping to show with these experiments is that binning is nothing magical: it's almost the same as resizing an image in any software to half, a quarter, etc. of its size. The benefit is rather obvious, since merging any n pixels together in some way always reduces noise. But is it any better than wavelet, NL-Means, or other well-known denoising algorithms? At the moment I don't even see what benefit there is to doing it in hardware, except that there are no quantization errors when adding the pixels.
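To make the "binning is basically resizing" point concrete, here is a sketch with synthetic data (the signal and noise numbers are made up; the only point is that averaging 2x2 blocks, which is exactly what a half-size box-filter resize does, cuts the std in half):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensor" data: a flat signal plus Gaussian read noise.
signal, noise_std = 100.0, 54.1
raw = signal + rng.normal(0, noise_std, size=(2048, 2048))

# 2x2 binning in software: average each 2x2 block.
binned = raw.reshape(1024, 2, 1024, 2).mean(axis=(1, 3))

print(raw.std())     # ~54.1
print(binned.std())  # ~27, i.e. std / sqrt(4)
```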