Hi,
I've been away and so couldn't respond to your post right away.
RGB space is NOT linear. The sensor space is. "Linear" means that the response of the sensor is proportional to the intensity of the light that falls on it. A conversion is applied between the sensor data and the RGB colorspace during development of the raw data. The conversion uses a gamma of about .45, which is close to a square root (.5); i.e. RGB = sensor ** gamma.
This conversion magnifies the noise in the dark areas. You can see this in many ways.
First, calculus shows that

dy = gamma * x**(gamma-1) * dx

Since gamma is less than 1, the exponent (gamma-1) is negative, so x**(gamma-1) is a decreasing function of x: the same sensor noise dx produces less output noise dy as x increases.
Second, my example shows that the first half of the sensor range is mapped into the RGB range 0 to .707. So noise of a given size gets magnified inside that range, compared with the same amount of noise in the second half, which gets squeezed into .707 to 1.
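If you want to play with the numbers, here is a quick Python sketch (it assumes gamma = .5 for round numbers; real converters use something closer to .45):

gamma = 0.5
dx = 0.01  # a fixed amount of sensor-space noise

for x in (0.05, 0.25, 0.50, 0.90):  # sensor values, scaled 0..1
    y = x ** gamma                      # the RGB value
    dy = gamma * x ** (gamma - 1) * dx  # output noise after conversion
    print(f"x = {x:.2f} -> y = {y:.3f}, output noise dy = {dy:.4f}")

The same dx comes out as dy = .0224 at x = .05 but only .0053 at x = .90, i.e. roughly four times bigger in the shadows. And since 0.5 ** 0.5 = .707, the lower half of the sensor range indeed occupies the RGB range 0 to .707.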
Jon
I'm not sure of this, but Canon may deliberately overexpose their images a little on P&S cameras.
That's quite possible.
There is one thing, though... viewpix is saying that if less exposure is forced, the result is underexposed.
So it seems he just can't attain correct exposure.
That might mean that the picture itself is taken at the correct exposure, but that the JPEG processing "fakes" overexposure by mapping bright tones to white -- or, alternatively, that the picture is indeed overexposed, but the dark tones are then darkened even more during JPEG processing.
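Purely as an illustration (this is NOT what Canon actually does; the knee and toe values below are made up, it just sketches the two alternatives in Python):

def clip_highlights(v, knee=0.8):
    # map everything above `knee` to white, stretch the rest
    return min(v / knee, 1.0)

def crush_shadows(v, toe=0.2):
    # map everything below `toe` to black, stretch the rest
    return max((v - toe) / (1.0 - toe), 0.0)

ramp = [i / 10 for i in range(11)]  # a gray ramp, 0..1
print([round(clip_highlights(v), 2) for v in ramp])  # bright tones hit 1.0 early
print([round(crush_shadows(v), 2) for v in ramp])    # dark tones pinned at 0.0

Either curve would make the histogram look "overexposed" or "underexposed" regardless of the actual exposure.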
This is not dissimilar from what you say below.
Have you heard of "shooting right"? The camera sensors collect data in linear space, and then a non-linear mapping (like a square root) is applied to go to RGB space.
Just a little nitpick... I believe that an "RGB space" may very well be linear; it's just that the gamma function used is not.
And that would be because monitors and printers are generally non-linear (gammas of roughly 1.8 to 2.5; sRGB assumes about 2.2), and common (though debated) practice is to do the gamma adjustments in the picture itself rather than in the picture viewer or operating system.
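For reference, sRGB's encoding isn't even a pure power law; if I remember the constants right, linear values go through something like this:

def srgb_encode(c):
    # sRGB transfer function: linear 0..1 -> encoded 0..1
    # (roughly a 1/2.4 power law, with a linear toe near black)
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

for c in (0.0, 0.05, 0.18, 0.5, 1.0):
    print(f"linear {c:.2f} -> sRGB {srgb_encode(c):.3f}")

Note how middle gray (linear .18) ends up around .46: nearly half the encoded range is spent on the darker linear values.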
Because of this non-linearity there is more information in the bright parts of the image than in the dark parts. So overexposing a little can help to reduce noise in dark areas.
You mean more information squashed in the bright parts, don't you? (See below...)
But there is more! A CCD sensor has a certain amount of noise (roughly the one captured by a dark frame) that doesn't depend on pixel brightness.
That means (or at least may mean, but I believe that to be the case) that the noise part, when considered as a percentage of the total pixel brightness, is higher in the darker pixels!
Why would we treat noise as a percentage of brightness? Well, because our eye sees it that way. Given a fixed absolute amount of noise, it'll be more noticeable on dark areas than bright ones.
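A rough sketch of what I mean, with made-up numbers (only the trend matters):

noise_floor = 0.01  # constant dark-frame-style noise, sensor units (assumed)

for signal in (0.02, 0.05, 0.2, 0.8):  # pixel brightness, sensor units
    print(f"signal {signal:.2f}: noise is {100 * noise_floor / signal:.1f}% of it")

The same absolute noise is 50% of a very dark pixel but about 1% of a bright one, which is why it swamps the shadows while staying invisible in the highlights.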
From this it follows that if you want some visible detail in the dark areas, you'll have to expose longer than you would to obtain the same amount of visible detail in the bright areas, I think.
This might be what the Canon software is trying to achieve. Note that it's perhaps a mistake to think "the photo is overexposed and NOT underexposed" just because bright areas are totally white, while the dark areas are NOT totally black.
Even if they're not totally black, what you see may be 100% noise (or close enough to the noise threshold anyway), so, effectively, they'd be underexposed.
And that's a picture that, for practical purposes, is both over and underexposed.
To see this, consider that the square root of .5 is .707. So half of the dynamic range of the sensor (0 to .5) is mapped to 0 to .707 and the other half goes from .707 to 1.
This would mean giving the dark areas more detail, right? (And this is indeed much like the typical gamma correction function, as far as I know).
So you end up with wasted bits (all noise) in the dark tones, and with possibly too few bits left to encode bright tones (although this might be good, since our eyes are more demanding with the dark tones).
That's why I said "squashing" information before.
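Counting bits makes the squashing concrete. Assuming an 8-bit JPEG and the square-root mapping from above:

gamma = 0.5
codes_lower_half = round(255 * 0.5 ** gamma)  # sensor 0..0.5 -> codes 0..180
print(codes_lower_half, 255 - codes_lower_half)  # 180 vs. 75

About 180 of the 256 output codes go to the dark half of the sensor range (much of it noise), leaving only about 75 for the bright half.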
So again if you want to actually fill up the dark bits with information, you expose for longer.
My conclusion would be that if you just want the full sensor information with no surprises, you need the full 10 or 16 bits contained in the RAW (encoded linearly). Then apply (or don't) your own gamma correction as desired.
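As a sketch of that workflow (assuming the RAW has already been decoded to a linear array; numpy is used here, and the values are stand-ins):

import numpy as np

raw = np.linspace(0, 65535, 8, dtype=np.uint16)  # stand-in for linear RAW data
linear = raw / 65535.0                           # normalise to 0..1

gamma = 1 / 2.2                                  # or whatever curve you prefer
out8 = np.round(255 * linear ** gamma).astype(np.uint8)
print(out8)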
Otherwise, playing with the "Contrast" setting in "My colors" might (or might not) result in changing the JPEG curves to give you more bits for the bright tones (and then step exposure down as needed when shooting).
Or else, you could use the CHDK build (which I believe exists) that lets you apply custom curves during RAW to JPEG conversion, which should be equivalent to - but maybe speedier than - doing RAW post-processing on a computer.