To avoid blown-out images, there are several solutions:
- Some Fuji sensors have a secondary, smaller photodiode that is taken into account when the primary photodiode is out of range.
- Some new Ricoh cameras take several images at different exposures in one click and combine them in camera.
- With CHDK, bracketed images can be taken to make an HDR, but you need a computer and HDRs can look a little artificial.
So what about a solution similar to the Ricoh cameras for CHDK? I'm not sure whether this problem is already solved by the "Auto DR" option in Curves. I've tested an idea for fusing three bracketed images without the problems of an HDR. I'm testing the idea on my computer for now, but I want to implement it in CHDK later. These are the current steps:
- I take three bracketed images in RAW format, for example, +-2EV.
- I have written a C program that takes most pixels from the +2EV image, omitting the blown-out ones. For those, the program reads the 0EV image and, if the pixel is blown out there too, it uses the pixel from the -2EV image, the darkest one (see the sketch after this list).
- The resulting image has a histogram with the same range as a 10+2+2 = 14-bit image, so my program rescales its range back to 10 bits and saves it as a normal 10-bit RAW image.
- I copy that image to the camera card and get a jpeg with the RAW Develop option of CHDK.
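To make the fusion step concrete, here is a minimal sketch of how I understand it; it is not the author's actual program. It assumes three already-decoded 10-bit RAW buffers of the same size with linear sensor values, ignores the black level for simplicity, and uses a hypothetical saturation threshold BLOWN just below the 10-bit maximum. The rescale shown here is a plain linear division by 16; the real program, as mentioned later in the post, uses powers and logarithms, i.e. a nonlinear curve.

```c
#include <stdint.h>
#include <stddef.h>

#define BLOWN   1000u   /* assumed clipping threshold (10-bit max is 1023) */
#define OUT_MAX 1023u

void fuse_bracket(const uint16_t *p2,   /* +2EV frame (brightest) */
                  const uint16_t *p0,   /*  0EV frame             */
                  const uint16_t *m2,   /* -2EV frame (darkest)   */
                  uint16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t v;

        if (p2[i] < BLOWN)
            v = p2[i];                    /* +2EV pixel is usable as is       */
        else if (p0[i] < BLOWN)
            v = (uint32_t)p0[i] << 2;     /* 0EV pixel, x4 to match +2EV      */
        else
            v = (uint32_t)m2[i] << 4;     /* -2EV pixel, x16 to match +2EV    */

        /* v now spans a 10+2+2 = 14-bit range; bring it back to 10 bits.
         * Simplest possible rescale: linear division by 16 (the author's
         * program uses a nonlinear curve instead). */
        v >>= 4;
        if (v > OUT_MAX)
            v = OUT_MAX;
        out[i] = (uint16_t)v;
    }
}
```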
Well, the main problem is in the last step, because my rescaling also affects the white balance. And "RAW Develop" seems to take the white balance either from the scene you are shooting at the moment you develop your stored RAW, or from the white balance selected in the options (tungsten...). When I adequately "rescale" the white balance multipliers, the result is better, although in this example it isn't very problematic.
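One way to read "rescale the white balance multipliers": if the 14-to-10-bit rescale is a power-law curve, then each linear channel multiplier effectively gets raised to the same exponent. This is only my guess at the math, not the author's code, and the exponent g = 10/14 is an assumption.

```c
/* Hypothetical illustration: with a power-law rescale v_out = v_in^g,
 * a linear channel multiplier m becomes m^g, because (m*v)^g = m^g * v^g.
 * The exponent g = 10/14 is an assumption, not taken from the post. */
#include <math.h>

double rescaled_wb_multiplier(double m)
{
    const double g = 10.0 / 14.0;
    return pow(m, g);
}
```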
Obviously, this isn't a problem for people who develop their RAWs on a computer, because they can select any white balance, but the aim is to do everything in camera with minimal hassle for the user.
So the best solution would be to read the original white balance from the 0EV jpeg, but I don't know whether this is already covered by CHDK or whether it's easy to implement (it needs more study). The other solution would be to take the bracketed images with a fixed white balance (cloudy, tungsten...) and then apply a curve with the right factors.
On the other hand, the fused image has lost some contrast and shows dark pixels darker and bright ones brighter (this can be fixed with curves).
Another example, now with +-3 bracketing:
Wrong WB: [image]
Correct WB: [image]
In this example the problem with the white balance is more evident because the blue multiplier is 4.439 instead of 1.xxx.
Finally, the current code uses powers and logarithms, but the future camera version could use precalculated arrays if bracketing is limited to a few cases: +-3, 2, 1.5. I think the easy route is feasible (adding to the file browser an option similar to RAW sum or RAW average), but I don't know yet whether the whole process (bracketing, fusion and development) can be done with a single click.
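As a rough illustration of the precalculated-array idea: build a lookup table once per bracketing step, so the per-pixel loop needs no pow()/log() calls at all. The table size (14-bit input for a +-2EV bracket), the power-law exponent and the function names below are all my assumptions, not existing CHDK code; the table could equally be generated offline and stored as constants.

```c
/* Sketch: precompute the 14-bit -> 10-bit rescaling curve once, so the
 * per-pixel loop is a plain table lookup. Sized for a +-2EV bracket;
 * a +-3 or +-1.5 bracket would use a different size and exponent. */
#include <stdint.h>
#include <math.h>

#define FUSED_MAX 16383   /* 14-bit fused range for a +-2EV bracket */

static uint16_t rescale_lut[FUSED_MAX + 1];

void build_rescale_lut(void)
{
    for (int v = 0; v <= FUSED_MAX; v++) {
        double x = (double)v / FUSED_MAX;  /* normalise to 0..1 */
        rescale_lut[v] = (uint16_t)(1023.0 * pow(x, 10.0 / 14.0) + 0.5);
    }
}

/* Per-pixel use in the camera loop would then simply be:
 *     out[i] = rescale_lut[fused];
 */
```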