How to Shoot Virtually Noise Free RAWs :) - page 2 - RAW Shooting and Processing - CHDK Forum

How to Shoot Virtually Noise Free RAWs :)

Offline PS
Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #10 on: 19 / January / 2012, 14:08:22 »
The auto-mode metering you use tends to underexpose (it is not ETTR), and that is what you are calling 'correct'. Consequently, what you call 'overexposing' is actually exposing the image correctly :)

JPEG clipping is irrelevant.

Why bother with smaller copies? You can fit the image to the viewer or increase the viewing distance.
« Last Edit: 19 / January / 2012, 16:56:31 by PS »

Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #11 on: 19 / January / 2012, 17:28:02 »
I use aperture priority mode (Av) 95% of the time. The samples I posted were in manual mode.
In general, we agree on the topic: expose the image as brightly as possible without blowing any of the highlights to white. :)


Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #12 on: 30 / January / 2012, 06:10:08 »
Hey, thanks for this. :) I'm going to give it a try :)

Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #13 on: 31 / January / 2012, 07:34:11 »
One thing you can take advantage of is the construction of the CCD grid. However, using this trick also means you're limited to monochrome images.

The grid consists of red, green and blue sensors in the following pattern:

[image: sensor grid diagram - a repeating 2x2 pattern with one red, one blue and two green sensor sites]
So, each pixel consists of two green subpixels, one red subpixel, and one blue subpixel. The raw information from these subpixels is then mixed into RGB values for each pixel:

[image: diagram of the four subpixel values being combined into one RGB pixel]
As you can see, there are twice as many green sensors on the grid as there are red or blue ones. Each pixel's "green" value is thus sampled from the raw values of TWO sensors, whereas the red and blue values each come from only ONE sensor.

Having two sample points to average means the green channel effectively collects twice as much light as the red or blue channel, and averaging two independent samples also evens out part of the random noise from the individual sensors (it cuts the random noise by a factor of about 1.4, the square root of two).

What this means for digital photography is that the red and blue channels carry roughly twice as much noise as the green channel. So, if you don't mind losing the red and blue colour data, you can use UFRaw (and probably other programs too) to desaturate the image to black and white with zero intensity for the red/blue channels and 100 for green.
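For illustration, here is a minimal sketch of the same idea outside UFRaw, assuming an already-unpacked RGGB mosaic (the layout and the helper name are assumptions, not anything UFRaw provides): the two green samples of each 2x2 cell are averaged into one monochrome output pixel, and the red and blue sites are simply dropped.
Code: [Select]
/* green_mono: hypothetical helper - build a half-resolution monochrome
 * image from an RGGB Bayer mosaic by averaging the two green samples of
 * each 2x2 cell and ignoring the red and blue sites entirely. */
#include <stdint.h>
#include <stdlib.h>

uint16_t *green_mono(const uint16_t *raw, int w, int h) /* w, h even */
{
    int ow = w / 2, oh = h / 2;
    uint16_t *out = malloc((size_t)ow * oh * sizeof *out);
    if (!out) return NULL;
    for (int y = 0; y < oh; y++) {
        for (int x = 0; x < ow; x++) {
            uint32_t g1 = raw[(2*y)     * w + (2*x + 1)]; /* G on the R row */
            uint32_t g2 = raw[(2*y + 1) * w + (2*x)];     /* G on the B row */
            out[y * ow + x] = (uint16_t)((g1 + g2) / 2);  /* averaging cuts
                                                             random noise */
        }
    }
    return out;
}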

Of course there are drawbacks to this method too. Since the sensors in the camera's grid are tuned to different wavelength peaks for R, G and B respectively, shutting down the red and blue sensors means you also lose part of the light that comes through the lens; in the case of white light (an even spectral distribution) you lose about a third.

However, as a secondary effect, chromatic aberration from the camera's optics becomes nearly nonexistent.

Obviously, the situations where you might want to use this kind of trickery are quite limited. The main application I am thinking of is astrophotography. Both chromatic aberration and hardware noise are bad things to have when you are photographing stars, so taking advantage of the green channel's doubled sampling (and thus higher SNR) might be worth at least a try. And since this is all post-processing, you will still have the raw data of the red and blue channels available if you decide you want to use them as well.


For the sake of relevance: serious astrophotography uses almost exclusively monochrome CCD sensors with a wide wavelength band. Multiple exposures are taken through different filters, and these exposures are then combined in post-processing into either false-colour images covering a scientifically interesting spectrum, or true-colour images, depending on what is desired.

For example, many HST (Hubble Space Telescope) images use a palette where three exposures through narrowband filters are mapped to the colour channels: S II (ionized sulphur emission centred around 672.4 nm) to the red channel, Hα (hydrogen emission, 656.3 nm) to the green channel, and O III (doubly ionized oxygen, two bands at 495.9 nm and 500.7 nm) to the blue channel.

This palette is used because it is more useful for astronomers to identify different elements by colour than to reproduce what the human eye would see; in reality, both the S II and Hα wavelengths are red, while the O III wavelengths are green.
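The channel mapping itself is trivial; here is a minimal sketch of the palette described above, assuming three aligned 8-bit monochrome exposures (all names are illustrative, not from any real tool):
Code: [Select]
#include <stdint.h>
#include <stddef.h>

/* Combine three aligned narrowband exposures into one false-colour
 * image using the Hubble palette described above. */
void hubble_palette(const uint8_t *sii, const uint8_t *ha,
                    const uint8_t *oiii, uint8_t *rgb, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        rgb[3*i + 0] = sii[i];  /* S II  (672.4 nm)       -> red   */
        rgb[3*i + 1] = ha[i];   /* H-alpha (656.3 nm)     -> green */
        rgb[3*i + 2] = oiii[i]; /* O III (495.9/500.7 nm) -> blue  */
    }
}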

But I digress.


As a relevant question: are there any utilities around that can take the raw data from CHDK (either CRW or DNG) and, instead of putting it through a demosaicing process, output a true greyscale image with each sub-pixel acting as an actual pixel?

This would effectively quadruple the pixel count of your camera, which could be quite handy for taking super-fine-resolution images of certain types of objects.


Like the Moon, for example.


Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #14 on: 31 / January / 2012, 08:29:50 »
As far as I have read on the topic, a Bayer sensor doesn't have four filters for each pixel (one pixel is not made of 4 subpixels); instead, every pixel is red, green or blue (or green once again). So four neighbours are, for example:
Code: [Select]
     A         B
1   red      green
2   green    blue
and they are 4 independent pixels. So on a 12 Mpx sensor there are 6 Mpx of green, 3 Mpx of red and 3 Mpx of blue.

In the above example, the RGB value of pixel 1A is calculated from its own value (R) and from interpolations of its neighbours (for the G and B colours). Thus there is no simple way to quadruple the pixel count.
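A minimal sketch of that interpolation for a single red pixel like 1A (assuming an RGGB layout and a pixel away from the border; the function name is made up):
Code: [Select]
#include <stdint.h>

typedef struct { uint16_t r, g, b; } rgb16;

/* Bilinear demosaic of one red site: R is measured directly, G is
 * averaged from the four edge neighbours, B from the four corners. */
rgb16 demosaic_red_site(const uint16_t *raw, int w, int x, int y)
{
    rgb16 p;
    p.r = raw[y*w + x];
    p.g = (raw[(y-1)*w + x]     + raw[(y+1)*w + x] +
           raw[y*w + (x-1)]     + raw[y*w + (x+1)]) / 4;
    p.b = (raw[(y-1)*w + (x-1)] + raw[(y-1)*w + (x+1)] +
           raw[(y+1)*w + (x-1)] + raw[(y+1)*w + (x+1)]) / 4;
    return p;
}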

Of course, that's if my understanding of the Bayer idea is correct ;)
if (2*b || !2*b) {
    cout<<question
}

Compile error: poor Yorick

Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #15 on: 31 / January / 2012, 11:55:47 »
I understood it the same way you did, Outslider.

So on 12Mpx matrix there are 6Mpx green, 3Mpx red and 3Mpx blue.

So in reality, a 12 Mpx camera with a Bayer sensor has only 3 Mpx of full-colour pixels. However, colour data is not the only data that exists: each pixel has its own brightness information, and that is why Bayer interpolation gives us a relatively good guess of what each pixel should look like.

Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #16 on: 31 / January / 2012, 15:31:23 »
Ah, then I had misunderstood that part of the sensor layout and the interpolation/de-mosaicing process.


Nevertheless, it would be quite interesting to have a utility that creates an image directly from the raw data without de-mosaicing it. I can think of several cases where it would be useful.

Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #17 on: 31 / January / 2012, 15:39:24 »
It is possible. You have the pure data in the raw :) Just read it and save it as you like.
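A rough sketch of the reading step, assuming samples are packed MSB-first at 10 bits each (that packing is an assumption for illustration only; real CHDK raw buffers use a camera-specific byte/bit order):
Code: [Select]
#include <stdint.h>
#include <stddef.h>

/* Unpack 10-bit samples (MSB-first) into 16-bit greyscale values. */
void unpack10(const uint8_t *packed, uint16_t *out, size_t nsamples)
{
    uint32_t acc = 0;   /* bit accumulator       */
    int nbits = 0;      /* bits currently in acc */
    size_t i = 0, o = 0;
    while (o < nsamples) {
        acc = (acc << 8) | packed[i++];
        nbits += 8;
        if (nbits >= 10) {
            nbits -= 10;
            out[o++] = (uint16_t)((acc >> nbits) & 0x3ff);
        }
    }
}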

It *could* even be done on camera, but the process would take a *long* time (we have no direct access to the camera's image processing through CHDK...).
if (2*b || !2*b) {
    cout<<question
}

Compile error: poor Yorick


Offline reyalp
Re: How to Shoot Virtually Noise Free RAWs :)
« Reply #18 on: 31 / January / 2012, 16:22:37 »
Nevertheless, it would be quite interesting to have an utility to create an image directly from the raw data without de-mosaicing it. I can think of several ways where it would be useful.
The raw *is* an image created directly without de-mosaicing. So is DNG: most DNG software de-mosaics it for you, but there's no reason it has to...

You can try http://chdk.wikia.com/wiki/CHDK_Tools#rawconvert.c - note that this produces greyscale output, since it doesn't know the Bayer pattern, and producing R,G,B with two components always set to zero would be fairly pointless. It would be pretty easy to add a Bayer pattern option and output individual files for the components.
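For example, a sketch of that component split, assuming an already-unpacked 16-bit mosaic with even dimensions (which site is R, G or B depends on the camera, so the offsets are left to the caller):
Code: [Select]
#include <stdint.h>
#include <stdio.h>

/* Write one of the four Bayer components (site offset ox,oy in each
 * 2x2 cell) to its own quarter-size 16-bit binary PGM file. */
void write_component(const uint16_t *raw, int w, int h,
                     int ox, int oy, const char *name)
{
    FILE *f = fopen(name, "wb");
    if (!f) return;
    fprintf(f, "P5\n%d %d\n65535\n", w / 2, h / 2);
    for (int y = oy; y < h; y += 2)
        for (int x = ox; x < w; x += 2) {
            uint16_t v = raw[y * w + x];
            fputc(v >> 8, f);     /* PGM stores 16-bit samples */
            fputc(v & 0xff, f);   /* big-endian                */
        }
    fclose(f);
}

/* e.g. write_component(raw, w, h, 0, 0, "c00.pgm"); and likewise for
 * offsets (1,0), (0,1) and (1,1) to get all four component files. */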
Don't forget what the H stands for.

 
