One thing you can take advantage of is the construction of the CCD grid. However, using this trick also means you're limited to monochrome images.
Basically, the CCD grid in cameras usually consists of red, green and blue sensors laid out in the following pattern (the Bayer pattern):
R G R G R G R G
G B G B G B G B
R G R G R G R G
G B G B G B G B
So, each pixel consists of two green subpixels, one red subpixel, and one blue subpixel. The raw information from these subpixels is then mixed into RGB values for each pixel:
R G
G B
As you can see, there are twice as many green sensors on the grid as there are red or blue ones. Each pixel's "green" value is thus sampled from the raw data of TWO sensors, whereas the red and blue values each come from only ONE sensor.
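Just to make the averaging concrete, here is a minimal sketch in Python with numpy - deliberately oversimplified, since real converters interpolate rather than bin - of collapsing each 2x2 cell of an RGGB mosaic into a single RGB pixel:

```python
import numpy as np

def bin_bayer_rggb(raw):
    """raw: 2-D array of sensor values in an RGGB layout, even dimensions."""
    r  = raw[0::2, 0::2].astype(np.float32)   # top-left sensor of each cell
    g1 = raw[0::2, 1::2].astype(np.float32)   # top-right green
    g2 = raw[1::2, 0::2].astype(np.float32)   # bottom-left green
    b  = raw[1::2, 1::2].astype(np.float32)   # bottom-right blue
    g  = (g1 + g2) / 2.0                      # green = average of TWO sensors
    return np.dstack([r, g, b])               # (H/2, W/2, 3) RGB image
```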
Having two sample points to average means you are effectively collecting twice as much light for the green channel as for the red and blue channels, and the averaging also evens out some of the random noise from the individual sensors.
What this means for photography with digital cameras is that the red and blue channels carry about twice as much noise power as the green channel (averaging two samples roughly halves the variance of the random noise). So, if you don't mind losing the colour data of the red and blue channels, you can use UFRaw (and probably other programs too) to desaturate the image to black and white with zero intensity for the red/blue channels and 100 for green.
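If you would rather script this step than click through the UFRaw GUI, the same channel-mixer trick (R=0, G=100, B=0) amounts to keeping only the green channel of an already-developed image. The library and file names below are just placeholders:

```python
import imageio.v3 as iio   # assumed available; any image I/O library will do

# Rough scripted equivalent of the UFRaw channel mixer set to R=0, G=100, B=0:
# the "black and white" result is simply the green channel on its own.
rgb = iio.imread("developed.tiff")          # hypothetical 16-bit developed image
green_only = rgb[..., 1]                    # keep green, drop red and blue
iio.imwrite("green_only.png", green_only)   # write as a greyscale image
```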
Of course, there are drawbacks to this method too. Since the sensors in the camera's grid are tuned to different wavelength peaks for R, G and B respectively, shutting down the red and blue sensors means you also lose part of the light that comes through the objective; in the case of white light (an even spectral distribution) you lose roughly a third.
However, as a secondary effect, chromatic aberration from the camera's optics becomes much less visible, since you are only recording a fairly narrow band of wavelengths.
Obviously, there are only limited situations where you might want to use this kind of trickery. The main field of application I am thinking of is astrophotography. Both chromatic aberration and sensor noise are bad things to have when you are photographing stars, so taking advantage of the green channel's denser sampling (and thus higher SNR) might be worth at least a try - and since this is all post-processing, you'll still have the RAW data of the red and blue channels available if you decide you want to use them as well.
For the sake of relevance: serious astrophotography is typically done with monochrome CCD sensors that respond to a wide wavelength band. Multiple exposures are taken through different filters and then combined in post-processing into either false-colour images covering a scientifically interesting part of the spectrum, or true-colour images, depending on what is desired.
For example, the HST (Hubble Space Telescope) images use a palette in which three exposures taken through narrowband filters are mapped to the colour channels: S II (singly ionized sulphur emission, centered around 672.4 nanometres) to the red channel, Hα (hydrogen emission at 656.3 nm) to the green channel, and O III (doubly ionized oxygen, with two lines at 495.9 nm and 500.7 nm) to the blue channel.
This mapping is used because it is more useful for astronomers to identify different elements by colour than to see them as the human eye would; in reality, both the S II and Hα wavelengths are red, while the O III wavelengths are green.
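As a rough illustration of how such a palette is assembled - the file names are made up, and a real pipeline would also align, calibrate and stretch the frames properly - the mapping itself is just a channel stack:

```python
import numpy as np
import imageio.v3 as iio   # assumed available; any image I/O library will do

def stretch(x):
    # Crude linear stretch to the 0..1 range, purely for display.
    x = x.astype(np.float32)
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

# Hypothetical narrowband frames; the "Hubble palette" is just this mapping.
s2 = iio.imread("sii_672nm.tif")    # S II    -> red channel
ha = iio.imread("ha_656nm.tif")     # H-alpha -> green channel
o3 = iio.imread("oiii_500nm.tif")   # O III   -> blue channel

false_colour = np.dstack([stretch(s2), stretch(ha), stretch(o3)])
iio.imwrite("hubble_palette.png", (false_colour * 255).astype(np.uint8))
```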
But I digress.
As a related question, I would like to ask whether there are any utilities around capable of looking at the RAW data from CHDK (either CRW or DNG) and, instead of putting it through a demosaicing process, outputting a true greyscale image with each sub-pixel acting as an actual pixel.
This would effectively quadruple the pixel count of your camera, which could be quite handy for taking superfine-resolution images of certain types of objects.
Like the Moon, for example.
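To illustrate what I mean, here is a rough sketch using the rawpy library (a LibRaw wrapper; whether it opens a particular CHDK file is an assumption on my part, and the file names are made up). Note that each photosite still sits behind its own colour filter, so a straight dump shows a faint checkerboard unless you normalise the colour planes or shoot through a narrowband filter:

```python
import numpy as np
import rawpy               # assumed installed; LibRaw wrapper, reads DNG and CRW
import imageio.v3 as iio   # assumed available; anything that writes 16-bit PNG works

# Dump the Bayer mosaic as-is, so every photosite becomes one greyscale pixel.
with rawpy.imread("CRW_1234.DNG") as raw:
    mosaic = raw.raw_image_visible.astype(np.int32)
    black = int(np.mean(raw.black_level_per_channel))   # subtract the black level
    mosaic = np.clip(mosaic - black, 0, 65535).astype(np.uint16)

iio.imwrite("mosaic_greyscale.png", mosaic)
```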