> The basic idea is like taking a high-res, low-framerate video and summing the frames into a buffer instead of encoding them into a video stream:
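The summing idea itself is straightforward; as a point of reference, here is what the accumulation step looks like in C. This is purely illustrative, not CHDK code — the unpacked `uint16_t` pixel representation and the function name are assumptions for the sketch, and as discussed below you cannot actually get at per-frame video data this way.

```c
#include <stdint.h>
#include <stddef.h>

/* Add one sensor frame (modeled here as already-unpacked 10-bit
 * samples in uint16_t) into a 32-bit accumulator buffer of the same
 * size. On real hardware npix would be sensor_width * sensor_height,
 * i.e. millions of pixels. */
void accumulate_frame(uint32_t *acc, const uint16_t *frame, size_t npix)
{
    for (size_t i = 0; i < npix; i++)
        acc[i] += frame[i];
}
```

Note that summing N frames of b-bit data needs roughly b + log2(N) bits in the accumulator, which is why the questioner's 16- or 24-bpp buffer (or wider) comes into it.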
Some general notes:
1) CHDK currently has essentially zero control of the video recording process.
2a) How much of the process is done by the ARM side of Digic, the custom DSP side of Digic, or the sensor+readout hardware is not well understood.
2b) However, given the low performance of the ARM side, it's clear that much of the heavy lifting is done elsewhere.
3) Very little is known about controlling the DSP side, including how much of the process is programmable vs. fixed-function hardware. srsa has some relevant notes:
http://chdk.wikia.com/wiki/User:Srsa_4c
4) Frame rates and resolutions are likely defined by the sensor and readout hardware, meaning you won't be able to change them arbitrarily.
> 1) Allocate a 16- or 24-bit-per-pixel buffer the same size as the sensor, and zero it out
In video modes, full-frame sensor data is never available. Video uses a special readout mode that doesn't read the full sensor into main memory. Given the memory bandwidth available, it is certain that a full-frame raw is never transferred to main memory for video.
There is unlikely to be enough memory for an extra full copy of the raw buffer at 16 or 24 bpp. The native raw is packed Bayer RGB at 10, 12, or 14 bpp depending on the camera. Anything that processes the whole buffer in ARM code will take a substantial fraction of a second at a minimum.
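For concreteness, here is one way 10-bpp packed data can be unpacked in C. The layout below (a plain MSB-first bitstream) is an assumption for illustration only — the actual bit/byte ordering used by Canon hardware varies per camera; see the per-platform `get_raw_pixel()` implementations in the CHDK source for real layouts.

```c
#include <stdint.h>

/* Extract pixel i from a 10-bpp packed buffer, assuming a simple
 * MSB-first bitstream. NOT the actual Canon layout, which differs
 * per camera. The buffer must be padded so that reading two bytes
 * past the last pixel's first byte is safe. */
uint16_t get_pixel_10bpp(const uint8_t *buf, unsigned i)
{
    unsigned bitpos = i * 10;
    unsigned byte   = bitpos >> 3;  /* first byte containing the pixel */
    unsigned shift  = bitpos & 7;   /* bit offset within that byte */
    /* A 24-bit window always covers any 10-bit field that starts
     * within its first byte. */
    uint32_t window = ((uint32_t)buf[byte]     << 16) |
                      ((uint32_t)buf[byte + 1] <<  8) |
                       (uint32_t)buf[byte + 2];
    return (uint16_t)((window >> (14 - shift)) & 0x3FF);
}
```

Even this trivial per-pixel extraction, run over a multi-megapixel buffer in ARM code, illustrates why whole-buffer processing takes a significant fraction of a second.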
Known framebuffers and their formats are described in
http://chdk.wikia.com/wiki/Frame_buffers
> Start reading scanlines from the sensor, adding each pixel onto the value already in the buffer for that position
Sensor readout is not under our control, and AFAIK is mostly defined by hardware.
> Open the shutter
The mechanical shutter is generally only closed after the exposure, to prevent further charge from accumulating during readout. The "shutter opening" is electronic.
This description of CCD readout may be useful:
http://learn.hamamatsu.com/articles/readoutandframerates.html
(CMOS sensors are a bit different; I'm sure Google will find you the gory details if you are interested)
> - the code that reads the stills data from the sensor runs on the DSP and stores the data for the entire frame directly into a buffer in RAM,
> - the sensor's video modes are limited and none are full-resolution,
Correct.
> - and that anything to do with modifying the DSP is practically impossible.
Currently correct, but it is unknown whether this is inherently impossible (i.e. a very limited fixed-function pipeline) or could be overcome with sufficient reverse engineering.
> Are there any ways to get access to individual scanlines?
No. This is likely impossible due to hardware.
> I have time to work on this if it's not a complete dead-end.
How much experience do you have with:
1) C
2) ARM assembler
3) Reverse engineering
If you have a decent amount of experience with the above, I'd suggest taking some time to look through the CHDK code and some firmware dumps to get an idea of how things work. Speculation without a reasonably firm grasp of what is actually there is not productive. Some background reading about digital imaging hardware would also probably be helpful to understand what is theoretically possible.
FWIW, it seems to me that most of your idea could be implemented by shooting fast exposures in raw and summing them later into a higher bit depth format. The frame rate would be very low, of course.
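As a back-of-the-envelope check on the bit depth needed for that approach: summing N frames of b-bit data requires roughly b + log2(N) bits, so the number of full-scale frames that fit in a wider accumulator without overflow is 2^(acc_bits - sensor_bits). A trivial helper, for illustration only:

```c
/* Maximum number of full-scale sensor_bits-deep frames that can be
 * summed into an acc_bits-deep accumulator with no risk of overflow.
 * 2^(a-b) frames of max value (2^b - 1) sum to 2^a - 2^(a-b),
 * which still fits in a bits. */
unsigned max_frames(unsigned sensor_bits, unsigned acc_bits)
{
    return 1u << (acc_bits - sensor_bits);
}
```

So, for example, 64 ten-bit raws fill a 16-bit accumulator, while a 32-bit accumulator is effectively unbounded for any realistic number of exposures.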