
On-the-fly sensor readout while shutter is open

On-the-fly sensor readout while shutter is open
« on: 26 / March / 2013, 09:59:57 »
I've wanted a camera feature for a long time which I think may be possible with CHDK: arbitrary-length exposures without blowing highlights. The basic idea is like taking a high-res, low-framerate video and summing the frames into a buffer instead of encoding them into a video stream:

  • Allocate a 16- or 24-bit-per-pixel buffer the same size as the sensor, and zero it out
  • Start reading scanlines from the sensor, adding each pixel onto the value already in the buffer for that position
  • Open the shutter
  • Continue reading scanlines until enough time has elapsed, looping back to the first scanline each time the end is reached
  • Close the shutter
  • Read each scanline one more time
  • store the buffer as a high-bit-depth RAW

This method has the advantage that each pixel is only exposed for the time it takes to loop through all the scanlines, then it's read out and reset, so hopefully it never saturates. Another advantage is better dynamic range, since the shadows can be exposed for as long as you need. A disadvantage is that there would be more read noise. Too much read noise? Who knows, that's why there's an H in CHDK!
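
Here's a rough C sketch of the loop I have in mind. The sensor-access calls are pure placeholders (nothing like them is exposed by CHDK or the firmware as far as I know), I've used a 32-bit accumulator just to keep the sketch simple, and the buffer alone would be around 48MB for a 12MP sensor, so this is only to illustrate the idea:

```c
#include <stdint.h>
#include <string.h>

#define SENSOR_W  4000          /* example sensor geometry, not a real camera */
#define SENSOR_H  3000

/* Hypothetical hardware hooks; CHDK does not expose anything like these. */
extern void open_shutter(void);
extern void close_shutter(void);
extern void read_scanline(int row, uint16_t *dst);   /* reads and resets one row */
extern int  exposure_done(void);

/* 32-bit accumulator: big enough for hours of 10-14 bit readouts,
   but ~48MB at this size, far more than any camera has free. */
static uint32_t accum[SENSOR_H][SENSOR_W];
static uint16_t line[SENSOR_W];

void accumulate_exposure(void)
{
    memset(accum, 0, sizeof(accum));
    open_shutter();

    /* Keep cycling through the rows; each pass reads out and resets the
       charge for that row, so no single pass should saturate the pixel wells. */
    do {
        for (int row = 0; row < SENSOR_H; row++) {
            read_scanline(row, line);
            for (int col = 0; col < SENSOR_W; col++)
                accum[row][col] += line[col];
        }
    } while (!exposure_done());

    close_shutter();

    /* One final pass to collect the charge from the last partial cycle. */
    for (int row = 0; row < SENSOR_H; row++) {
        read_scanline(row, line);
        for (int col = 0; col < SENSOR_W; col++)
            accum[row][col] += line[col];
    }
    /* accum[][] is now ready to be written out as a high-bit-depth RAW. */
}
```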

Reading some previous related threads, it seems that
  • the code that reads the stills data from the sensor runs on the DSP and stores the data for the entire frame directly into a buffer in RAM,
  • the sensor's video modes are limited and none are full-resolution,
  • and that anything to do with modifying the DSP is practically impossible.

Is this correct? Are there any ways to get access to individual scanlines, and avoid operating the shutter?

I have time to work on this if it's not a complete dead-end.

Offline reyalp
Re: On-the-fly sensor readout while shutter is open
« Reply #1 on: 26 / March / 2013, 16:45:51 »
The basic idea is like taking a high-res, low-framerate video and summing the frames into a buffer instead of encoding them into a video stream:
Some general notes
1) CHDK currently has essentially zero control of the video recording process.
2a) How much of the process is done by the ARM side of Digic, the custom DSP side of Digic, or the sensor+readout hardware is not well understood.
2b) However, given the low performance of the ARM side, it's clear that much of the heavy lifting is done elsewhere.
3) Very little is known about controlling the DSP side. This includes how much of the process is programmable vs fixed function hardware. srsa has some relevant notes http://chdk.wikia.com/wiki/User:Srsa_4c
4) Frame rates and resolutions are likely defined by the sensor and readout hardware, meaning you won't be able to change them arbitrarily.
Quote
1) Allocate a 16- or 24-bit-per-pixel buffer the same size as the sensor, and zero it out
In video modes, full frame sensor data is never available. Video is done by a special readout mode that doesn't read the full sensor out into main memory. Given the memory bandwidth available, it is certain that a full frame raw is never transferred to main memory for video.

There is unlikely to be enough memory to have an extra full copy of the raw buffer at 16 or 24 bpp. The native raw is packed Bayer data at 10/12/14 bpp depending on the camera. Anything that processes the whole buffer in ARM code will take a substantial fraction of a second at a minimum.

Known framebuffers and their formats are described in http://chdk.wikia.com/wiki/Frame_buffers
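
As an aside, even reading a single packed pixel costs shifts and masks. A generic 12 bpp example (not the byte order Canon actually uses; CHDK's raw code handles the real per-camera layout):

```c
#include <stdint.h>

/* Generic 12-bit packing: two pixels per three bytes.  This is NOT the byte
   order Canon uses (that differs per camera and is handled by CHDK's raw
   pixel access code); it just shows why every access needs shifts and masks
   instead of a plain array index. */
static inline uint16_t get_pixel_12bpp(const uint8_t *buf, unsigned idx)
{
    const uint8_t *p = buf + (idx >> 1) * 3;
    if (idx & 1)
        return (uint16_t)(((p[1] & 0x0F) << 8) | p[2]);
    else
        return (uint16_t)((p[0] << 4) | (p[1] >> 4));
}
```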
Quote
Start reading scanlines from the sensor, adding each pixel onto the value already in the buffer for that position
Sensor readout is not under our control, and AFAIK is mostly defined by hardware.

Quote
Open the shutter
The mechanical shutter is generally only closed after the exposure, to prevent further charge accumulating during readout. The "shutter opening" is electronic.

This description of CCD readout may be useful http://learn.hamamatsu.com/articles/readoutandframerates.html  (CMOS sensors are a bit different; I'm sure google will find you the gory details if you are interested)

Quote
-  the code that reads the stills data from the sensor runs on the DSP and stores the data for the entire frame directly into a buffer in RAM,
- the sensor's video modes are limited and none are full-resolution,
Correct
Quote
- and that anything to do with modifying the DSP is practically impossible.
Currently correct, but unknown whether it is inherently impossible (i.e. very limited fixed function pipelines) or could be overcome with sufficient reverse engineering.
Quote
Are there any ways to get access to individual scanlines
No. This is likely impossible due to hardware.
Quote
I have time to work on this if it's not a complete dead-end.
How much experience do you have with
1) C
2) ARM assembler
3) Reverse engineering
If you have a decent amount of experience with the above, I'd suggest taking some time to look through the CHDK code and some firmware dumps to get an idea of how things work. Speculation without a reasonably firm grasp of what is actually there is not productive. Some background reading about digital imaging hardware would also probably be helpful to understand what is theoretically possible.

FWIW, it seems to me that most of your idea could be implemented by shooting fast exposures in raw and summing them later into a higher bit depth format. The frame rate would be very low, of course.
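
For example, a host-side sketch that sums a set of frames into a 32-bit buffer. It assumes the frames have already been unpacked to flat 16-bit little-endian dumps; that format and the frame size are just for the example:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define W 4000   /* example frame size */
#define H 3000

int main(int argc, char **argv)
{
    size_t npix = (size_t)W * H;
    uint32_t *accum = calloc(npix, sizeof *accum);   /* 32-bit running sum */
    uint16_t *frame = malloc(npix * sizeof *frame);
    if (!accum || !frame) return 1;

    /* Each argument is one exposure, dumped as flat 16-bit samples. */
    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "rb");
        if (!f || fread(frame, sizeof *frame, npix, f) != npix) {
            fprintf(stderr, "bad frame: %s\n", argv[i]);
            return 1;
        }
        fclose(f);
        for (size_t p = 0; p < npix; p++)
            accum[p] += frame[p];
    }

    /* Write out the 32-bit sum; scale or convert to a viewable format as needed. */
    FILE *out = fopen("sum.raw", "wb");
    if (!out) return 1;
    fwrite(accum, sizeof *accum, npix, out);
    fclose(out);
    return 0;
}
```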
« Last Edit: 28 / March / 2013, 02:08:05 by reyalp »
Don't forget what the H stands for.

Re: On-the-fly sensor readout while shutter is open
« Reply #2 on: 27 / March / 2013, 04:58:18 »
Quote
How much experience do you have with
1) C
2) ARM assembler
3) Reverse engineering
If you have a decent amount of experience with the above, I'd suggest taking some time to look through the CHDK code and some firmware dumps to get an idea of how things work. Speculation without a reasonably firm grasp of what is actually there is not productive. Some background reading about digital imaging hardware would also probably be helpful to understand what is theoretically possible.

1) I have some experience with C (on x86, PIC, ARM, and a few others, mostly robot controllers)
2) None with ARM assembler, but some with x86
3) Software-side, some reverse engineering experience, but with the aid of a debugger.

I've already taken a look through the CHDK code, and it seemed like what I wanted to do was pretty far out there compared to the existing code, but I wanted to hear what someone more knowledgeable had to say before I spent the time experimenting myself. Thanks for your comments.

Quote
FWIW, it seems to me that most of your idea could be implemented by shooting fast exposures in raw and summing them later into a higher bit depth format. The frame rate would be very low, of course.

This is something I already do with my camera  :D  The problem is that, even in JPEG mode, with multi-hour exposures the card fills up.

I'm going to start experimenting with my camera and see if I can make a multi-exposure mode that doesn't save a new image with every frame. This would get me most of the functionality I need, with the main downside being shutter wear.

Re: On-the-fly sensor readout while shutter is open
« Reply #3 on: 27 / March / 2013, 09:38:11 »
I'm going to start experimenting with my camera and see if I can make a multi-exposure mode that doesn't save a new image with every frame. This would get me most of the functionality I need, with the main downside being shutter wear.
It seems like you might want to start by looking at the RAW code - especially RAW subtract. If you are willing to roll your own custom version then, while it would not be fast,  you could play games with the RAW buffers and your own buffer.  Different cameras have varying amounts of RAM but with enough space you could sum several buffers and occasionally spin the result off to the SD card.    You could even get fancy and maybe pseudo-multi-thread by halting the shooting process at "wait_until_remote_button_is_released()" until you have finished with the previous shot.

As a wild guess, you might be able to get off a shot every 10 seconds that way?

And autodeleting any jpgs would not be too hard at that point  either.

All kinds of options once you fire up your compiler ...
Ported :   A1200    SD940   G10    Powershot N    G16

Offline reyalp
Re: On-the-fly sensor readout while shutter is open
« Reply #4 on: 27 / March / 2013, 16:18:35 »
I'm going to start experimenting with my camera and see if I can make a multi-exposure mode that doesn't save a new image with every frame.
You could "stack" the images on camera (see the raw sum / raw average code etc for example) but this will be very slow. It's unlikely you will find sufficient free RAM to keep your summed image there, so you'd need to keep it on SD card and do the summing in chunks.

Waterwingz's suggestion that you could steal alternate raw buffers might be worth looking into, but I suspect it won't work. The same address space may be used for other things outside the shooting process (the first "raw buffer" often occupies the same address space as the live view), and in any case it won't be large enough if your summed image has a greater bit depth than the native raw.
Quote
This would get me most of the functionality I need, with the main downside being shutter wear.
I suspect you won't get multiple readouts in the normal Canon shooting process. One place to look might be the dark frame process, since that does take a second exposure and readout. Another reason to investigate dark frame is that it appears Digic does the subtraction directly in the readout process. If you could convince it to add instead of subtract, that might be interesting, but it would most likely be limited to the native raw bit depth.

A more productive approach would likely be to take multiple normal shots and do something with the raw data each time. With the current trunk, jpegs will be saved but you can delete them or use a minimal size. The experimental remote capture branch does allow you to suppress jpeg saving if you implement the required task hook.

Speaking of remote capture, an alternate approach would be to use the PTP extension and use some external PTP capable device to handle the image processing for each shot. With remote capture of the raw, a netbook or raspberry pi would be much faster than doing it on camera.
Don't forget what the H stands for.

Re: On-the-fly sensor readout while shutter is open
« Reply #5 on: 27 / March / 2013, 23:10:39 »
Waterwingz's suggestion that you could steal alternate raw buffers might be worth looking into, but I suspect it won't work. The same address space may be used for other things outside the shooting process (the first "raw buffer" often occupies the same address space as the live view), and in any case it won't be large enough if your summed image has a greater bit depth than the native raw.
That actually isn't quite what I was trying to say. 

I was more thinking that he could create a new 16-bit RAM buffer and sum up to sixteen 12-bit RAW buffers into that, while filtering so as to not overexpose any area. The RAM buffer would take a lot of space, so feasibility would depend on the Canon free space of the camera chosen, the amount of CHDK code you can build out (small in these days of modules) and careful juggling of the different available memory regions.

It's a "science project" at this point ... might as well have some fun with it.  You could just use the center 1/9th of the sensor if you run out of space for your RAM buffer.  Poor man's zoom but proof of concept - not to mention faster to process.
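
Roughly like this, assuming the 12-bit values have already been unpacked from the Canon buffer (sixteen frames at 4095 max sum to 65520, so they just squeeze into 16 bits):

```c
#include <stdint.h>
#include <stddef.h>

/* Add one unpacked 12-bit frame into a 16-bit accumulator.  Sixteen frames
   of at most 4095 per pixel top out at 65520, so they just fit in 16 bits;
   the clamp is only a guard against adding more frames than planned. */
void accumulate_frame_u16(uint16_t *accum, const uint16_t *frame12, size_t npix)
{
    for (size_t i = 0; i < npix; i++) {
        uint32_t v = (uint32_t)accum[i] + frame12[i];
        accum[i] = (v > 0xFFFF) ? 0xFFFF : (uint16_t)v;
    }
}
```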



Ported :   A1200    SD940   G10    Powershot N    G16

Offline philmoz
Re: On-the-fly sensor readout while shutter is open
« Reply #6 on: 27 / March / 2013, 23:31:17 »
Waterwingz's suggestion that you could steal alternate raw buffers might be worth looking into, but I suspect it won't work. The same address space may be used for other things outside the shooting process (the first "raw buffer" often occupies the same address space as the live view), and in any case it won't be large enough if your summed image has a greater bit depth than the native raw.
That actually isn't quite what I was trying to say. 

I was more thinking that he could create a new 16-bit RAM buffer and sum up to sixteen 12-bit RAW buffers into that, while filtering so as to not overexpose any area. The RAM buffer would take a lot of space, so feasibility would depend on the Canon free space of the camera chosen, the amount of CHDK code you can build out (small in these days of modules) and careful juggling of the different available memory regions.

I don't think there are any existing cameras you could do this on for the full RAW image size.

A 10MP camera with a 12-bit sensor has a 15MB raw buffer. To create a 16-bit raw image would require 20MB of memory.

The largest free heap memory I've seen is only 2.5MB.
I was able to get an 8MB EXMEM buffer on the G12 - anything beyond that caused problems (slow shooting, video issues etc).

Then there's the processing speed issue. Using the badpixel file creation as an example, it takes a long time (10+ seconds) to process the entire raw buffer due to the way the pixels are packed into memory. And that's just reading and testing the values - merging and updating another buffer would take even longer.

Having said that, some recent cameras do in-camera HDR from 3 full-size images; but whether the Digic processor is used for this is unknown. It may be possible to hook into this HDR code on a supported camera, but this will require a lot of reverse engineering.

Phil.
CHDK ports:
  sx30is (1.00c, 1.00h, 1.00l, 1.00n & 1.00p)
  g12 (1.00c, 1.00e, 1.00f & 1.00g)
  sx130is (1.01d & 1.01f)
  ixus310hs (1.00a & 1.01a)
  sx40hs (1.00d, 1.00g & 1.00i)
  g1x (1.00e, 1.00f & 1.00g)
  g5x (1.00c, 1.01a, 1.01b)
  g7x2 (1.01a, 1.01b, 1.10b)

Re: On-the-fly sensor readout while shutter is open
« Reply #7 on: 27 / March / 2013, 23:53:06 »
I don't think there are any existing cameras you could do this on for the full RAW image size. A 10MP camera with a 12-bit sensor has a 15MB raw buffer. To create a 16-bit raw image would require 20MB of memory.
The largest free heap memory I've seen is only 2.5MB. I was able to get an 8MB EXMEM buffer on the G12 - anything beyond that caused problems (slow shooting, video issues etc).
As I typed my message I had a nagging feeling I should have done the math.  So that was why I suggested using the center 1-of-9 section of the sensor (poor man's zoom).  That gets you down to around 2MB for the buffer.  Depending on what you are trying to do, the resolution trade-off vs the dynamic range might work.

Quote
Then there's the processing speed issue. Using the badpixel file creation as an example it takes a long time (10+ seconds) to process the entire raw buffer due to the way the pixels are packed into memory. And that's just reading and testing the values - merging and updating another buffer would take even longer.
So my guess of about 10 seconds per shot was wishful thinking.  But again, the 1-of-9 compromise and starting the next shot prior to having finished processing the current shot would help a lot here.

Quote
Having said that, some recent cameras do in camera HDR from 3 full size images; but whether the Digic processor is used for this is unknown. It may be possible to hook into this HDR code on a supported camera; but this will require a lot of reverse engineering.
Played with the HDR function on the sx50.  A closer look on a rainy day at the underlying code might be fun.

Update : the 1-of-9 could be swapped for every 3rd pixel in the x & y directions.  I have a lot of great snapshots from my original 1M pixel camera.  Not a great trade-off, but not impossible - YMMV.
« Last Edit: 28 / March / 2013, 00:34:28 by waterwingz »
Ported :   A1200    SD940   G10    Powershot N    G16

Re: On-the-fly sensor readout while shutter is open
« Reply #8 on: 26 / January / 2018, 23:59:01 »
Old, but still interesting idea. You could analyse NxN sections, then do 3-shot HDR but save only the brightest and darkest sections to the limited RAM buffer, along with their locations and exposure. Worst case would be half bright sky and half dark ground, where you wouldn't have enough room for enough blocks of detail in either area, but you could still prioritize the subject square (such as a detected face).
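
Rough sketch of that analysis pass in C (assumes an already-unpacked single-channel buffer; which tiles to keep, and the face priority, would be decided from these means):

```c
#include <stdint.h>

#define TILE 64   /* example NxN tile size */

/* Compute the mean level of each TILE x TILE block so the brightest and
   darkest blocks can be picked for saving.  Assumes an already-unpacked
   single-channel buffer; deciding which blocks fit in the limited RAM and
   prioritizing a detected-face block would happen on top of this. */
void tile_means(const uint16_t *img, int w, int h, uint32_t *means)
{
    int tw = w / TILE, th = h / TILE;
    for (int ty = 0; ty < th; ty++) {
        for (int tx = 0; tx < tw; tx++) {
            uint64_t sum = 0;
            for (int y = 0; y < TILE; y++)
                for (int x = 0; x < TILE; x++)
                    sum += img[(ty * TILE + y) * w + (tx * TILE + x)];
            means[ty * tw + tx] = (uint32_t)(sum / (TILE * TILE));
        }
    }
}
```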

Btw, I read a paper on a way to save a lot of memory when taking HDR pics: it saves only the modulus of the pixel values, in other words, each pixel is stored as if it were properly exposed. The higher bits are recovered ~99% of the time with an algorithm. (This only skips saving the bits above the stored range, which isn't many bits.)

http://web.media.mit.edu/%7ehangzhao/papers/moduloUHDR.pdf
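
A toy 1D version of the modulo recovery idea (the paper reconstructs 2D images with a much more robust algorithm; this only shows the unwrapping principle under a smoothness assumption):

```c
#include <stdint.h>
#include <stddef.h>

/* Toy 1D "unwrap": given samples stored modulo M (e.g. M = 4096 for a
   12-bit sensor), recover absolute values under the assumption that
   neighbouring pixels differ by less than M/2.  This is not the paper's
   algorithm, just an illustration of the principle. */
void unwrap_modulo_1d(const uint16_t *wrapped, uint32_t *out, size_t n, uint32_t M)
{
    if (n == 0) return;
    out[0] = wrapped[0];
    for (size_t i = 1; i < n; i++) {
        int32_t d = (int32_t)wrapped[i] - (int32_t)wrapped[i - 1];
        if (d >  (int32_t)(M / 2)) d -= (int32_t)M;   /* value wrapped downward */
        if (d < -(int32_t)(M / 2)) d += (int32_t)M;   /* value wrapped upward   */
        out[i] = (uint32_t)((int64_t)out[i - 1] + d);
    }
}
```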
« Last Edit: 27 / January / 2018, 00:15:26 by jmac698 »

 
