Thanks a lot for the responses! The spot meter function is already close to what I would like to do.
However, I want to take a picture and then analyze it, rather than work with live information (i.e. I want to control exposure time, etc.). I don't think the procedure used on the motion-detect buffer can easily be applied to the raw buffer you get from a taken picture (?)
It looks to me like it should be possible to modify the build_shot_histogram() function in shot_histogram.c so that it only takes the red, green, or blue pixels into account. You could then have one shot_histogram variable per color and slightly modify shot_histogram_get_range() to calculate the values (a rough sketch of what I mean is below).
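Just to make the idea concrete, here is an untested sketch. get_raw_pixel() is the existing function from raw.c, but everything else (the function and array names, the number of bins, and the assumption that red sits at even x / even y in an RGGB layout) is my own guess and not taken from the CHDK source:

#include <string.h>

extern unsigned short get_raw_pixel(unsigned int x, unsigned int y); /* from raw.c; signature may not be exact */

#define HIST_BINS 1024  /* assuming 10-bit raw values (0..1023); a 12-bit sensor would need 4096 */

static unsigned int hist_r[HIST_BINS];
static unsigned int hist_g[HIST_BINS];
static unsigned int hist_b[HIST_BINS];

/* hypothetical per-colour variant of build_shot_histogram():
   walk the region in 2x2 Bayer cells and bin each colour separately */
void build_rgb_shot_histogram(unsigned int x1, unsigned int y1, unsigned int x2, unsigned int y2)
{
    unsigned int x, y;
    memset(hist_r, 0, sizeof(hist_r));
    memset(hist_g, 0, sizeof(hist_g));
    memset(hist_b, 0, sizeof(hist_b));

    for (y = y1; y < y2; y += 2) {        /* step 2 so every iteration starts on the same 2x2 Bayer cell */
        for (x = x1; x < x2; x += 2) {
            hist_r[get_raw_pixel(x,     y    )]++;  /* red   (even row, even column)? */
            hist_g[get_raw_pixel(x + 1, y    )]++;  /* green (even row, odd  column)? */
            hist_b[get_raw_pixel(x + 1, y + 1)]++;  /* blue  (odd  row, odd  column)? */
        }
    }
}

A per-colour version of shot_histogram_get_range() would then just read from hist_r/hist_g/hist_b instead of the single histogram. Does that look like a sensible direction?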
To do this I need to understand how the get_raw_pixel() function in raw.c works. I get the basic principle of how it assembles the bits belonging to one pixel, but I don't understand how you know which pixel is red, green, or blue, or where in the picture each pixel sits.
Basically I just need to know which pixels to access to get only the red, green, or blue ones. More specific questions:
- In get_raw_pixel(), the pixel is selected by x and y. Am I correct in assuming that these are just the normal pixel coordinates (column and row) in the raw buffer?
- The wiki page on frame buffers says that the most common format is red-green-green-blue. What does this mean exactly? Do the pixels in one row follow the sequence RGGBRGGB..., so that I could always hit the same color by increasing x in steps of 4? And what is the sequence within a column? (My current guess is written out in the sketch after this list; please correct me if it's wrong.)
- Why does shot_histogram use a step size of 31 in both x and y?
- Could somebody maybe explain how the address of each pixel is found in raw.c? The line in question is: addr = (unsigned char*)rawadr + y*camera_sensor.raw_rowlen + (x/8)*10;
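To make the second question concrete, this is my current mental model, written out as code so it is easy to point at what's wrong. The helper name bayer_color() is mine, and the assumption that the pattern starts with red at (0,0) is just a guess (I imagine the actual origin depends on the camera):

enum bayer_col { BAYER_RED, BAYER_GREEN, BAYER_BLUE };

/* RGGB as I currently understand it:
 *   even rows: R G R G R G ...
 *   odd  rows: G B G B G B ...
 * i.e. the pattern repeats every 2 pixels in x and every 2 rows in y,
 * not every 4 pixels within one row. */
static enum bayer_col bayer_color(unsigned int x, unsigned int y)
{
    if ((y & 1) == 0)                           /* even row */
        return (x & 1) ? BAYER_GREEN : BAYER_RED;
    else                                        /* odd row  */
        return (x & 1) ? BAYER_BLUE : BAYER_GREEN;
}

/* And my reading of the address line from raw.c:
 * 8 pixels x 10 bits = 80 bits = 10 bytes, so every group of 8 pixels in a
 * row is packed into 10 bytes, hence the (x/8)*10; y*camera_sensor.raw_rowlen
 * skips down to the start of row y:
 *   addr = (unsigned char*)rawadr + y*camera_sensor.raw_rowlen + (x/8)*10;
 */

Is that roughly right, or am I off on the pattern?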
Sorry if some of the questions are very naïve; as I said, I am completely new to this.