I've added detection for viewport_buffers and active_viewport_buffer to the DryOS sig finder.
I have one camera that will work with this (A1200) so I decided to try it out. Added some debugs to display to the LCD using the code at the bottom of main.c.
The first thing I found out is that the current version of vid_get_viewport_live_fb() for the A1200 always returns 0. Bad porting job - the MD code just defaults to vid_get_viewport_fb().
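For reference, the fallback in the MD code goes roughly like this (a sketch from memory, not the exact CHDK source; process_frame() is just a placeholder for whatever consumes the buffer):

// If the port's live routine returns 0, MD quietly falls back to the
// static viewport buffer, so it keeps running - just not against the
// live frame.
void *vid_get_viewport_live_fb(); // port specific, returns 0 on a bad port
void *vid_get_viewport_fb();      // always available
void process_frame(unsigned char *img); // placeholder, not a real CHDK function

static void md_grab_frame(void) {
    unsigned char *img = (unsigned char *)vid_get_viewport_live_fb();
    if (img == 0) {
        img = (unsigned char *)vid_get_viewport_fb(); // the fallback in question
    }
    process_frame(img);
}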
So I added some code to the spytask to scan RAM from 0x1000 to 0x3000 looking for the address returned by vid_get_viewport_fb(), and found three locations holding it. That's pretty much what the A2200 code comments told me to expect. Added another debug and found that only one of those three seemed to change with shooting mode active.
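For anyone who wants to try the same thing on another camera, the scan itself is trivial - something along these lines, where debug_display() stands in for whatever LCD debug output you have wired up (it is not a real CHDK function):

// Walk RAM from 0x1000 to 0x3000 a word at a time and report every
// address whose contents equal the pointer vid_get_viewport_fb() returns.
void *vid_get_viewport_fb();
void debug_display(int slot, unsigned val); // hypothetical LCD output helper

static void scan_for_viewport_ptr(void) {
    unsigned target = (unsigned)vid_get_viewport_fb();
    unsigned addr;
    int hits = 0;
    for (addr = 0x1000; addr < 0x3000; addr += 4) {
        if (*(volatile unsigned *)addr == target) {
            debug_display(hits++, addr); // one LCD line per match
        }
    }
    debug_display(9, hits); // final line: how many matches were found
}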
Success - I have a new vid_get_viewport_live_fb() for the A1200 1.00c:

void *vid_get_viewport_live_fb() { // lifted from a2200 1.00b
    return (void*)(*(int*)0x20F8); // possible addresses (20F8, 214C, 28C0)
}
Not so good for the 1.00b version though, as I don't have that camera and so have no easy way to find the right address for it. I'll ping some of the beta testers and see if they will run some custom code for me, I guess.
My next test was to compare my new vid_get_viewport_live_fb() to the value the new sigfinder and G12-based code finds. Both routines turn out to rotate through the same four buffer addresses in shooting mode. So far so good.
However, the address returned by each routine at the same point in time is different. I added a debug to simply subtract the two values and display the result in real time. I get the seven difference values you would expect - with four buffers the two pointers can only be -3 to +3 buffers apart - and all are multiples of 0x3F480, which I assume is the buffer size. I think this indicates random sync differences between the two routines (unless the buffer addresses change while I'm calculating the difference between them, which seems unlikely).
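The difference debug itself is nothing more than this kind of thing (sketch only - the name of the sigfinder/G12 based routine is a placeholder here, and debug_display() is the same hypothetical helper as above):

#define VP_BUF_SIZE 0x3F480 // assumed buffer size, based on the observed deltas

void *vid_get_viewport_live_fb();       // my new A1200 routine
void *vid_get_active_viewport_buffer(); // placeholder name for the sigfinder/G12 based routine
void debug_display(int slot, int val);  // hypothetical LCD output helper

static void compare_viewport_routines(void) {
    int a = (int)vid_get_viewport_live_fb();
    int b = (int)vid_get_active_viewport_buffer();
    debug_display(0, a - b);                 // raw difference, updated each call
    debug_display(1, (a - b) / VP_BUF_SIZE); // how many buffers apart (-3..+3)
}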
So I now have two ways to find vid_get_viewport_live_fb(), but they return different results (same buffer list, different buffer at any point in time). Not sure where to go next with this - just go for empirical testing and see which one comes out faster?
Update: just realized I should be able to modify the MD code to use both routines and then see which one is quicker to pick up on a change. reyalp's "flash the autofocus LED to trigger the test" idea might be something to look at now - rough sketch of what I have in mind below.
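Very much a sketch: get_tick_count() is the usual firmware tick counter, the second routine name is a placeholder again, and log_result(), SAMPLE_OFFSET and THRESHOLD are all made up for illustration.

long get_tick_count(void);                  // firmware tick counter
void *vid_get_viewport_live_fb(void);       // my new A1200 routine
void *vid_get_active_viewport_buffer(void); // placeholder name for the sigfinder/G12 based routine
void log_result(int which, long ticks);     // hypothetical

#define SAMPLE_OFFSET 0x1000 // arbitrary offset into the viewport buffer
#define THRESHOLD     32     // arbitrary change that counts as "seen it"

static void race_the_routines(void) {
    unsigned char base_a = *((unsigned char *)vid_get_viewport_live_fb() + SAMPLE_OFFSET);
    unsigned char base_b = *((unsigned char *)vid_get_active_viewport_buffer() + SAMPLE_OFFSET);
    long start = get_tick_count();
    long t_a = -1, t_b = -1;

    // ... trigger the visible change here, e.g. flash the AF LED ...

    while (t_a < 0 || t_b < 0) {
        unsigned char cur_a = *((unsigned char *)vid_get_viewport_live_fb() + SAMPLE_OFFSET);
        unsigned char cur_b = *((unsigned char *)vid_get_active_viewport_buffer() + SAMPLE_OFFSET);
        if (t_a < 0 && (cur_a > base_a ? cur_a - base_a : base_a - cur_a) > THRESHOLD)
            t_a = get_tick_count() - start;
        if (t_b < 0 && (cur_b > base_b ? cur_b - base_b : base_b - cur_b) > THRESHOLD)
            t_b = get_tick_count() - start;
    }
    log_result(0, t_a); // direct pointer version
    log_result(1, t_b); // sigfinder/G12 based version
}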