I've been struggling to come up with a good model for this all week - one thing I'm unclear on is this: how does the logical viewport size relate to the OSD bitmap screen size (and ultimately the live image view) when it gets displayed on the PC?
We discussed this a bit in IRC, but I'll try to elaborate here.
The idea of the "logical size" is to tell the client how the valid viewport data (called "visible size" in my test code) relates to the physical screen on the camera. For example, if the viewport data has 120 lines, the client needs to know whether this is a 120 line high window inside a 240 line high screen (like stitch), or a full screen buffer (like digital zoom, or low res video).
This information was missing from the old code, which effectively assumed vid_get_viewport_max_height was always the full screen height; that is generally true on cameras like the g12, but not true everywhere.
The "logical size" is given in units of buffer pixels, because these are easily available, and convenient for calculation with other offsets etc. The "logical size" that represents a full screen camera display is not fixed. Full screen is indicated by logical size == visible size.
I've been thinking it might be clearer to define the "logical size" in terms of margins, so "logical height" equals top margin + visible height + bottom margin. The top/left values already exist as offsets, and most cases are symmetric.
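To make the relationships concrete, here's a rough sketch of the kind of fields involved. The names are made up for this example and aren't necessarily what will end up in the protocol structure:

```c
/* Illustrative sketch only - the field names here are made up for this
   example, not the final protocol structure. All values are in buffer
   pixels of the buffer being described. */
typedef struct {
    int logical_width;   /* full camera screen width, in this buffer's pixels */
    int logical_height;  /* full camera screen height, in this buffer's pixels */
    int visible_width;   /* width of the valid data actually transferred */
    int visible_height;  /* height of the valid data actually transferred */
    int x_offset;        /* left margin: where the valid data starts on the screen */
    int y_offset;        /* top margin: where the valid data starts on the screen */
} fb_desc;

/* full screen is indicated by logical size == visible size */
static int is_full_screen(const fb_desc *d)
{
    return d->logical_width == d->visible_width
        && d->logical_height == d->visible_height;
}

/* the margin formulation: logical height == top margin + visible height
   + bottom margin, so the bottom margin is implied */
static int bottom_margin(const fb_desc *d)
{
    return d->logical_height - d->y_offset - d->visible_height;
}
```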
The offsets are complicated by the fact that there are offsets within the buffer (e.g. 16:9 on the g12) and offsets of the buffer data on the screen (e.g. stitch, g1x).
The Y offset within the buffer can (and IMO should) go away, so that the appropriate offset into the real buffer is calculated on the camera and only the useful data is sent. But as mentioned earlier, we really can't do this with the X offset.
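Roughly what I have in mind for the camera side (a sketch only; send_fn stands in for whatever the PTP transfer actually uses, and none of these names are real CHDK functions):

```c
/* Skip the unused rows at the start of the physical buffer, so only valid
   data is transferred and the client never needs a Y offset into the data
   it receives. */
static void send_visible_rows(const char *fb_base, int bytes_per_line,
                              int y_offset_in_buffer, int visible_height,
                              void (*send_fn)(const void *buf, unsigned len))
{
    const char *start = fb_base + y_offset_in_buffer * bytes_per_line;
    send_fn(start, (unsigned)(visible_height * bytes_per_line));
}
/* The same trick doesn't work for the X offset: trimming the left edge
   would leave the rows non-contiguous, so it would take a row-by-row copy
   instead of sending one block. */
```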
The final rendering is up to the client. Given the physical aspect ratio of the camera screen, the client can display a window of any size it wants; the "logical size", "visible size" and offsets tell it how to place the data within that window. The current chdkptp code (if correct aspect ratio display is selected) fixes the width at 1:1 or 1:2 viewport pixels and then scales the height for aspect ratio, but that's just because it was convenient to write that way. Clients that want an aspect-ratio-correct display will generally have to do arbitrary non-integer scaling anyway.
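For example (again just a sketch, not the actual chdkptp drawing code), the placement only depends on simple ratios, so a client can pick any window size:

```c
/* Compute where the visible data lands in a client window of arbitrary
   size. The logical/visible/offset values are in buffer pixels of the
   buffer being drawn; the result is in window pixels. */
typedef struct { int x, y, w, h; } rect;

static rect place_in_window(int win_w, int win_h,
                            int logical_w, int logical_h,
                            int visible_w, int visible_h,
                            int x_offset, int y_offset)
{
    rect r;
    r.x = x_offset * win_w / logical_w;
    r.y = y_offset * win_h / logical_h;
    r.w = visible_w * win_w / logical_w;
    r.h = visible_h * win_h / logical_h;
    return r;
}
/* e.g. a 120 line stitch viewport (say visible height 120, logical height
   240, y offset 60) drawn in a 480 pixel high window comes out 240 pixels
   high starting at y=120, while a full screen buffer fills the window. */
```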
Re the bitmap:
As far as we know, the bitmap is always full screen, but its resolution doesn't have a fixed relationship to the viewport. (e.g. on the a540 the visible bitmap data is always 352x240, but the viewport might be 704x240, 704x528, 352x240, 704x60, ...)
My current test code uses the same lv_framebuffer_desc structure as the viewport, but this is probably overkill.
The "logical sizes" given in the in the viewport and bitmap descriptions are each in their own units (buffer pixels of that buffer), but can be used to display correctly because be the logical size tells you how much of the camera screen each one covers.
Considerations for the existing on-camera viewport/bitmap code:
Unfortunately, the existing CHDK code isn't aware of a lot of these variations. For example, on cameras where digital zoom varies the size of the buffer, zebra, histogram and MD will all be wrong when digital zoom is used.
On cameras like the a540, the viewport and bitmap are assumed to be 360x240 (using every other Y value in the viewport), but in fact only 352x240 contains valid data, which means the histogram will have some junk in it and MD might be spuriously triggered. MD also won't work correctly in video modes on these cameras.
There is also quite a bit of cruft due to adapting newer cameras with different resolutions to CHDK units.
It would be nice to use the correct dimensions everywhere, but I'd rather leave that as a project for another day. If we get all the values for live view, that will be a step in the right direction. The downside is that we will get more functions that return similar but not exactly equivalent values.
Ugh, that was long
