The viewport's dimensions are irrelevant in this case, since you're working in windowed mode.
You could even use a viewport 551 pixels wide by 777 high, FWIW; OpenGL won't care as long as you're not in fullscreen mode.
You aren't following. In CHDK, we call the camera live view (the viewfinder image, or the display of a selected JPEG/AVI in playback mode) the "viewport". This has specific fixed dimensions, aspect ratio, etc., and is completely unrelated to the OpenGL concept of a viewport.
Sorry, I realize I wasn't explaining myself correctly.
My point is: an OpenGL or DirectX device frame buffer can be instantiated at a 720x240 resolution and recklessly be presented on a 333x999 surface (with deformation, that goes without saying). It's up to the graphics card driver implementation to make it happen, or to OpenGL's software emulation if your card's driver gives up on this kind of task (which should be quite rare these days).
Need an example? Try some old DirectX 9.0c SDK tutorials: you can resize the presenting window without DirectX even firing an event about frame buffer resizing.
But if you present a pixel buffer directly to a rendering surface, that's another story: the presentation stage is no longer transparent to you, since you affect the rendering surface directly with your own pixel data.
FWIW, as I said earlier in a post-scriptum, recent OpenGL developments added those "frame buffer objects", making the implementation of an OpenGL client app totally device-independent, unlike P-buffers.
"The filter was not designed for photographs, but for images with clear sharp edges, like line graphics or cartoon sprites.
It was also designed to be fast enough to process 256x256 images in real-time."
Aaaawww... Too bad then. It was a good alternative to nearest-neighbor scaling without having to use floating-point arithmetic, though.