It would be nice for the client library to provide fast convolution filters on large stills, and GPU shaders are the fastest way to achieve that nowadays.
That's why I plan on using GLSL for this: shader-based filters for big stills.
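To make the idea concrete, here is a minimal pure-Python sketch of what such a convolution filter computes on a grayscale image; the sharpening kernel is just one illustrative example, and a GLSL fragment shader would run the inner two loops once per pixel, sampling neighbouring texels instead of list elements:

```python
def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a 2D grayscale image (list of lists).
    Border pixels are left untouched, as a simple edge policy."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A common sharpening kernel, as one example of a 3x3 filter.
sharpen = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]
```

On the GPU this per-pixel loop parallelizes trivially, which is exactly why shaders are the faster route on large stills.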
So, your PC is connected (probably indoors) to a camera via PTP, and through your code you have LiveView.
What then?
I am just curious: is this an abstract programming exercise, or does it have some practical use?
Hahaaaaaa ! At last,
THE question.
me gusta !
The practical, advanced uses of the CHDK PTP extensions are, in my opinion, in still-vision applications.
It all relies on two functionalities :
- Liveview (or anything faster than a regular capture) is obviously used for human real-time control, scene preparation, and/or rough alignment of the camera on its target. It can also be used for change detection in a quick integrator stage: the relatively low resolution and fast acquisition are ideal for detecting motion or scenery changes.
- Full-resolution captures allow differentiators (such as Sobel/Canny edge detection) to operate on a much more detailed source than regular CCTV devices or high-priced webcams.
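The change-detection idea on liveview frames can be sketched as simple frame differencing; the flat grayscale-list frame format here is an assumption for illustration, not the actual CHDK liveview format:

```python
def motion_score(prev, curr, threshold=16):
    """Return the fraction of pixels whose brightness changed by more
    than `threshold` between two equally-sized grayscale frames.
    A score above some tuned cutoff would flag motion or a scene change."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > threshold)
    return changed / len(curr)
```

At liveview resolutions and frame rates, even this naive per-pixel loop is cheap enough on the PC side to run continuously as the "quick integrator stage".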
That being said, having these two functionalities united in a single device makes CHDKPTP a perfect stepping stone toward a poor man's high-end vision solution.
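The Sobel differentiator mentioned above boils down to two fixed 3x3 gradient kernels; a minimal pure-Python sketch follows, purely for illustration (a real pipeline on full-resolution captures would use numpy or the GPU):

```python
# Standard Sobel gradient kernels (horizontal and vertical derivative).
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image):
    """Gradient magnitude per interior pixel of a 2D grayscale image
    (list of lists); border pixels are zeroed for simplicity."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for ky in range(3):
                for kx in range(3):
                    p = image[y + ky - 1][x + kx - 1]
                    gx += GX[ky][kx] * p
                    gy += GY[ky][kx] * p
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Strong edges show up as large magnitudes; thresholding this map is the crude first step toward the defect-search and PCB-analysis uses below.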
I can envision multiple applications for it myself, such as defect search on a machined workpiece, PCB analysis, SMD component placement, etc. And that's only my area of expertise.
A biologist could use it for microbial culture analysis, where studying population growth rates and environment interactions requires a recent camera's high resolution but a PC's processing power to do the stats.
It's all about enthusiasm, my fair Microfunguy.
And CHDKPTP gives me aplenty !
The camera can be in Record or Playback mode, which affects the palette.
In Playback, depending on the display option, you may only see a small image at top left, and you will not see shooting info or histograms.
On newer cameras, the bitmap overlay can have various formats and the displayed image can be in various resolutions.
In movie mode, on my A620 at least, the displayed image is just a distorted part of the full image.
Sooooo... you're saying the frame headers only change along with the camera's operating mode, or from one device to another. That's... pretty much expected, isn't it?