I'd say edge detection could be an alternative to FFT-based focusing - edge detection is already built into CHDK, so for a given scene, more edges at a constant threshold should mean a sharper image. But I think that's not safe enough.
I think it's good enough if done properly. Remember what we're doing (well, trying to do) is C-AF: we want to keep the subject in focus, and do it fast.
Do we want to do it very accurately, too? I think not. We're working in MF mode, which means that if you have Safety MF activated (and you really should when using a homebrew algorithm like this), the camera will do the final (and still fast) focus adjustment to achieve accuracy when you half-press.
Of course, that doesn't mean the edge-detection-based method I'm currently using is the best... heck, I'm just taking a horizontal derivative; surely adding the vertical one too, at the very least, would help.
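To make the "horizontal plus vertical derivative" idea concrete, here's a minimal sketch of a combined gradient focus measure. This is not actual CHDK code: the function name, the assumption of an 8-bit luma buffer, and the buffer layout are all mine, just to illustrate the metric.

```c
#include <stdlib.h>

/* Sketch of a focus measure combining horizontal and vertical
 * derivatives. Assumes an 8-bit luma buffer of w x h pixels in
 * row-major order (hypothetical layout, not the real live view
 * buffer format). Higher return value = more edge energy,
 * which should correlate with a sharper image. */
long focus_measure(const unsigned char *luma, int w, int h)
{
    long sum = 0;
    for (int y = 0; y < h - 1; y++) {
        for (int x = 0; x < w - 1; x++) {
            int p  = luma[y * w + x];
            int dx = luma[y * w + x + 1] - p;   /* horizontal derivative */
            int dy = luma[(y + 1) * w + x] - p; /* vertical derivative */
            sum += abs(dx) + abs(dy);
        }
    }
    return sum;
}
```

On real hardware this inner loop runs per frame, so the integer-only arithmetic matters; anything with floats or divisions per pixel would likely be too slow.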
I guess the fastest (not the most accurate) algorithm should be tried first and benchmarked against Canon's. The current problems are focus hunting (perhaps if no focus is obtained, the camera should fall back to a slower but more precise algorithm) and repeated lens positions - the algorithm should not retry a focus position it has already tried.
Well, the problems I'm not sure how to tackle here are: how do I know whether "no focus is obtained"? Again, it's either some threshold, or... I don't know. And about not retrying settings that have already been tried: right, but how do I know the subject hasn't changed (and thus previously tried settings should be tried again)?
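One way the "don't retry positions" and "no focus obtained" ideas could fit together is a coarse search that marks positions as tried and gives up below a threshold. Everything here is hypothetical - NUM_POS, the threshold value, and feeding the metric in as an array all stand in for moving the real lens and measuring the live frame at each stop:

```c
#define NUM_POS 64   /* hypothetical number of lens positions */

/* Coarse search over lens positions, skipping ones already tried.
 * metric[] stands in for a per-position sharpness readout; on the
 * camera this would come from stepping the lens and measuring the
 * current frame. Returns the best position, or -1 for "no focus
 * obtained" (best metric below threshold) - the caller could then
 * fall back to a slower, finer algorithm. */
int search_focus(const long metric[NUM_POS], long threshold,
                 unsigned char tried[NUM_POS])
{
    int best_pos = -1;
    long best = -1;
    for (int p = 0; p < NUM_POS; p += 4) {   /* coarse step */
        if (tried[p]) continue;              /* never retry a position */
        tried[p] = 1;
        if (metric[p] > best) { best = metric[p]; best_pos = p; }
    }
    if (best < threshold) return -1;         /* "no focus obtained" */
    return best_pos;
}
```

The open question from above remains open here too: the tried[] array would have to be cleared whenever the scene changes, and detecting *that* (e.g. by watching the metric drift at the current position) is the part nobody has a clean answer for yet.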
Anyway, the document you linked seems like a very interesting read, although I don't have time to read it thoroughly right now. All I had found were much more informal hints on forums (I don't remember the URLs, but the one interesting suggestion I'd found was to use the standard deviation of the derivative / edge-detected image).
Just a note: we should not work with the viewfinder data, we should access the live view raw buffer (is that the same as the normal image buffer?). There we have the full horizontal resolution, due to the sensor readout method, and we can sync with every frame... so we need fast code :(
Isn't it possible that an external circuit/controller is responsible for AF?
I doubt it. In other threads we discussed how the camera simply can't read the full sensor data when in live view mode, due to readout latency. Sure, maybe one part of the sensor data (namely, the center) is read in full... but then how would AiAF work? And why don't we have a full-resolution MF window? Probably because it's just not there.