
CHDK PTP interface

Re: CHDK PTP interface
« Reply #720 on: 29 / February / 2012, 12:29:34 »
There are many ways the LiveView frames can be handled.
For the record, are you looking at a solution that works for most users or one that works with your setup ?

They may not be the same.

Another thing, developers are disappointed by low frame-rates.
This is all great coding fun, but realistically why does the framerate matter, what will this feature be used for ?
« Last Edit: 29 / February / 2012, 12:32:55 by Microfunguy »

Online reyalp
Re: CHDK PTP interface
« Reply #721 on: 29 / February / 2012, 12:46:38 »
Next thing, I'll bring myself to understand the mechanics of the view stream structure (if I want to decode it).
It's in your ChdkPtp/lua sources, izznit ?
The structures defining the live view data are in CHDK core/live_view.h. However... this is going to change because it doesn't adequately describe all the characteristics a client needs to display correctly in every case, so don't get too attached to it. This only describes the layout of the information, not the actual encoding of the framebuffers.

Quote
I'll try to implement a proper way to pass the data container's payload directly to a pixel shader (to get an easy-to-use bitmap output).
For this, you should mostly be worried about the frame buffer formats, described here http://chdk.wikia.com/wiki/Frame_buffers

The structure describing them (live_view.h) should be dealt with in normal code, not on the GPU.

For live view, you need to take a U Y V Y Y Y group and turn it into 2 or 4 RGB pixels. Whether 2 or 4 is preferred is one of the things that isn't adequately described, although with GL you could just do 4 all the time and scale the resulting output when it comes time to render.
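
Very roughly, the per-group conversion looks something like this. Treat it as a sketch only: the helper names are made up and the coefficients are generic BT.601-style integer approximations, assuming U/V are signed and centered on 0; check yuvutil.c and the wiki page for the real details.

Code:
/* Sketch: expand one 6-byte U Y V Y Y Y group into 4 RGB pixels. */
#include <stdint.h>

static uint8_t clamp_u8(int v) {
    return (v < 0) ? 0 : (v > 255) ? 255 : (uint8_t)v;
}

static void yuv_to_rgb(uint8_t *dst, int y, int u, int v) {
    dst[0] = clamp_u8(y + ((v * 1436) >> 10));           /* R ~ Y + 1.402*V */
    dst[1] = clamp_u8(y - ((u * 352 + v * 731) >> 10));  /* G ~ Y - 0.344*U - 0.714*V */
    dst[2] = clamp_u8(y + ((u * 1815) >> 10));           /* B ~ Y + 1.772*U */
}

/* src: one U Y V Y Y Y group (6 bytes), dst: 4 RGB pixels (12 bytes) */
static void uyvyyy_to_rgb4(const uint8_t *src, uint8_t *dst) {
    int u = (int8_t)src[0];   /* chroma is shared by all 4 pixels */
    int v = (int8_t)src[2];
    yuv_to_rgb(dst + 0, src[1], u, v);
    yuv_to_rgb(dst + 3, src[3], u, v);
    yuv_to_rgb(dst + 6, src[4], u, v);
    yuv_to_rgb(dst + 9, src[5], u, v);
}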

Additionally, the rows in the live view buffer may be wider than the number of valid pixels. E.g. on the a540, each buffer row is 720 pixels (= 1080 bytes) wide, but only the first 704 pixels contain valid data. It's also possible for the data to be offset, e.g. for letterboxed modes.

The data may be offset vertically within the buffer, but I am probably going to change the protocol so that any top/bottom padding is not sent. X padding can't easily be handled this way, since it would require a lot of copying on the camera.
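
Handling the row padding on the client is just a matter of stepping through the buffer at the full row width while only converting the visible part. Another rough sketch, using illustrative parameter names rather than the actual live_view.h fields, and building on the group converter above:

Code:
/* e.g. buf_width = 720, vis_width = 704 on a540; vis_width assumed to be
 * a multiple of 4. */
static void frame_to_rgb(const uint8_t *frame, uint8_t *rgb,
                         int buf_width, int vis_width, int vis_height)
{
    int row_bytes = (buf_width * 6) / 4;   /* 6 bytes per 4 pixels */
    int x, y;
    for (y = 0; y < vis_height; y++) {
        const uint8_t *src = frame + y * row_bytes;
        uint8_t *dst = rgb + y * vis_width * 3;
        for (x = 0; x < vis_width; x += 4) {
            uyvyyy_to_rgb4(src, dst);
            src += 6;
            dst += 12;
        }
    }
}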

The bitmap buffer is 8 bit indexed. There are multiple palette formats, only one of which I've implemented support for so far. In general they will all be AYUV in some form, but the details vary. Note that several of these actually only use 16 palette entries: the 8-bit index is split into nibbles and the two resulting AYUV values are combined. The frame buffers page has more info.
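
The nibble split itself is trivial; what to do with the two looked-up values depends on the palette type, so take the struct layout and names here as placeholders only:

Code:
#include <stdint.h>

typedef struct { uint8_t a, y; int8_t u, v; } ayuv_entry;   /* illustrative layout */

/* The 8-bit index selects two entries from a 16-entry AYUV palette; the
 * client then combines them according to the rules for that palette type. */
static void palette16_lookup(const ayuv_entry pal[16], uint8_t index,
                             ayuv_entry *hi, ayuv_entry *lo)
{
    *hi = pal[index >> 4];    /* high nibble */
    *lo = pal[index & 0x0F];  /* low nibble  */
}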

See chdkptp/yuvutil.c for an adequate (but not necessarily correct) implementation of the format used on a540 and probably most digic II/vxworks cams.
Quote
Whether I'll use DirectX or OpenGL, I still cannot say. I'm a tad bit inclined to use SharpGL though (i.e. OpenGL).
If you use shaders that can be sent directly to OpenGL, it should be quite easy to use this in CHDKPTP, which has native support for GL canvases.
Don't forget what the H stands for.

Online reyalp
Re: CHDK PTP interface
« Reply #722 on: 29 / February / 2012, 13:01:11 »
Another thing, developers are disappointed by low frame-rates.
This is all great coding fun, but realistically why does the framerate matter, what will this feature be used for ?
I think 10fps is perfectly adequate for real time control of the camera.

However, doing the rendering in OpenGL or similar gives you a lot of flexibility and performance for free. OpenGL is also very portable.
Don't forget what the H stands for.

Re: CHDK PTP interface
« Reply #723 on: 29 / February / 2012, 13:10:58 »
doing the rendering in OpenGL or similar gives you a lot of flexibility and performance for free. OpenGL is also very portable.

That is what I use.

However, I do not have shader support on my old PC that I use, not sure how many people do.

Also, my resizing from 720x240 to 320x240 is very crude  .. though perfectly adequate for what I use it for.

Not sure if I have to create a 1024x256 image if I want OpenGL to do the resizing.

That is a bit messy, and slower than a simple copy of a contiguous buffer area.

Also, not sure how common the graphics-card support is for the non-power-of-two extensions (cannot remember which one offhand).

You never know whether hardware or software rendering is being used.


Re: CHDK PTP interface
« Reply #724 on: 29 / February / 2012, 15:25:59 »

Not sure if I have to create a 1024x256 image if I want OpenGL to do the resizing.

That is a bit messy, and slower than a simple copy of a contiguous buffer area.
If it was only for resizing or color shifting or clamping purposes, one wouldn't even think of a pixel shader for such low resolutions.
The point of having a library or an API that provides a shader pipeline lies in all the subsequent tortures you can apply to the frames, which can be fairly CPU-expensive.

Also, not sure how common the graphics-card support is for the non-power-of-two extensions (cannot remember which one offhand).
By power-of-two, you're speaking about the graphics card's resolutions, right ?  :)
Well, even if you have to deal with non-power-of-two dimensions, you'll do what anyone does when compressing a video with a codec that doesn't support them : cropping or letterboxing. One given, one taken, we have to make compromises.  ;)


Another thing, developers are disappointed by low frame-rates.
Higher framerate ? To do what ? Some kind of remote video recording ? One would just have to buy a bigger SD card, or think about implementing a compatibility extension for SDHC up to 16GB then, 'cuz the camera would do it just fine, in higher resolutions than the crappy viewport's 240 lines.
Or would it be to process it for vision ? An old PC's CPU wouldn't handle complex convolution processing without being given real-time thread priority. Thus bye-bye USB synchronicity !

Anyway, it's obvious the old A-series USB peripheral won't handle higher bandwidths. Even with the padding bytes removed from the buffer, I don't think one can achieve more than 5fps anyway... with high hopes.

However, I do not have shader support on my old PC that I use, not sure how many people do.

Well, I bought this crappy motherboard two or three years ago : http://www.asrock.com/mb/overview.asp?model=p4vm900-sata2, brand new, for less than 50 bucks, shipping included.. and guess what : the crappy integrated GPU it carries handles Shader Model 2.0. Just have a look at the specs.

So unless one is stingy enough (meaning more than me) to hope to manipulate the viewfinder's stream using a computer made of juice boxes and toothpicks (but some expensive glue), he will eventually have to use (read : 'buy') a host machine that can deal with what he expects out of his Canon camera. At 2fps, at least.  :P

However, doing the rendering in opengl or similar gives you a lot of flexibility, and performance for free. Opengl is also very portable.
Gosh, it's so portable I remember making it run on Pocket PCs ! ( <-- re-edit spoiler : IT WAS A TWEAK, don't get your hopes too high !  :P )
I'm just not sure about the way it handles native compilation of HLSL against hardware that actually handles shaders higher than 2.0. I'll have to read more about it, since I fear OpenGL may rely on software shading too often. It's my ignorance talking here. (<-- PS : Never mind, OpenGL has those new "frame buffer objects" now... all's well)
« Last Edit: 29 / February / 2012, 15:44:47 by asmodyne »

Re: CHDK PTP interface
« Reply #725 on: 29 / February / 2012, 15:38:28 »
By the way, did you ever hear of Maxim Stepin's hqNx enlarging filters ? They're wonderful !

Just sayin'...  :-*

Re: CHDK PTP interface
« Reply #726 on: 29 / February / 2012, 15:39:53 »
I was only thinking of shaders for directly rendering the Y411 viewfinder image.
(I have no idea how to do it and cannot test it anyway).

At the moment, I resize from 720x240 to 320x240 in 'C' code.
It is crude nearest neighbour.
I am not sure if interpolating would slow it down too much, with very little gain other than looking nicer.

So, the question is  .. can OpenGL do the resizing (not just cropping but squashing horizontally) and if so does the appropriate function that renders to a quad require a power-of-two source image ?

How would we know if it was being hardware accelerated anyway ?

Online reyalp
Re: CHDK PTP interface
« Reply #727 on: 29 / February / 2012, 15:45:24 »
Let's not get too deep into graphics APIs here.
By power-of-two, you're speaking about the graphics card's resolutions, right ?  :)
Originally, OpenGL only supported power-of-two textures. There have been extensions for a long time that support other dimensions; I would guess they are widely supported.
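
If you want to check at runtime, something along these lines does it on older contexts (NPOT textures are core in OpenGL 2.0 and later anyway, so this mostly matters for old hardware; a GL context must be current):

Code:
#include <string.h>
#include <GL/gl.h>

/* Returns non-zero if the non-power-of-two texture extension is advertised. */
static int have_npot_textures(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;
}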
Quote
Higher framerate ? To do what ? Some kind of remote video recording ? One would just have to buy a bigger SD card, or think about implementing a compatibility extension for SDHC up to 16GB then, 'cuz the camera would do it just fine, in higher resolutions than the crappy viewport's 240 lines.
Not all the cameras are limited to 240 lines. The a540 (and presumably most digic II cameras of similar vintage) does 528 lines in 640x480 video mode, and some newer cameras like the g12 have a variety of 480-line modes. So decent-resolution video is sometimes available directly over USB.

However, to get nice video you'd also have to somehow synchronize the frame grabs with the camera refresh, or you get tearing.

It would also be very difficult not to have substantial jitter in the frame rate.
Quote
Anyway, it's obvious the old A-series USB peripheral won't handle higher bandwidths. Even with the padding bytes removed from the buffer, I don't think one can achieve more than 5fps anyway... with high hopes.
Only on the very old USB 1.1 cams. The vast majority of CHDK-supported cameras are USB 2.0.
Don't forget what the H stands for.


Online reyalp
Re: CHDK PTP interface
« Reply #728 on: 29 / February / 2012, 15:49:42 »
At the moment, I resize from 720x240 to 320x240 in 'C' code.
It is crude nearest neighbour.
I currently do it even more crudely (like the CHDK code that reads the viewport): just skip 2 Y values of every U Y V Y Y Y group when doing the initial conversion. If you are going to a 360x240 display anyway, I don't think you are losing much.
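
Something like this, reusing the hypothetical yuv_to_rgb() helper sketched earlier in the thread (which two of the four Y values you keep is a detail):

Code:
/* 2 RGB pixels per 6-byte U Y V Y Y Y group, so a 720-wide row comes out
 * 360 pixels wide. */
static void uyvyyy_to_rgb2(const uint8_t *src, uint8_t *dst)
{
    int u = (int8_t)src[0];
    int v = (int8_t)src[2];
    yuv_to_rgb(dst + 0, src[1], u, v);   /* 1st Y */
    yuv_to_rgb(dst + 3, src[4], u, v);   /* 3rd Y; 2nd and 4th dropped */
}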
Don't forget what the H stands for.

Re: CHDK PTP interface
« Reply #729 on: 29 / February / 2012, 15:54:13 »
So, the question is  .. can OpenGL do the resizing (not just cropping but squashing horizontally) and if so does the appropriate function that renders to a quad require a power-of-two source image ?

How would we know if it was being hardware accelerated anyway ?
I'm not sure using OpenGL to do the resizing would be worthwhile if you absolutely WANT to write your own interpolation method.
Fact is, I'm pretty sure that passing a 320x240 P-buffer to an OpenGL 640x480 viewport would make your card's OpenGL driver do the interpolation automatically, without any extra code on your side.
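
In old-style fixed-function GL, that is basically just uploading the frame as a texture with GL_LINEAR filtering and drawing a quad; the driver does the scaling for you. A rough sketch (assumes a current context with the viewport already set, and NPOT support or GL 2.0+):

Code:
#include <GL/gl.h>

static void draw_frame(const unsigned char *rgb, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* GL interpolates */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb);

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);              /* immediate mode, just to keep it short */
    glTexCoord2f(0, 0); glVertex2f(-1,  1);
    glTexCoord2f(1, 0); glVertex2f( 1,  1);
    glTexCoord2f(1, 1); glVertex2f( 1, -1);
    glTexCoord2f(0, 1); glVertex2f(-1, -1);
    glEnd();
    glDeleteTextures(1, &tex);
}

(Re-creating the texture every frame is wasteful; in practice you'd create it once and update it with glTexSubImage2D.)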

Still, have a look at those hqNx filters... I understand they're meant for low-resource platforms, and I wonder what they'd give once applied to a video feed. Hey ! It's all INTEGER magic ! No floating-point arithmetic here ! (^w^)

I currently do it even more crudely (like the CHDK code that reads the viewport): just skip 2 Y values of every U Y V Y Y Y group when doing the initial conversion. If you are going to a 360x240 display anyway, I don't think you are losing much.

Nice trick indeed !  :P
I know from experience that truncating the LSBs of an 18-bits-per-channel RGB frame down to RGB565 works pretty well, without anyone noticing the difference. All in all, I never did interpolate anything.
But don't tell anyone else ...  :-X
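
For the record, the same truncate-the-LSBs idea for the usual 8-bits-per-channel case is just a couple of shifts (a generic sketch, nothing camera-specific):

Code:
#include <stdint.h>

/* Keep the top 5/6/5 bits of R/G/B and pack them into one 16-bit word. */
static uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
}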
« Last Edit: 29 / February / 2012, 16:02:41 by asmodyne »

 
