reading and writing the viewport - General Discussion and Assistance - CHDK Forum

reading and writing the viewport
« on: 23 / April / 2008, 14:12:20 »
Hi,
I have a new A720IS and just love what I can do with CHDK.  I have been studying the viewport and writing some code to save it to a file and to load it back from a file.  I can also put my own images into the viewport.  Since I am new, I would like to post what I have learned about the viewport and the routines I have written.  I hope to get comments and corrections for the places where I have got it wrong.

Largely I got my info about the viewport from studying the histogram code.

On the A720IS the physical screen is 5.0 cm x 3.8 cm, a ratio of about 1.32.  The image aspect ratio is 1.333, so Canon seems to have squashed the physical screen a little.  The physical screen of course consists of red, green and blue dots in a pattern.
Internally the viewport consists of 360x240x3 bytes, an aspect ratio of 1.5.  The format is not RGB, but consists of groups of 6 bytes in the order U,Y1,V,Y2,Y3,Y4.  This gives the information for 4 pixels, with the U,V color information shared by Y1,Y2,Y3,Y4.  The display hardware uses this to create the pattern of dots that you see.
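As a concrete illustration, decoding one such 6-byte group into four RGB pixels could look like this (a C sketch using the same fixed-point YUV-to-RGB coefficients as the ImageJ routine in the next post; `decode_group` is a made-up helper name, not a CHDK function):

```c
#include <stdint.h>

/* Clamp to the 0..255 range, as in the ImageJ routine's clip(). */
static int clip(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }

/* Decode one 6-byte U,Y1,V,Y2,Y3,Y4 group into four 0xRRGGBB pixels.
 * U and V are signed and shared by all four Y values. */
static void decode_group(const uint8_t *grp, int rgb[4])
{
    int u = (int8_t)grp[0];
    int v = (int8_t)grp[2];
    int y[4] = { grp[1], grp[3], grp[4], grp[5] };

    for (int i = 0; i < 4; i++) {
        int r = clip(((y[i] << 12)            + v * 5743 + 2048) / 4096);
        int g = clip(((y[i] << 12) - u * 1411 - v * 2925 + 2048) / 4096);
        int b = clip(((y[i] << 12) + u * 7258            + 2048) / 4096);
        rgb[i] = (r << 16) | (g << 8) | b;
    }
}
```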

I can save the viewport data to a file and load it back.  I do this by changing the histogram code:
        switch (histogram_stage) {
        case 0:
            img = ((mode_get()&MODE_MASK) == MODE_PLAY) ?
                      vid_get_viewport_fb_d() :
                      (kbd_is_key_pressed(KEY_SHOOT_HALF) ?
                          vid_get_viewport_fb() : vid_get_viewport_live_fb());
            if (img == NULL) {
                img = vid_get_viewport_fb();
            }
            viewport_size = vid_get_viewport_height() * screen_width;
            for (c=0; c<5; ++c) {
                for (i=0; i<HISTO_WIDTH; ++i) {
                    histogram_proc[c][i]=0;
                }
                histo_max[c] = histo_max_center[c] = 0;
            }

            histogram_stage=1;

            // JH code
            if ((mode_get()&MODE_MASK) == MODE_PLAY) {
                if (kbd_is_key_pressed(KEY_SHOOT_HALF)) {
                    save_viewport(img, viewport_size);
                    // alternatively, comment out the line above and uncomment this one:
                    // load_viewport(img, viewport_size, "A/VPDATA/ViewPortData0");
                }
                histogram_stage=0;
            }
            break;
...

I use the blended histogram, so I make this change to gui_osd_draw_blended_histo:

static void gui_osd_draw_blended_histo(coord x, coord y) {
    register unsigned int i, v, red, grn, blu, sel;
    int m = ((mode_get()&MODE_MASK) == MODE_REC);
    if (!m) return; // JH change


Here is my code for the save and load routines:

/* save the viewport data to a file */
static int nsaved = 0;
void save_viewport(char *img, int viewport_size) {
    int fd;
    char fn[64];
    if (nsaved > 20) return;
    mkdir("A/VPDATA");
    sprintf(fn, "A/VPDATA/ViewPortData%d", nsaved++);
    fd = open(fn, O_WRONLY|O_CREAT, 0777);
    if (fd >= 0) {
        write(fd, img, viewport_size * 3);
        close(fd);
    }
}

/* load the viewport from a file */
void load_viewport(char *img, int viewport_size, char *fn) {
    int fd = open(fn, O_RDONLY, 0777);
    if (fd >= 0) {
        read(fd, img, viewport_size * 3);
        close(fd);
    }
}



This post is getting a little long.  I will post a second time to explain how I get the viewport data into an imaging program (ImageJ) and how I can load my own images into the viewport.

Jon
« Last Edit: 01 / May / 2008, 10:14:59 by hiker_jon »

Re: reading and writing the viewport
« Reply #1 on: 23 / April / 2008, 14:28:18 »
This is the second post.

I use ImageJ to read the viewport data that I have written.  I import the data as raw 360x240 24-bit RGB data.  Of course this looks a little weird at first, but then I convert it to RGB using the routine at the end of this post.  This routine takes a 360x240 viewport image (the imported raw data) and converts it to a 720x240 RGB image.  The aspect ratio is a little strange, but you can use ImageJ to scale it to the correct aspect ratio.

To put any image into the viewport I use ImageJ to scale it to 720x240.  Then I use my routine to put the data into viewport format then save it as raw data.  I can then put the file onto my card and when I half press the shutter in play mode the image will be displayed.

Jon


   /* this is a conversion program for CHDK viewport data */
    private void convert(ImagePlus imp) {
       ImageProcessor ip = imp.getProcessor();
       int width = ip.getWidth();
       int height = ip.getHeight();
       // we use the input image size to determine if we convert to or from the viewport data format
       if(width < 700) {// from viewport data format

          int width1 = (width * 4)/2;
          int w1 = 0;
          ImagePlus imp1;

          imp1 = NewImage.createRGBImage(originalName+"_CNV",
                width1, height, 1, NewImage.FILL_BLACK); 
          ImageProcessor ip1 = imp1.getProcessor();

          int[] pixels = (int[]) ip.getPixels();
          int[] pixels1 = (int[]) ip1.getPixels();


          for (int h = 0; h < height; h++) {
             w1 = 0;
             for (int w = 0; w < width; w+=2) {
                int p = pixels[h * width + w];
                int u = (int) (p & 0xff0000) >> 16;
                int y = (int) (p & 0x00ff00) >> 8;
                int v = (int) (p & 0x0000ff);
                if ((u&0x00000080) != 0) u|=0xFFFFFF00;
                if ((v&0x00000080) != 0) v|=0xFFFFFF00;
         
                int r = clip(((y<<12)          + v*5743 + 2048)/4096); // R
                int g = clip(((y<<12) - u*1411 - v*2925 + 2048)/4096); // G
                int b = clip(((y<<12) + u*7258          + 2048)/4096); // B
                  
                pixels1[h * width1 + w1++] = ((r << 16) &0xff0000)  + ((g << 8) &0xff00) + (b & 0x0000ff);
         
                p = pixels[h * width + w + 1];
         
                y = (int) (p & 0xff0000) >> 16;
                r = clip(((y<<12)          + v*5743 + 2048)/4096); // R
                g = clip(((y<<12) - u*1411 - v*2925 + 2048)/4096); // G
                b = clip(((y<<12) + u*7258          + 2048)/4096); // B
                pixels1[h * width1 + w1++] = ((r << 16) & 0xff0000)  + ((g << 8) & 0xff00) + (b & 0x0000ff);
         
                y = (int) (p & 0xff00) >> 8;
                r = clip(((y<<12)          + v*5743 + 2048)/4096); // R
                g = clip(((y<<12) - u*1411 - v*2925 + 2048)/4096); // G
                b = clip(((y<<12) + u*7258          + 2048)/4096); // B
                pixels1[h * width1 + w1++] = ((r << 16) & 0xff0000)  + ((g << 8) & 0xff00) + (b & 0x0000ff);
         
                y = (int) (p & 0xff);
                r = clip(((y<<12)          + v*5743 + 2048)/4096); // R
                g = clip(((y<<12) - u*1411 - v*2925 + 2048)/4096); // G
                b = clip(((y<<12) + u*7258          + 2048)/4096); // B
                pixels1[h * width1 + w1++] = ((r << 16) & 0xff0000)  + ((g << 8) & 0xff00) + (b & 0x0000ff);

                }
             }
          imp1.updateAndDraw();
          imp1.setTitle(originalName + "_CNV");
          imp1.changes = true;
          imp1.show();
       }
       else {//to viewport data format
          int width1 = (2 * width) / 4;
          int w1 = 0;
          ImagePlus imp1;

          imp1 = NewImage.createRGBImage(originalName+"_CNV",
                width1, height, 1, NewImage.FILL_BLACK); 
          ImageProcessor ip1 = imp1.getProcessor();

          int[] pixels = (int[]) ip.getPixels();
          int[] pixels1 = (int[]) ip1.getPixels();
          for (int h = 0; h < height; h++) {
             w1 = 0;
             for (int w = 0; w < width; w+=4) {
                int p = pixels[h * width + w];
                int r = (int) (p & 0xff0000) >> 16;
                int g = (int) (p & 0x00ff00) >> 8;
                int b = (int) (p & 0x0000ff);
         
                int y = ((1225 * r) + (2400 * g) + (471 * b) + 2048)/4096; // Y
                int u = clips((2314 * (b - y) + 2048)/4096); // U
                int v = clips((2925 * (r - y) + 2048)/4096); // V
         
                pixels1[h * width1 + w1++] = ((u << 16) & 0xff0000)  + ((y << 8) & 0xff00) + (v & 0x0000ff);
         
                p = pixels[h * width + w + 1];
                r = (int) (p & 0xff0000) >> 16;
                g = (int) (p & 0x00ff00) >> 8;
                b = (int) (p & 0x0000ff);
         
                y = ((1225 * r) + (2400 * g) + (471 * b) + 2048)/4096; // Y
         
                p = pixels[h * width + w + 2];
                r = (int) (p & 0xff0000) >> 16;
                g = (int) (p & 0x00ff00) >> 8;
                b = (int) (p & 0x0000ff);
         
                int y1 = ((1225 * r) + (2400 * g) + (471 * b) + 2048)/4096; // Y
         
                p = pixels[h * width + w + 3];
                r = (int) (p & 0xff0000) >> 16;
                g = (int) (p & 0x00ff00) >> 8;
                b = (int) (p & 0x0000ff);
         
                int y2 = ((1225 * r) + (2400 * g) + (471 * b) + 2048)/4096; // Y
         
                pixels1[h * width1 + w1++] = ((y << 16) & 0xff0000)  + ((y1 << 8) & 0xff00) + (y2 & 0x0000ff);

             }
          }
          imp1.updateAndDraw();
          imp1.setTitle(originalName + "_CNV");
          imp1.changes = true;
          imp1.show();
       }
    }
   


« Last Edit: 23 / April / 2008, 14:49:14 by GrAnd »

Offline GrAnd
Re: reading and writing the viewport
« Reply #2 on: 23 / April / 2008, 14:56:24 »
Nice.
BTW. I mentioned the viewport format in another thread(s):
Quote
More than a year ago, when I analyzed the format of the viewport buffer, I grabbed a picture from the buffer and decoded it:

It's encoded as UYVYYY, so each 6 bytes represent 4 pixels. If you look at the histogram code, you can notice that the buffer size is 360*240*3, which is the same as 720*240/4*6.

The actual shot was:

CHDK Developer.

Re: reading and writing the viewport
« Reply #3 on: 24 / April / 2008, 05:31:59 »
I would like to post what I have learned about the viewport and the routines I have written.


Jon,
       that is very, very useful, thank you for posting.
   
I can think of an application to aid stereo imaging with a single camera.

You take an image and store it.
You move the camera and take another image and store it.
On shutter half-press (if certain option selected) you display a single image, the red component from the first image and the blue and green components from the second image.

That is called an anaglyph image and when viewed with red/cyan glasses will be in stereo.

It will give an indication of whether you have moved the camera too far or not far enough.


Do you know how to do that? :)

You have mentioned my two favourite applications: CHDK and ImageJ!

I will certainly study this and try it out.


David

Offline BB
Re: reading and writing the viewport
« Reply #4 on: 24 / April / 2008, 12:05:48 »
Just an off-the-top-of-my-head suggestion about the anaglyph image...

Can you use the "Zebra" mode code for this and just replace the zebra calculations with the resized/compressed/re-color-mapped "red channel" of your last picture? Perhaps grab the "review" photo that is splashed on the screen after the shot, or even rotate through zebra/anaglyph if you need to keep the zebra function...

-Bill

Offline GrAnd
Re: reading and writing the viewport
« Reply #5 on: 24 / April / 2008, 13:00:05 »
Can you use the "Zebra" mode code for this and just replace the zebra calculations with the resized/compressed/re-color-mapped "red channel" of your last picture? Perhaps grab the "review" photo that is splashed on the screen after the shot, or even rotate through zebra/anaglyph if you need to keep the zebra function...

We are too limited by the bitmap buffer which stores the zebra. It has only a 16-color palette (with variations).
CHDK Developer.

Re: reading and writing the viewport
« Reply #6 on: 24 / April / 2008, 13:05:00 »
Do you know how to do that? :)

Here is how I think it could be done.  The low resolution of color in the screen might be a problem, but it is worth a try.

The viewport saved state starts at 0.
After taking the two pictures, go into play mode and view the first picture.  Half-press causes the viewport data to be saved, and the saved state is set to 1.
View the second picture and half-press.  Since the saved state is 1, we now take the current viewport and the saved viewport, combine them, and load the combined image into the viewport.  The saved state goes back to 0.

Combining: I guess you want the first viewport intensity as red and the current viewport intensity as blue/green.  For each 6 bytes (= four pixels) in the two data sets, extract the four Y values from each viewport (Y1..Y4 from the first, Y1..Y4 from the second).  Set R,G,B = (Y_first, Y_second, Y_second) for each of the four pixels and then convert back to the viewport format using the ImageJ code from my second post.
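The combining step could be sketched directly on the 6-byte groups, something like this (a C sketch; `combine_anaglyph` and the averaging of U,V over each group are my own assumptions, and the coefficients are the forward RGB-to-YUV ones from the ImageJ routine above):

```c
#include <stdint.h>

/* Clamp a chroma value to the signed -128..127 range, as in the
 * ImageJ routine's clips(). */
static int clips(int x) { return x < -128 ? -128 : (x > 127 ? 127 : x); }

/* Combine two UYVYYY viewport buffers into an anaglyph in dst:
 * the Y values of buffer a become the red channel, the Y values of
 * buffer b the green and blue channels, converted back to viewport
 * format.  size is the byte count (360*240*3 on the A720IS). */
static void combine_anaglyph(const uint8_t *a, const uint8_t *b,
                             uint8_t *dst, int size)
{
    static const int yoff[4] = { 1, 3, 4, 5 };  /* Y1..Y4 in each group */
    for (int i = 0; i + 5 < size; i += 6) {
        int usum = 0, vsum = 0;
        for (int k = 0; k < 4; k++) {
            int r  = a[i + yoff[k]];            /* R from first viewport */
            int g  = b[i + yoff[k]];            /* G,B from second       */
            int bl = g;
            int y  = (1225 * r + 2400 * g + 471 * bl + 2048) / 4096;
            usum += clips((2314 * (bl - y) + 2048) / 4096);
            vsum += clips((2925 * (r  - y) + 2048) / 4096);
            dst[i + yoff[k]] = (uint8_t)y;
        }
        dst[i]     = (uint8_t)(usum / 4);       /* shared U for the group */
        dst[i + 2] = (uint8_t)(vsum / 4);       /* shared V for the group */
    }
}
```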

What a great idea!  I think this could be a great stereo tool.  It would make the camera into a stereo viewer.
« Last Edit: 24 / April / 2008, 13:07:50 by hiker_jon »

Re: reading and writing the viewport
« Reply #7 on: 24 / April / 2008, 13:21:44 »
Actually, "full colour" anaglyphs are often uncomfortable to view because of a 'ghosting' effect when bright colours are not completely blocked by the red or cyan filter.

It is far easier to judge depth just using 'monochrome' anaglyphs.

I will have to remind myself of the technical details.


David

Re: reading and writing the viewport
« Reply #8 on: 24 / April / 2008, 13:32:49 »
You convert the images to greyscale and take red channel from one image and blue and green channel from second image.

Is greyscale simply the luminance (Y) value in the buffer or does a transformation have to be computed?

Re: reading and writing the viewport
« Reply #9 on: 24 / April / 2008, 13:50:45 »
You convert the images to greyscale and take red channel from one image and blue and green channel from second image.

Is greyscale simply the luminance (Y) value in the buffer or does a transformation have to be computed?

Yes, the Y is luminance = greyscale.
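In other words, pulling a greyscale image out of the viewport buffer is just a matter of copying every Y byte, four per 6-byte group.  A minimal sketch (assuming a plain destination buffer; `extract_luma` is an illustrative helper, not part of CHDK):

```c
#include <stdint.h>

/* Copy the luminance (Y) bytes out of a UYVYYY viewport buffer into
 * a plain greyscale buffer: four Y bytes in every 6-byte group.
 * Returns the number of greyscale pixels written. */
static int extract_luma(const uint8_t *vp, int vp_size, uint8_t *grey)
{
    int n = 0;
    for (int i = 0; i + 5 < vp_size; i += 6) {
        grey[n++] = vp[i + 1];   /* Y1 */
        grey[n++] = vp[i + 3];   /* Y2 */
        grey[n++] = vp[i + 4];   /* Y3 */
        grey[n++] = vp[i + 5];   /* Y4 */
    }
    return n;
}
```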

 
