Bit depth of raw files? - page 4 - RAW Shooting and Processing - CHDK Forum

Bit depth of raw files?

  • 37 Replies
  • 27508 Views

Offline littlejohn

  • Ixus 860is
Re: Bit depth of raw files?
« Reply #30 on: 05 / September / 2008, 13:00:12 »
Quote
Hi John,
Those are too many points to be bad pixels.  It seems a random pattern of errors introduced by the processing.  I don't get such points on my camera, a 720is.  Perhaps some other task is interfering with the read or write of the pixel data.
Jon

Hi Jon :)

Good to see your reply.  :D
Yes, the bad pixels are likely introduced by a processing error.
However, I don't think it's a camera-dependent problem.

I have inserted your code into my firmware build (Ixus 860).
I also implemented another version (which I posted recently) on Linux, which reads a CR2 file and generates a new one.
My program only reads a CR2 file, retrieves the RGB values, and writes the modified values back.
In both cases, the bad pixels remain.
(In the first case, the bad pixels were shown directly on the camera's LCD.
In the second case, they were visible in the DNG file produced by dng4ps.)

Now I'm wondering about the correctness of the implementation.
If I set all values representing G (green) to the maximum (1023), the image is green.
If I set those values to 512, it turns pink.
Maybe the average calculation has a flaw.
I'll run some more tests in my free time.

Regards,
John


Offline ArtDen

    • dng4ps2
Re: Bit depth of raw files?
« Reply #31 on: 05 / September / 2008, 14:28:31 »
Quote
Could you give me some hints about how to remove those bad pixels? Since all I tried to do is to assign the average value between each neighboring R/G/B,
Yes, this is the simplest and a good way to remove bad pixels. You can look at how the dcraw software does it:
http://www.cybercom.net/~dcoffin/dcraw/dcraw.c ('remove_zeroes' method)


ssilk

Re: Bit depth of raw files?
« Reply #32 on: 11 / May / 2011, 18:08:12 »
Hi All,

I'm trying to load CRW files from an SD-1000 in MATLAB, but when I do so they don't look right. I expect them to look approximately like a grayscale version of the image I shot. However, I end up with a lot of vertical lines. I can see the major image elements, but there's clearly something wrong here. I've posted my code below.

Can anyone answer the following?
1. Is the SD-1000 RAW output definitely 10-bit?
2. Is the data big endian or little?
3. Can anyone look at my code and tell me if they can spot the problem?


Code: (matlab)
% SD-1000 sensor resolution
h=2340;
w=3152;

% Name of file to read
filename='CRW_3160.CRW';

fid=fopen(filename,'r','b');

% Read 10-bit Uint data into columns of height w, then transpose the matrix
rawimg = fread(fid,[w Inf],'ubit10=>double')';

fclose(fid);

% Linearly compress 10-bit data into [0 1] double range
% This should look roughly like a grayscale picture of the scene, which it
% doesn't.
imshow(rawimg/max(rawimg(:)));



Offline reyalp

Re: Bit depth of raw files?
« Reply #33 on: 11 / May / 2011, 22:54:35 »
http://chdk.wikia.com/wiki/Frame_buffers#Raw may be helpful. You are probably getting tripped up by the packed little-endian format. The SD1000 is definitely 10bpp. The important specs can be found in http://tools.assembla.com/chdk/browser/trunk/platform/ixus70_sd1000/platform_camera.h

You can also get C code to read the format from http://tools.assembla.com/chdk/browser/trunk/tools/rawconvert.c

You could use rawconvert to convert the file into a format that is easier to read in MATLAB, e.g. 16 bit.
Don't forget what the H stands for.


ssilk

Re: Bit depth of raw files?
« Reply #34 on: 16 / May / 2011, 16:41:41 »
@reyalp: Thanks for the reply. The bit packing order is definitely what's messing me up. Is there a Windows binary of rawconvert? I tried compiling the C file but I'm missing some of the dependencies, and it looks like they're hard to get working in Windows.


Offline reyalp

Re: Bit depth of raw files?
« Reply #35 on: 16 / May / 2011, 22:05:37 »
Quote
@reyalp: Thanks for the reply. The bit packing order is definitely what's messing me up. Is there a Windows binary of rawconvert? I tried compiling the C file but I'm missing some of the dependencies, and it looks like they're hard to get working in Windows.
I think rawconvert is quite vanilla C; not sure why you couldn't just use <compiler> rawconvert.c, but here's a binary.


ssilk

Re: Bit depth of raw files?
« Reply #36 on: 17 / May / 2011, 10:21:53 »
@reyalp: The problem I was running into was that I was missing stdint.h and platform.h. Apparently Microsoft removed these from their standard libraries included with Visual Studio 2008. They've reinstated them in 2010. There's a lot of discussion about this online. I've since been able to compile it myself.

Now I have a question about the usage. I've got a raw file from an SD-1000, so it's 10bpp with the unusual bit-packing order. I use rawconvert to convert it to 16bpp with the following command:

Code: (bash)
rawconvert.exe -10to16 -w=3152 -h=2340 -pgm CRW_3133.CRW CRW_3133_16.CRW
The PGM looks correct, i.e. it looks like grayscale Bayer-mosaiced data. Now I'd expect to be able to load this in MATLAB as 16bpp data and see basically the same thing. I do this as follows:

Code: (matlab)
filename='CRW_3133_16.CRW';
bpp=16;

fid=fopen(filename, 'r', 'l')

P=fgetl(fid);
w=str2double(fgetl(fid));
h=str2double(fgetl(fid));
maxval=str2double(fgetl(fid));

temp=fread(fid, [h Inf], ['ubit' num2str(bpp)]);

size(temp)

fclose(fid);

min(temp(:))
max(temp(:))

imshow(temp/max(temp(:)),[]);

However, it does not appear to load the data correctly; it's just random-looking noise. Is the bit packing still unusual? I assumed that at 16bpp it would just be normal uint16.

Can you offer any further advice? Thanks for your help so far.


Offline reyalp

Re: Bit depth of raw files?
« Reply #37 on: 17 / May / 2011, 12:53:23 »
FWIW, if you run rawconvert without the -pgm option, the output file will simply be 16-bit ints in native byte order (which in practice means little endian...).

PGM appears to reverse the byte order for 16-bit data. According to Wikipedia:
Quote
The original definition of the PGM and the PPM binary formats (the P5 and P6 formats) did not allow bit depths greater than 8 bits. One can of course use the ASCII format, but this format both slows down reading and makes the files much larger. Accordingly, many programmers have attempted to extend the format to allow higher bit depths. Using higher bit depths encounters the problem of having to decide on the endianness of the file. Unfortunately it appears that the various implementations could not agree on which byte order to use, and some connected the 16-bit endianness to the pixel packing order.[3] In Netpbm, the de facto standard implementation of the PNM formats, the most significant byte is first.

 
