I took two pictures, one using the rs -raw command and the other with rs -dng in chdkptp. The DNG opens in all software with a black border around the image, and the entire image area of the sensor is visible. The RAW, on the other hand, opens as raw image data in other software and produces a noisy black-and-white version of the image with the black border. RawTherapee, however, can open this RAW normally, but for some reason the black border is completely gone and the image is even slightly cropped, even though, as far as I can tell, all cropping and processing is turned off. Any ideas? Thanks!
With -dng, you get a DNG format image, which includes metadata and should be loadable by any software that correctly implements DNG support. With -raw, you get "CHDK raw" which is just a framebuffer dump, and is only supported by software that implements CHDK raw support for the specific camera. The underlying raw data is effectively the same in either case, but software may render them differently.
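To illustrate why a bare framebuffer dump is viewer-hostile: with no header, the reading software must already know the geometry and sample format for that specific camera. Here is a minimal sketch, assuming 16-bit little-endian samples and placeholder dimensions (real CHDK raws on many cameras pack 10- or 12-bit samples, which would need unpacking first, and the buffer dimensions are model-specific); a synthetic buffer stands in for an actual dump file:

```python
import numpy as np

# A CHDK raw is a bare framebuffer dump: no header, so the reader must already
# know the geometry and sample format for the specific camera. These values
# are placeholders -- the real buffer size and bit depth are camera-specific.
WIDTH, HEIGHT = 640, 480

# Simulate a dump of 16-bit little-endian samples. (Many CHDK ports actually
# store packed 10- or 12-bit samples, which would need unpacking first.)
buf = np.random.randint(0, 1024, size=WIDTH * HEIGHT,
                        dtype=np.uint16).tobytes()

# Interpreting the dump is only correct if WIDTH/HEIGHT match the camera's
# real buffer -- guess wrong and the image shears or wraps.
img = np.frombuffer(buf, dtype="<u2").reshape(HEIGHT, WIDTH)
print(img.shape)
```

This is also why different programs render the same data differently: each one supplies its own idea of the geometry, black border, and active area.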
For some weird reason, the plain RAW comes out cropped. I thought it was supposed to be the raw framebuffer, so technically it shouldn't be cropped at all. It should be the full sensor image, and yet it's cropped. Is this an issue with chdkptp, or is there another reason for it? Thanks.
Perhaps you know of a program I could use to view/edit the plain raws? I was using RawTherapee and RawHide, and both showed this crop. Those were raws from my S3IS. With raws from my SX40HS, they couldn't even open the file. I have no clue what to do. Since you've been doing this for many years, you probably have some expertise in this area, and may know of a program that can handle them.
dngdump -rfmt=16pgm -raw
The reason I want the plain raws is to do some sensor testing. I'm interested in seeing how many dead pixels I have, and I also want to see whether I can unlock that black border of masked sensor area, perhaps bringing my 12.1MP up to a full 13MP.
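For the dead-pixel part, a common approach is to shoot a dark frame (lens cap on) and look for pixels stuck at zero or at the white level. A minimal sketch with numpy, using a simulated dark frame and assumed thresholds (the geometry, white level, and cutoffs are placeholders to tune for your sensor):

```python
import numpy as np

# Sketch of dead/hot pixel detection on a dark frame (lens cap on).
# Geometry, white level, and thresholds are assumptions, not measured values.
WIDTH, HEIGHT = 640, 480
WHITE_LEVEL = 1023  # 10-bit white level, as a placeholder

# Simulated dark frame: normal pixels sit near the sensor's black level.
rng = np.random.default_rng(0)
dark = rng.integers(28, 36, size=(HEIGHT, WIDTH)).astype(np.uint16)
dark[10, 20] = 0             # inject one simulated dead pixel
dark[100, 200] = WHITE_LEVEL # inject one simulated hot pixel

dead = np.argwhere(dark == 0)            # stuck at zero
hot = np.argwhere(dark >= WHITE_LEVEL)   # stuck at saturation
print(len(dead), len(hot))
```

With a real capture you would replace the simulated array with the decoded raw buffer, and likely average several dark frames so read noise doesn't trigger false positives.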
Afterwards, I have an idea I'd like to test. I want to see if I can use chdkptp while recording a video, and whether chdkptp can offload each raw movie frame as soon as it hits the movie buffer.
After all, the frames have to be captured first, meaning they are raw before they hit the conversion. If I can get hold of them through chdkptp, my laptop can do all the heavy lifting while the camera just relays the data over USB. From some research, it seems that even if I could make this happen in-camera, the SD card couldn't sustain such high-speed raw offloading, but USB might.
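A quick back-of-envelope check puts numbers on the bandwidth question. All figures below are rough assumptions (sensor size, bit depth, bus speeds), not measurements, but they show the scale of the problem: full-resolution raw at 30 fps is far beyond USB 2.0, while VGA-sized data is plausibly within reach:

```python
# Rough data-rate arithmetic for the USB offload idea.
# All numbers are assumptions, not measurements.
fps = 30
full_sensor_px = 12_100_000  # ~12.1 MP sensor
vga_px = 640 * 480
bits_per_px = 10             # typical CHDK raw bit depth on older cameras

full_mb_s = full_sensor_px * bits_per_px / 8 * fps / 1e6
vga_mb_s = vga_px * bits_per_px / 8 * fps / 1e6
usb2_mb_s = 30               # realistic USB 2.0 bulk throughput, roughly

print(f"full-res raw: {full_mb_s:.0f} MB/s")  # hundreds of MB/s
print(f"VGA raw:      {vga_mb_s:.1f} MB/s")
print(f"USB 2.0:      ~{usb2_mb_s} MB/s")
```

So even before considering what data the CPU can actually see, the bus math rules out full-resolution frames at video rates.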
IMO, there is no raw data visible to the main CPU during movie recording. Whatever YUV data eventually gets turned into compressed video is generated directly during readout, similar to how live view works; indeed, in some cases the live view buffer is used directly for video. So at best, you will find uncompressed YUV at the same resolution as the native video file. It has to be this way, since the system does not have the bandwidth to read out and process full-resolution images at video frame rates, and again, we know live view works this way.
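If you do grab that uncompressed YUV, you still need to convert it to RGB to view it. A minimal sketch, assuming a plain packed UYVY layout (U, Y0, V, Y1 per pair of pixels) and the standard BT.601 conversion; the actual CHDK viewport format differs between camera generations, so treat the layout as an assumption:

```python
# Illustrative UYVY -> RGB conversion for one macropixel, using the
# integer ITU-R BT.601 formulas. The real CHDK viewport layout varies
# by camera generation; plain UYVY is an assumption here.
def clamp(v):
    return max(0, min(255, int(v)))

def uyvy_to_rgb(u, y0, v, y1):
    """Convert one UYVY macropixel (two pixels sharing chroma) to RGB."""
    pixels = []
    for y in (y0, y1):
        c, d, e = y - 16, u - 128, v - 128
        r = clamp((298 * c + 409 * e + 128) >> 8)
        g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8)
        b = clamp((298 * c + 516 * d + 128) >> 8)
        pixels.append((r, g, b))
    return pixels

# Neutral chroma with video-range black and white luma:
print(uyvy_to_rgb(128, 16, 128, 235))
```

Getting the byte order wrong (e.g. treating UYVY as YUYV) is one classic source of the "weirdly colored" output described below.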
So, you're saying it's possible to get raw 640x480 video, since you mentioned I can find uncompressed YUV at the same resolution?
Also, you talked about bandwidth and read-out limitations, so what if we cheat a bit? Instead of filming progressive at full resolution, why not film at 720i? The camera would only be capturing half the lines of each frame, instead of a full progressive scan.
Hmmm, alright. But I'm still kind of confused. Videos are made by the sensor, so the sensor must be capturing 30 frames per second, and then the camera has to convert those frames into the AVI. The frames must be stored somewhere before being saved, at least I think. I mean, the sensor always takes in a raw image frame, and each frame has to go somewhere.
Regarding dumping live view data, it seems that sometimes we get frame skips, and the resolution is completely off. When I dumped some of the buffer using "rmem 0x1900 10485760 -f=10megs.bin.raw" and opened it in TiledGGD, I got the overlay data at 360x240 pixels, apparently at 8 bits/pixel. But for some reason, live view either stretches or warps everything, and everything is all over the place; I'm getting 720x540 pixels. On further inspection, a heavily "liney" and gritty version of the Canon splash screen appeared, also at 360x240. A bit lower down was a very grainy and "liney" image of the live view, with all the colors completely off, but still 360x240. And lower down was a much larger version of this live view image, also very grainy and "liney", where everything is black and white or weirdly colored.
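That "stretched/warped" look in a tile viewer usually means the guessed row width doesn't match the buffer's real stride. One way to narrow it down programmatically is to reshape the dump at several candidate widths and score each by row-to-row similarity: the correct stride tends to give the smoothest consecutive rows. A sketch with numpy, using a synthetic buffer with a known stride (the candidate widths are just illustrative guesses):

```python
import numpy as np

# Guess the row stride of a headerless 8-bit dump by scoring candidate
# widths: at the correct stride, consecutive rows of a natural image are
# most similar, so the mean absolute row-to-row difference is smallest.
def best_width(data: np.ndarray, candidates):
    scores = {}
    for w in candidates:
        rows = data[: (len(data) // w) * w].reshape(-1, w).astype(np.int32)
        scores[w] = np.abs(np.diff(rows, axis=0)).mean()
    return min(scores, key=scores.get)

# Synthetic buffer with a known stride of 360 bytes/row (smooth gradient):
true_w, true_h = 360, 240
img = ((np.arange(true_h)[:, None] + np.arange(true_w)[None, :]) % 256)
dump = img.astype(np.uint8).ravel()
print(best_width(dump, [320, 360, 480, 720]))
```

This is a heuristic, not a guarantee (flat or highly repetitive buffers can fool it), but it is often enough to find the stride of a live view or overlay buffer in a big memory dump.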