Nice work DragonLord!
I didn't have time to look further into the topic, so no news from me.
However, either something doesn't add up, or I don't understand the relation between camera resolution and sensor resolution.
I mean - if the raw CCD dump has 2 bytes per pixel, where are the color subpixels? For a 4000 x 3000 resolution with 12 bits per color, I would have expected around 4000 x 3000 x 1.5 bytes x 4 subpixels = 72 MB, or at least something close to that. So how come the output file has only 2 bytes per pixel?
About the YRGB file - did you compute the difference between the bitmap file and the JPEG file and get a result of 0? Or did you only compare them visually? As I said, I haven't looked further into the C code to see how the data is converted, so I have to ask: at what stage of the JPEG conversion is this YRGB file saved?
As a note - the 1.5x ratio between image size (in pixels) and file size (in bytes) works out to 12 bits per pixel.
Another point: with the exception of the YRGB file, all the other files seem to have around one byte per pixel (one of them is a bit larger, maybe due to a header carrying some other info) - so again, where are the color subpixels? Or are these also intermediate files from the JPEG conversion?