Thanks. I don't really understand the inner workings of the camera, so I cannot estimate this myself, but I suppose that a continuous long exposure would be superior to adding up several shorter exposures of the same total length. The main source of error, I suspect, is the process of quantizing the 'illumination' level of each pixel into the color depth the camera uses. Do you know how to estimate that?
I suppose that during a long exposure the 'value' of a particular pixel accumulates in much finer steps (electron charges?) than the number of numerical levels it gets rounded to once it is read out. Is that true? If so, rounding at every readout would possibly make the result of the combined exposures worse than that of a single long one.
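To make that concrete, here is a rough Monte Carlo sketch of how I would try to estimate it, if my mental model is right: simulate a pixel that collects the same total charge either in one readout or split over several, with read noise and ADC rounding applied at every readout. The sensor numbers (total electrons, read noise, gain) are pure guesses for illustration, not my actual camera.

```python
# Rough sketch: one long exposure vs. the same total light split into N subs.
# All sensor parameters below are assumed, not measured.
import numpy as np

rng = np.random.default_rng(0)

electrons_total = 2000.0   # electrons a pixel collects over the full exposure (assumed)
read_noise_e    = 5.0      # read noise in electrons per readout (assumed)
gain_e_per_adu  = 2.0      # electrons per ADU step of the ADC (assumed)
n_trials        = 100_000

def expose(n_subs):
    """Simulate splitting the same total exposure into n_subs readouts."""
    per_sub = electrons_total / n_subs
    # photon (shot) noise plus read noise for every readout
    e = rng.poisson(per_sub, size=(n_trials, n_subs)) \
        + rng.normal(0.0, read_noise_e, size=(n_trials, n_subs))
    # ADC quantization: round to whole ADU, then convert back to electrons
    adu = np.round(e / gain_e_per_adu)
    return (adu * gain_e_per_adu).sum(axis=1)

for n in (1, 10, 60):
    result = expose(n)
    print(f"{n:3d} sub(s): mean = {result.mean():7.1f} e-, std = {result.std():5.1f} e-")
```

If this model is roughly right, the penalty for splitting comes mainly from paying the read noise once per sub-exposure; the rounding step itself only adds about gain/√12 electrons per readout.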
Problem: I do astrophotography, and the sky moves due to the rotation of the Earth. I do have a mount that mechanically corrects for that, but working with very long exposures (30 s at least) and high magnifications (40x at least) means that the mount's correction is not exactly right (being just 10 px off is enough to lose the advantage of the long exposure, since the light spreads out and one loses detail). Though it is possible to expose for several minutes without the telescope (1x magnification), applying the magnification increases the problem magnification-fold.
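Just to put numbers on that, here is a back-of-the-envelope sketch of the drift in pixels for a given angular tracking error; the focal length, pixel size and error below are assumed values, not my actual setup.

```python
# How far a small angular tracking error moves a star on the sensor.
# All numbers are assumptions for illustration.
import math

focal_length_mm = 2000.0    # effective focal length at high magnification (assumed)
pixel_size_um   = 5.0       # pixel pitch of the camera sensor (assumed)
tracking_err_as = 2.0       # tracking error in arcseconds over the exposure (assumed)

err_rad  = math.radians(tracking_err_as / 3600.0)
drift_mm = focal_length_mm * math.tan(err_rad)
drift_px = drift_mm * 1000.0 / pixel_size_um
print(f'{tracking_err_as}" error -> {drift_px:.1f} px at {focal_length_mm:.0f} mm')
```

With a short lens instead of the telescope, the same angular error stays well below a pixel, which matches my experience that the problem only shows up at high magnification.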
Idea: In the old days of photographic plates, astronomers had another telescope attached to the big one and checked for any star movement. If they saw some, they corrected for it manually. Since I have only one telescope (which is already quite heavy for the mount), I wanted to be able to see whether the stars actually drift during the exposure and correct for it.
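What I have in mind is something like the following minimal sketch (frame capture is left out; the check frames are assumed to be plain 2D arrays): locate the brightest star in two short check frames, compute its intensity-weighted centroid in each, and compare.

```python
# Minimal sketch: measure star drift between two short check frames.
import numpy as np

def star_centroid(frame, box=15):
    """Intensity-weighted centroid in a small box around the brightest pixel."""
    y0, x0 = np.unravel_index(np.argmax(frame), frame.shape)
    half = box // 2
    ys0, xs0 = max(y0 - half, 0), max(x0 - half, 0)
    patch = frame[ys0:y0 + half + 1, xs0:x0 + half + 1].astype(float)
    patch -= np.median(patch)                 # crude background subtraction
    patch = np.clip(patch, 0, None)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return ys0 + (ys * patch).sum() / total, xs0 + (xs * patch).sum() / total

def drift(frame_a, frame_b):
    """Drift of the brightest star (dy, dx) in pixels between two frames."""
    ya, xa = star_centroid(frame_a)
    yb, xb = star_centroid(frame_b)
    return yb - ya, xb - xa
```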
Limitations: The idea of scripting a series of shorter exposures is a good one, though I am worried that the cruder quantisation applied at each readout might smear out the fine details I am going for in the first place. I have already tested such an approach (with JPEG, not RAW though - might that be an issue?) and it did not seem to help much.
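If I went the scripted-subs route again, I imagine the stacking side would look roughly like this sketch: estimate each sub-frame's offset against the first one by FFT cross-correlation, shift it back, and sum. Decoding the files (RAW or JPEG) is left out; the frames are assumed to be 2D float arrays, and only whole-pixel shifts are handled.

```python
# Sketch of shift-and-add stacking with drift correction between subs.
import numpy as np

def integer_offset(ref, img):
    """Peak of the FFT cross-correlation gives the whole-pixel shift of img vs. ref."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # offsets larger than half the frame size are really negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def stack(frames):
    """Shift-and-add all frames onto the first one."""
    ref = frames[0].astype(np.float64)
    total = ref.copy()
    for img in frames[1:]:
        dy, dx = integer_offset(ref, img)
        total += np.roll(img, (-dy, -dx), axis=(0, 1))  # undo the measured drift
    return total
```

If the drift within each sub stays small and only the drift between subs is corrected this way, the light should not spread out much more than in a single well-tracked sub.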
Thanks for any answers!