Multi-camera setup project. - page 28 - Creative Uses of CHDK - CHDK Forum


Offline rick

Re: Multi-camera setup project.
« Reply #270 on: 28 / November / 2014, 11:16:20 »
Very nice work :)
@reyalp: Is it possible to improve the synchronization of the multicam approach?

Quote from: reyalp
I'm not sure which commands you used. To get the best sync you should:
1) Use CHDK 1.3
2) Run init_sync
3) Send preshoot and wait for all cameras to be ready
4) Ensure the sync time is far enough in the future that the shoot command can be sent to all cameras
5) Shoot using the shoot_hook_sync command

If you post the exact code you used to shoot, I can probably tell you if anything is missing. The testshots function might be a useful example.
I didn't run step 2 (init_sync), and I will replace the shoot command with the shoot_hook_sync command. Do I need to run step 2 for each shot, or just sync once before taking shots?

Re: Multi-camera setup project.
« Reply #271 on: 28 / November / 2014, 15:09:46 »
Quote from: mphx
If more movement is involved, I believe we will have problems :)

...[]...

Finally, if you are thinking of making 3D models with the whole setup you are building, let's hope you have good 3D modelling skills, because the produced 3D model needs A LOT OF WORK to become ready for printing or whatever... the low image quality of this Canon series doesn't produce the best input for 3D modelling...

PS: we have bright lights in the shooting area... we don't mess with shutter speed, only ISO and zoom. Some photos come out bright and a few (2-3 out of 64) a bit darker... so maybe we should adjust shutter speed at some point... but we can handle the difference in light in those few photos during the 3D modelling procedure for the time being.

Some related and hopefully useful info:

http://www.agisoft.com/forum/index.php?topic=1972.msg10520#msg10520
"Does the displacement noise come from movement of the person? jpg noise in the image? blurriness of photo? (I did not include images to analyze where the image was not perfectly sharp)"

=>

http://www.agisoft.com/forum/index.php?topic=963.msg4689#msg4689
"...Now, what does all this tell us about using RAW or JPG in PhotoScan?
(a) The lowest possible ISO value should be used to reduce sensor noise.
(b) Underexposure and dark areas in images should be avoided because sensitivity to subtle brightness differences (and, as a result, feature detection) in dark areas is poor.
(c) RAW images suffer much less from quantisation noise and not at all from loss of image detail due to noise removal and compression artefacts, but they contain all sensor noise (which can be severe). JPG images, on the other hand, often have less (visible) noise than raw images because of the in-camera noise removal, and are much smaller files which are processed much faster. If your camera has very low sensor noise, RAW will be the better choice. For many consumer cameras, JPG may be the better choice unless you are able to apply a better sensor noise removal than the camera itself.
(d) In the end, it all comes down to SNR. You will always have some noise, and you should not only try to reduce noise but also to enhance the signal: a good lens, perfect focus, and an illumination which brings out as much fine detail as possible while not producing underexposed areas. In many cases, more photos taken closer to the subject will also help a lot...[]...Overexposure is different from underexposure in that the problem is not quantisation error but values exceeding the range of values (i.e., everything that is brighter than whatever is equivalent to a value of 255 will also have the value 255).
"

&
http://www.agisoft.com/forum/index.php?topic=960.msg4692#msg4692
Quote from: Infinite
Quote from: RalfH
I agree with Mr. Curious but want to stress that "bright lighting" is too simple: ideally, the lighting should be such that small detail (e.g. skin pores in this case) will be enhanced rather than subdued in the images. Visually smooth surfaces are really difficult because PhotoScan needs to be able to detect features. Experimentation with multi-directional vs. homogeneous diffuse lighting might be interesting in this respect.

The reason bright light is highlighted here is the experimentation with continuous light: to mimic the light levels one gets with flash, which is VERY bright but lasts only a short burst of time, around 1/10,000th of a second. To match that level of light with similar camera settings, low ISO (very important) but a fast shutter speed (1/100th), you need bright light just to reach the same quality. Obviously not over-bright, as this will wash out details. The alternative is noise projection, but this introduces other problems: being able to capture a color pass quickly straight after, or using additional separate texture cameras. Multi-directional lighting is possible, but you need VERY fast capture if you are scanning live subjects.

In summary, there are a lot of variables and avenues to explore. Since we're stuck with the lens & small sensor, flash (or perhaps even a two-stage flash, with one stage including noise projection) may be worth investigating.

PS:
the small-sensor benefit is expressed well in the same thread:
http://www.agisoft.com/forum/index.php?topic=960.msg4746#msg4746

Quote from: RalfH
Hello James,

full-format sensors usually have better image quality (e.g. less sensor noise and less "bleeding" between pixels). For close-range applications they are often not advantageous, because at the same field of view (a larger sensor combined with a longer focal length) they have a smaller depth of focus. For facades of buildings, air photos etc., full-format sensors would be preferable; for close-range applications you'd have to find a compromise between image quality and depth of focus.

« Last Edit: 28 / November / 2014, 15:50:46 by andrew.stephens.754365 »


Offline reyalp

Re: Multi-camera setup project.
« Reply #272 on: 28 / November / 2014, 15:43:52 »
Quote from: rick
Do I need to run step 2 for each shot, or just sync once before taking shots?
Just once per session. The "sync" process attempts to create a mapping between the tick counter on each camera and the PC system clock, so that you can issue a command to shoot at PC clock time X and multicam translates this into a camera tick time.

Also note that on some Windows PCs the system clock precision can be quite low (~15 ms), in which case init_sync will not be very effective. When this happens, the "send" times in the init_sync output will alternate between 0 and 15 ms values.
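To make the mapping concrete, here is a minimal sketch of the idea in plain Lua. This is not the actual multicam.lua code, and the function names and sample numbers are purely illustrative:
Code: [Select]
-- Sketch only: init_sync in effect estimates the offset between each
-- camera's tick counter and the PC clock; shooting at PC time T then
-- becomes shooting at camera tick T + offset.
local sync_offset -- estimated (camera tick - PC ms) for one camera

-- average several (pc_ms, tick) samples, like the init_sync lines below
local function estimate_offset(samples)
  local sum = 0
  for _, s in ipairs(samples) do
    sum = sum + (s.tick - s.pc_ms)
  end
  sync_offset = sum / #samples
end

-- translate a desired PC shoot time into a camera tick time
local function pc_to_tick(pc_ms)
  return pc_ms + sync_offset
end

estimate_offset({ {pc_ms = 1000, tick = 53260}, {pc_ms = 1210, tick = 53470} })
print(pc_to_tick(1500)) -- tick at which this camera should fire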

The whole sequence should look like this.
One-time actions:
Code: [Select]
> !mc=require'multicam'
> !mc:connect()
+ 1:Canon PowerShot D10 b=\\.\libusb0-0001--0x04a9-0x31bc d=bus-0 s=...
> !mc:start()
> !mc:cmdwait('rec')
1:rec
> !mc:init_sync()
1: send 3 diff 11 pred=53251 r=53260 delta=-8
...
1: send 6 diff 221 pred=53461 r=53470 delta=-8
1: ticks=10 min=-11 max=1 mean=-6.993300 sd=3.577292
1: sends=10 min=2 max=7 mean=4.400000 sd=1.800000
minimum sync delay 6
Note: I suggest switching to rec mode before doing init_sync, because the camera CPU is busier in rec mode, so the latency is higher.

The "minimum sync delay" gives you the calculated time required to send a command to each camera (this will increase linearly with the number of cameras). When you shoot, the syncat must be at least this far in the future. The value is a rough approximation, so I'd suggest adding some margin on top of that

Shoot a shot
Code: [Select]
> !mc:cmdwait('preshoot')
1:preshoot
> !mc:cmdwait('shoot_hook_sync',{syncat=100})
1:shoot_hook_sync 499081

I would be very interested to see how this compares to your previous tests; it should be significantly better.
Don't forget what the H stands for.


Offline mphx

Re: Multi-camera setup project.
« Reply #273 on: 28 / November / 2014, 16:20:31 »
@andrew

We have spent endless time and run endless tests to figure out how to avoid noise in the photos.

There are two solutions:

1. Use DSLR cameras :)
2. Use some degree of zoom. In my case this is impossible, because, as discussed in earlier posts, there is minor distortion at the edges of the photos.
Another way to emulate "zoom" is to put the cameras very close to the center, where the person will be standing.
For this to work you need, let's say, double the number of cameras to achieve the same result as before, since each camera will now "cover" only part of the person being shot and not the whole person.
So you need a larger number of cameras, split into smaller groups that each cover part of the person.

Just a reminder: we use the lowest possible ISO (100, if I'm not mistaken) and a lot of strong lights... but noise still remains.

Photoscan does a good job creating models... but in some cases where the outline of the person is the same color as the background... it gets messy...
I have spent endless hours "masking" photos (if you have used Photoscan, you know what I am talking about :))... to get a better result... so we won't have to sculpt and smooth the produced model a lot.


PS: Things are a bit better with the DNG (raw) format, because we can mass-manipulate photos in Photoshop and remove noise, BUT a JPEG is ~4 MB while a DNG is ~20 MB.
Transferring photos from the cameras to the PC takes ages with DNG... not very practical... so we are sticking with JPEG for the time being, since the job gets done with them...
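(For scale, assuming the ~64 photos per session mentioned above: that's roughly 64 × 20 MB ≈ 1.3 GB of DNGs per shoot versus 64 × 4 MB ≈ 256 MB of JPEGs.)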
« Last Edit: 28 / November / 2014, 16:22:56 by mphx »


Offline rick

Re: Multi-camera setup project.
« Reply #274 on: 29 / November / 2014, 04:37:19 »
Update on the sync results:
https://drive.google.com/open?id=0B4hGaTy8v6W1amdxbDNyOHJ4UUE&authuser=0

After optimizing the code, I tested it again and made a comparison. It really took me some time.

In the end, I found that reducing the syncat value as far as possible improves the results.
When I set syncat=50, I got the best result, which is less than ten times the result of the bcam method.
Of course, the right setting depends on the number of your cameras.

Also, if you want to download the last shot, you'd better sync the cameras again before the next shot. That way you get better results.

Anyway, I think the multicam approach is a worthwhile choice :)

Re: Multi-camera setup project.
« Reply #275 on: 29 / November / 2014, 10:45:35 »
More great info, mphx.

Quote from: mphx
...but noise still remains.

Mine is a note only: whatever your shutter speeds, they are many multiples of the length of a flash, i.e. what "noise" is being referred to... is the point cloud of a stuffed dummy equally poor?

Here's a link, on the off chance you are not aware of semi-automatic masking: http://www.agisoft.com/forum/index.php?topic=2846.msg15090#msg15090

« Last Edit: 29 / November / 2014, 10:58:32 by andrew.stephens.754365 »


Offline mphx

Re: Multi-camera setup project.
« Reply #276 on: 29 / November / 2014, 11:06:53 »

Quote from: andrew.stephens.754365
Here's a link, on the off chance you are not aware of semi-automatic masking: http://www.agisoft.com/forum/index.php?topic=2846.msg15090#msg15090

I was playing with semi-automatic masking in Photoscan yesterday... it's not as bright news as they make it out to be in that thread.

You have to align photos -- build the dense cloud (the most time-consuming step of all) -- build the mesh,

then import masks from the model... and then re-do the above steps (ok, you can skip aligning photos at this point...).

So you do almost double the job...

I did a test yesterday with align photos (HIGH setting) - dense cloud (MEDIUM/MILD setting) - build mesh (HIGH setting), and then I tried importing the mask from the model... the result was almost perfect... BUT minor adjustments were needed on all photos... not a small job either...

So, in conclusion, for "semi-automatic masking" to work well... you need align photos/dense cloud/mesh at high/medium/high AT LEAST, then some manual work on the masks... and then dense cloud/mesh again.

This is pointless unless someone has a good PC with a good GPU... using the latest pre-release version of Photoscan (1.1.0)... which has ridiculously big improvements in dense cloud build times.

/offtopic off :)

PS: I am a bit of a perfectionist, so if I mask something... I go to pixel level :) My best masking took about 1.5 days :P
Now I simply clean the "garbage" around the model after the dense cloud step...

PS2: The best way to mask images, in my opinion, is "import masks from background". The idea is, you take shots of the client and then take ONE shot of the empty studio... then you import this shot as a mask for each one of the photos...
We are thinking of implementing that in future shoots. Best masking ever :)
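As a toy illustration of that idea (plain Lua over luminance arrays, nothing PhotoScan-specific; the function name and threshold are made up):
Code: [Select]
-- Toy sketch of "import masks from background": pixels whose value is
-- close to the empty-studio shot are treated as background and masked out.
local function background_mask(photo, background, threshold)
  local mask = {}
  for i = 1, #photo do
    -- 1 = keep (subject), 0 = background
    mask[i] = math.abs(photo[i] - background[i]) > threshold and 1 or 0
  end
  return mask
end

-- tiny fake luminance rows: only the pixel that differs from the empty
-- shot survives, so m = {0, 1, 0}
local m = background_mask({10, 200, 12}, {11, 90, 12}, 20)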
« Last Edit: 29 / November / 2014, 11:11:39 by mphx »

Re: Multi-camera setup project.
« Reply #277 on: 29 / November / 2014, 13:12:35 »
Quote from: rick
Update on the sync results:
..
Anyway, I think the multicam approach is a worthwhile choice :)

This conclusion confused me until I studied your attachments more carefully. I think what this shows is that using a USB hardware sync solution (e.g. bcam) still results in about 10x better sync precision than the best PTP sync option (e.g. multicam)? But your conclusion is that the multicam solution is "good enough"?
Ported: A1200 SD940 G10 Powershot N G16

Re: Multi-camera setup project.
« Reply #278 on: 29 / November / 2014, 14:24:46 »
Quote from: mphx
PS2: The best way to mask images, in my opinion, is "import masks from background". The idea is, you take shots of the client and then take ONE shot of the empty studio... then you import this shot as a mask for each one of the photos...
We are thinking of implementing that in future shoots. Best masking ever :)

But be careful with that too, e.g.:

http://www.agisoft.com/forum/index.php?topic=2208.msg11686#msg11686

Quote from: tommyboy
Hi all.  We've been using PhotoScan's built-in tool to get masks from background images (i.e. a second set of clean plate/empty background images taken right after the subject is photographed).  It does quite a good job, but seems to have the following issues:

1) the edge of the mask is often too generous in letting in the background, which lets through the white background; this is particularly problematic when creating textures for hair and areas where 'webbing' is likely (fingers, armpit, crotch)
2) it often ends up masking out dark portions of the subject if they are in the same spot as a camera lens in the background image
3) due to diffuse reflection of the subject on our white floor, the contact point of their feet with the floor often includes a significant portion of the floor around their feet

So far we have been manually cleaning up these problem features using the PhotoScan tools, which ends up taking a good 60-90 seconds per photograph on average.  Obviously I'd like to take this number down to zero, so have been looking at alternative methods for background subtraction, which I could then feed into PhotoScan.  This project looked like a good option, but ended up having its own problems, and being rather slow:

http://docs.opencv.org/trunk/doc/tutorials/video/background_subtraction/background_subtraction.html

Is anyone else using external programs for mask generation?  I should know the answer to this but for some reason it is eluding me...

Thanks!


but only 2) & 3) are relevant:

http://www.agisoft.com/forum/index.php?topic=2208.msg11725#msg11725

Quote from: Infinite
Quote from: tommyboy

1) the edge of the mask is often too generous in letting in the background, which lets through the white background; this is particularly problematic when creating textures for hair and areas where 'webbing' is likely (fingers, armpit, crotch)


It doesn't really matter how accurate the masks are in this regard, as Photoscan still doesn't take masks into account during the hole-fill stage. Webbing will always occur until hole filling takes masking into account. I believe this is a complex problem to solve.

It's just as fast to do your editing after dense point cloud reconstruction. Edit or delete the points directly in 3D, then build the mesh. Photoscan will still web, but it will produce better results and will be faster than manually editing each mask image by hand.


 


Offline mphx

Re: Multi-camera setup project.
« Reply #279 on: 29 / November / 2014, 15:15:39 »
@andrew

Well...

1. Hair is always a problematic area... a lot of manual work is always done there... so with a mask or without, the work needed is more or less the same...
Fingers that get really screwed up are replaced by fake ones :) No client can tell the difference when he/she gets their miniature in their hands :)
Everything else in the model is fixable...
2. The problem described here occurs with the "import mask from model" method as well... so... no gain or loss here.
3. We use a colored base where people stand to get their shot. We need this base to exist in the produced model for measuring reasons (dimensions of the model in 3ds Max when we get it ready for printing).
So no problem here :)

The whole idea in "masking" is to mask photos in an optimal way... and by optimal I mean: don't waste a lot of time masking when a rougher masking would lead to the same results...

For the time being... I don't mask anything... I just clear the garbage after the dense cloud step... and see what result I get.
If the client was wearing troublesome colors or clothes... then I do manual masking.
My friend does a fast masking... a rectangle around the person, just to get the background out of the way, to decrease the time needed for align and dense cloud...
The produced result is workable, so... we are good :)
« Last Edit: 29 / November / 2014, 15:19:50 by mphx »

 
