
Multi-camera setup project.


Offline reyalp

Re: Multi-camera setup project.
« Reply #280 on: 29 / November / 2014, 16:13:16 »
After optimizing the code, I tested it again and made a comparison. It really took me some time.
Thanks for testing and reporting.
Quote
In the end, I found that reducing the syncat number as far as possible improves the results.
When I set syncat=50, I got the best result, which is less than ten times the result of the bcam way.
Of course, the setting depends on the number of your cameras.
If this is correct, it is a bug. All syncat should do is tell the cameras to fire N ms after the command is issued on the PC. If you set it too short, then some cameras will not have received the command yet. Any larger value should have the same sync accuracy; only the latency will be higher. Clock drift should be insignificant on the timescale involved.

Also note that syncat should be used with shoot as well; otherwise you are getting completely unsynchronized shooting, with each camera just firing as soon as the command arrives. With only two cameras this may work reasonably well, since transmission time is well under 10ms for cameras after Digic II.

All that said, I think ~30 ms max deviation is about as good as the current code can do:
the 10ms tick counter means that the "same" time on two cameras could be ~20ms apart, and the handoff from kbd_task to capt_seq uses a 10ms sleep loop.
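To put rough numbers on that, here is a back-of-envelope sketch in plain Lua (not actual CHDK or chdkptp code, just the two deviation sources added up):

Code:
-- Worst-case sync deviation between two cameras, per the reasoning above.
local TICK_MS  = 10  -- camera clock granularity: the 10ms tick counter
local SLEEP_MS = 10  -- kbd_task -> capt_seq handoff polls in a 10ms sleep loop

-- Two cameras reading the "same" target time can already disagree by up to
-- one tick each way, and the shoot handoff can add up to one more sleep period.
local clock_spread   = 2 * TICK_MS   -- ~20ms between two cameras
local handoff_spread = SLEEP_MS      -- up to ~10ms more
print("expected worst-case deviation (ms): " .. (clock_spread + handoff_spread))  -- 30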
Don't forget what the H stands for.


Offline rick

Re: Multi-camera setup project.
« Reply #281 on: 29 / November / 2014, 21:48:20 »
Update the sync results:
..
Anyway, I think the multicam way is a worthwhile choice :)
This conclusion confused me until I studied your attachments more carefully.  I think what this shows is that using a USB hardware sync solution (e.g.bcam)  still results in about 10x better sync precision than the best ptp sync option (e.g.multicam)?  But your conclusion is that the multicam solution is "good enough"?
:D
No doubt, using a USB hardware sync solution (e.g. bcam) is the best way! My description and attachments all show this point.
If someone gets ~30 ms max deviation from the ptp sync option, the multicam way is a worthwhile choice. :)

@waterwingz: Now that I use three a1200s, I can shoot just by using the switch on the central usb hub to disconnect the USB 5V power, without any modification (see b.jpg). Could I still use this method with 40+ cameras?
Could you give me some advice? Thanks! :)
« Last Edit: 29 / November / 2014, 21:57:30 by rick »


Offline rick

Re: Multi-camera setup project.
« Reply #282 on: 29 / November / 2014, 22:00:40 »
@andrew

The whole idea in "masking" is to mask photos in an optimal way... and by optimal I mean... don't waste a lot of time masking, since a rougher masking would lead to the same results...


I agree with mphx!

Re: Multi-camera setup project.
« Reply #283 on: 30 / November / 2014, 10:38:35 »
Post-processing this could be a real headache: https://www.youtube.com/watch?v=kF8SYObx1Fg. I wonder what's in the 5Mpix boxes.

Edit: having said that, they capture 2 frames with each camera (admittedly it takes 0.2 secs). The second is with noise projection and a 300 ms flash (which seems very long).

Edit2: same guys (I think) http://www.agisoft.com/forum/index.php?topic=1802.msg11496#msg11496
« Last Edit: 30 / November / 2014, 12:50:13 by andrew.stephens.754365 »


Offline mphx

Re: Multi-camera setup project.
« Reply #284 on: 30 / November / 2014, 15:57:47 »
Post-processing this could be a real headache: https://www.youtube.com/watch?v=kF8SYObx1Fg. I wonder what's in the 5Mpix boxes.

Edit: having said that, they capture 2 frames with each camera (admittedly it takes 0.2 secs). The second is with noise projection and a 300 ms flash (which seems very long).

Edit2: same guys (I think) http://www.agisoft.com/forum/index.php?topic=1802.msg11496#msg11496

First of all, I can't imagine how crappy those Pi camera modules can be :)
I don't know if there is any point in getting 100 Pis or fewer digital compact cameras... more or less the result would be similar...
Second of all, projecting noise on the "object" is something we have thought about... for cases where the colors of the person being shot are difficult to handle. (We are thinking of some kind of projector on the roof of the studio projecting a pattern all over the place... thus all over the person too... it's on the to-do list.)
But this doesn't solve any background problems or masking for that matter... it only helps modelling... not what will happen at the edges of the model if the colors blend with the background...
Although I don't understand what he means by "noise on the object"... won't the noise go all over the place?
Not clear enough for me...


Offline ahull

Re: Multi-camera setup project.
« Reply #285 on: 30 / November / 2014, 18:04:53 »
Removing or replacing the background is greatly simplified if the background is a uniform color.

http://en.wikipedia.org/wiki/Chroma_key

Shooting through small holes in a colour separation overlay screen would seem like an avenue worth exploring.

Making a suitable CSO screen should involve little more than dyeing some white sheets as closely as possible to the same shade, with a suitable hue chosen depending on the type of modeling you are doing.

Arrange the lighting to minimise shadows on the blue/green/cyan or whatever screens (and of course take great care to ensure the screens and light are not a fire hazard).
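To illustrate why a uniform backdrop makes the masking step so cheap, here is a minimal sketch in plain Lua (nothing CHDK-specific; the threshold and the toy image are made up):

Code:
-- Minimal chroma-key test: a pixel counts as background when the key
-- channel (green here) clearly dominates the other two channels.
local function is_background(r, g, b, margin)
  margin = margin or 40            -- arbitrary threshold
  return g > r + margin and g > b + margin
end

-- toy 2x2 "image", each pixel an {r,g,b} triple
local image = {
  { {200,  30,  40}, { 30, 220,  35} },
  { { 25, 210,  30}, {180, 170, 160} },
}
for y, row in ipairs(image) do
  for x, px in ipairs(row) do
    print(("pixel (%d,%d) background=%s"):format(x, y,
      tostring(is_background(px[1], px[2], px[3]))))
  end
end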
« Last Edit: 30 / November / 2014, 18:11:51 by ahull »

Re: Multi-camera setup project.
« Reply #286 on: 30 / November / 2014, 19:22:16 »
Removing or replacing the background is greatly simplified if the background is a uniform color.

http://en.wikipedia.org/wiki/Chroma_key

Shooting through small holes in a colour separation overlay screen would seem like an avenue worth exploring.
A Studio "Bullet Time Rig -SETUP" Example is Here

http://www.timesplice.com.au/images/1.jpg

Camera Array - 360 Degree Rig here

http://www.timesplice.com.au/360-camera-array.html

The 60 cameras are equivalent to 59 stereo pairs. The [interpolated, 1000 frame] example link is at the bottom.

H-H

« Last Edit: 30 / November / 2014, 19:38:17 by Hardware_Hacker »

Re: Multi-camera setup project.
« Reply #287 on: 01 / December / 2014, 11:02:01 »
we are thinking some kind of projector
mphx,
 
Looking at that Pi setup, I don't think my previous description, "The second with noise projection with 300 ms flash (seems very long)", is accurate - I don't think the projectors have been hacked for flash. It may be that the first frame is captured with both LEDs and projectors on (the projection being washed out by the LEDs), and the LEDs are then turned off for the second frame with the projectors.
 
I hadn't really thought about projection with the ptp trigger in your setup - but could it be as "straightforward" as setting your shutters to, say, a 20ms exposure period and then giving the second half of your cam count (or whatever split is appropriate) a syncat that is 20ms+30ms+5ms = 55ms (shutter + expected sync spread across all cams + 5ms margin) greater than the first group's?

Of course, that would need a switch to activate your GUI button (an edit of multicam.lua) and to turn the LEDs off (down, rather - see the update below) during the "margin" time (http://chdk.setepontos.com/index.php?topic=11667.msg117463#msg117463).

I've seen enough pictures with normal projectors to make it feel like the split cam count for each of geometry / texture could be worth it. 
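For what it's worth, the stagger arithmetic can be written out in a few lines of plain Lua (purely illustrative - this is not multicam.lua code, and the base value and group split are made up):

Code:
-- Stagger the second group's syncat so it fires only after every
-- group-1 shutter has closed (numbers from the paragraph above).
local SHUTTER_MS = 20   -- chosen exposure period
local SYNC_MS    = 30   -- expected worst-case sync spread across all cams (reply #280)
local MARGIN_MS  = 5    -- safety margin for switching the lights

local group1_syncat = 100  -- any base value long enough for the command to reach all cams
local group2_syncat = group1_syncat + SHUTTER_MS + SYNC_MS + MARGIN_MS  -- 100 + 55 = 155
print(group1_syncat, group2_syncat)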

Update:
http://www.agisoft.com/forum/index.php?topic=2305.msg16448#msg16448

Quote from: 3dmij
We use LED strips, powerful ones. 19watts per meter. We have 20 poles of 2 meter, so 760 watts of LED at about 90cm away from the person.

We have dimmed the LEDs a bit, not using PWM but decreasing the voltage from 24v to 19volt. This is to allow our projection system still to be able to project over the LED light.

« Last Edit: 02 / December / 2014, 12:52:58 by andrew.stephens.754365 »


Offline mphx

Re: Multi-camera setup project.
« Reply #288 on: 01 / December / 2014, 15:20:19 »
@andrew

I was talking about "projection" with my friend on this project.
The only viable solution is the following:

2-3 projectors projecting a pattern of dots from all possible angles.
The pattern is not very dense... all shooting will be done with this pattern at all times...
Then the usual drill... PhotoScan, 3ds Max and so on... when all is done, load the texture jpeg into Photoshop and "remove" the dots. Job done.

Re: Multi-camera setup project.
« Reply #289 on: 02 / December / 2014, 22:56:07 »
@ ahull, Reply #285, Thanks for your Post, it directly led me to a possible additional solution; see below.

The Studio " Studio "Bullet Time Rig -SETUP" by Mark Ruff, see Post #286, is an Inwardly Pointing Camera Array of 60 Cameras
that MAXIMIZES the Parallax in the Horizontal Plane, and MINIMIZES the Parallax in the Vertical Plane.

@ CHDK Bullet Time Projects; See Edit &1.
This also applies to Zcream's now abandoned Project, and to Novsela's and Nafraf's Projects.

@ Mphx "Multi-camera setup project" is an Inwardly Pointing Camera Array of 64 Cameras
that MAXIMISES the Parallax in the Horizontal Plane, and also MAXIMIZES the Parallax in the Vertical Plane. See, Edit &1

@ Mark Ruff then, in post processing, "Pictorially" recombines the Inwardly Pointing (i.e. MAXIMUM Parallax) Camera images with a separate Outwardly Pointing (i.e. MINIMUM Parallax, i.e. Panoramic) Camera image. For example, see the Link in Post #286.

@ Mphx #0, in post processing, then uses Agisoft PhotoScan to extract (Inwardly Pointing) Horizontal and Vertical Parallax Image Data to construct the PhotoScan 3D Mesh. Also see Post 269.
Also, Mphx #1 could produce an Agisoft PhotoScan with NO subject (i.e. NO central observer) for the Calibration of the Multi-Camera rig. See below.

~~~~Omnipolar Camera:~~~~

A New Approach to Stereo Immersive Capture
By Vincent Chapdelaine-Couture and Sébastien Roy
Département d'Informatique et recherche opérationnelle
Université de Montréal (Québec), Canada

PDF Extract:-

"...We introduce in this paper a camera setup for stereo immersive (omnistereo) capture.
An omnistereo pair of images gives stereo information up to 360 degrees around a central observer..."

For this post, a "central observer" is equivalent to the "Surface" of the PhotoScan 3D Mesh.

~~~~Epipolar Geometry~~~~

2503: Epipolar Geometry, by
A.D. Jepson and D.J. Fleet, 2006, PDF, page 1

PDF Extract:-

"...We consider two perspective images of a scene as taken from a stereo pair of cameras
 (or equivalently, assume the scene is rigid and imaged with a single camera from two
different locations)..."

"...The relationship between such corresponding image points turns out to be both simple and useful..."

@ Mphx #2, Agisoft PhotoScan already Semi-Automatically produces the required Outwardly Pointing Camera 3D Mesh Data, i.e. the 3D Positions of the Cameras.

@ [AgiSoft]; Re: Bumpy Surface
« Reply #17 on: September 17, 2014 » Hello Brit,

The basic principles of image acquisitions and pre-processing are the following:-
* do not crop original images,
* do not apply geometrical transformations (rotations or deformations) to the images,
* use image frame effectively,
* provide sufficient overlap and coverage of the surface being reconstructed.

Additional recommendations:-
* use lower ISO values,
* provide good focusing and sufficient focal depth to acquire sharp images,
* avoid using flash,
* if different focal length are used, make sure that cameras are grouped correctly into calibration groups
 (in Tools Menu -> Camera Calibration window).

Masking can be performed semi-automatically [#2] - at first you can generate masks from model and then adjust them manually. [#3]
 
Best regards, Alexey Pasumansky, AgiSoft LLC

@ [AgiSoft]; Re: Improved masking options for Human Scanning
« Reply #4 on: April 05, 2014
"...It's just as fast to do your editing after dense point cloud reconstruction..."
"...Photoscan will still web but it will produce better results and will be faster than manually editing each mask image by hand..."
"...It doesn't really matter how accurate the masks are in this regard as PhotoScan still doesn't take into account masks during hole fill stage..."

@ Mphx #3, Then a further small amount of [manual] (???) post processing of the 3D Position Mesh Data of the Cameras is done by:-

3D Printing and Data Visualisation:-
A Technology Briefing by Paul Bourke iVEC @ UWA
Slides online here: http://paulbourke.net/3dprint2014/

PDF Extract:- "...Geometry: Thickening Surfaces..."

• Solution is called “Rolling Ball” Thickening.
• Imagine a ball rolling across the surface, form an "INTERNAL" [external] mesh along the ball path.
• Implemented in Blender as a modifier called “solidify”.
• Modifiers are elegant since they don’t permanently affect the geometry, can be changed later.

@ Andrew (A Re-post is OK), or Anyone; Confused, Comments.

@ PhotoScan Processing Summary and Valid Data:-

* PhotoScanned Images are Mapped onto the Exterior Surface of the PhotoScanned 3D Mesh.

* The PhotoScan Mask is on the Interior Surface of the Camera Position 3D Mesh.

* An Optional "INTERNAL" Thick PhotoScan Mask is also on the Interior Surface of the Camera Position 3D Mesh.

* "...PhotoScan will still web..."

* "...PhotoScan still doesn't take into account masks during hole fill stage..." 
 
H-H

Edit &1

Automatic Disparity Control in Stereo Panoramas (OmniStereo)

By Yael Pritch, Moshe Ben-Ezra and Shmuel Peleg
School of Computer Science and Engineering
Hebrew University of Jerusalem 91904, ISRAEL

See:- Figure 1. No arrangement of two single viewpoint images can give stereo in all viewing directions.
For upward viewing the two cameras should be separated horizontally,
and for sideways viewing the two cameras should be separated vertically.
« Last Edit: 02 / December / 2014, 23:13:13 by Hardware_Hacker »

 
