@ ahull, Reply #285, Thanks for your Post; it directly led me to a possible additional solution, see below.
The Studio " Studio "Bullet Time Rig -SETUP" by Mark Ruff, see Post #286, is an Inwardly Pointing Camera Array of 60 Cameras
that MAXIMIZES the Parallax in the Horizontal Plane, and MINIMIZES the Parallax in the Vertical Plane.
@ CHDK Bullet Time Projects; See, Edit &1
This also applies to Zcream's now abandoned Project, and to Novsela's and Nafraf's Projects.
@ Mphx "Multi-camera setup project" is an Inwardly Pointing Camera Array of 64 Cameras
that MAXIMIZES the Parallax in the Horizontal Plane, and also MAXIMIZES the Parallax in the Vertical Plane. See, Edit &1
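As a rough illustration of the ring-rig geometry described above (a minimal sketch; the camera count of 60 and the 2 m radius are placeholder figures, not Mark Ruff's actual dimensions): cameras on a horizontal circle, all aimed at the centre, have baselines that lie entirely in the horizontal plane, which is why such an array maximizes horizontal parallax and minimizes vertical parallax.

```python
import math

def ring_rig(n_cameras, radius):
    """Inward-pointing ring rig: cameras placed on a horizontal circle,
    all aimed at the centre.  Every inter-camera baseline lies in the
    horizontal plane, so parallax is maximized horizontally and
    minimized (zero) vertically."""
    cams = []
    for i in range(n_cameras):
        a = 2.0 * math.pi * i / n_cameras
        pos = (radius * math.cos(a), radius * math.sin(a), 0.0)
        # look-at direction: unit vector from the camera towards the centre
        look = (-math.cos(a), -math.sin(a), 0.0)
        cams.append((pos, look))
    return cams

rig = ring_rig(60, 2.0)  # 60 cameras, 2 m radius (hypothetical figures)
```

Every position and look-at direction has a zero vertical (z) component, which is the "minimized vertical parallax" property in miniature; a rig like Mphx's, with cameras at several heights, would break exactly that invariant.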
@ Mark Ruff then in post processing, "Pictorially" recombines the Inwardly Pointing (i.e. MAXIMUM Parallax) Camera images with a separate Outwardly Pointing Camera image.
(i.e. MINIMUM Parallax, i.e. Panoramic) For Example, see the Link in Post #286.
@ Mphx #0, in post processing, then uses Agisoft PhotoScan to extract the (Inwardly Pointing) Horizontal and Vertical Parallax Image Data to construct the PhotoScan 3D Mesh. Also see Post #269
Also, Mphx #1, could produce an Agisoft PhotoScan Mesh with NO subject (i.e. NO central observer) for the Calibration of the Multi-Camera rig. See below.
A New Approach to Stereo Immersive Capture
By Vincent Chapdelaine-Couture and Sébastien Roy
Département d'Informatique et recherche opérationnelle
Université de Montréal (Québec), Canada
"...We introduce in this paper a camera setup for stereo immersive (omnistereo) capture.
An omnistereo pair of images gives stereo information up to 360 degrees around a central observer..."
For this post, a "central observer" is equivalent to the "Surface" of the PhotoScan 3D Mesh.
2503: Epipolar Geometry by
A.D. Jepson and D.J. Fleet, 2006, PDF, Page: 1
"...We consider two perspective images of a scene as taken from a stereo pair of cameras
(or equivalently, assume the scene is rigid and imaged with a single camera from two different camera positions)..."
"...The relationship between such corresponding image points turns out to be both simple and useful..."
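The quoted "simple and useful" relationship between corresponding points can be sketched numerically. Assume, for illustration only (this is not the notes' worked example), a stereo pair with a pure horizontal baseline, no rotation, and unit-focal pinhole cameras; the fundamental matrix then reduces to the cross-product with the baseline, and corresponding image points satisfy the epipolar constraint.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical stereo pair: camera 2 is camera 1 translated by baseline t
# along x, with no rotation and unit-focal intrinsics, so the fundamental
# matrix reduces to the essential matrix E = [t]_x (cross product with t).
t = (1.0, 0.0, 0.0)

# A 3-D scene point and its pinhole projection in each camera:
X = (0.5, 0.2, 4.0)
x1 = (X[0] / X[2], X[1] / X[2], 1.0)                    # image in camera 1
X2 = (X[0] - t[0], X[1] - t[1], X[2] - t[2])            # point in camera-2 frame
x2 = (X2[0] / X2[2], X2[1] / X2[2], 1.0)                # image in camera 2

# Epipolar constraint: x2 . ([t]_x x1) = x2 . (t x x1) = 0
residual = dot(x2, cross(t, x1))
```

With a pure horizontal baseline the constraint just says the two image points share the same vertical coordinate (horizontal epipolar lines), which is the special case PhotoScan generalizes when it recovers camera positions from arbitrary multi-camera arrangements.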
@ Mphx #2, Agisoft PhotoScan already Semi-Automatically produces the required, Outwardly Pointing Camera, 3D Mesh Data, i.e. the 3D Positions of the Cameras.
@ [AgiSoft]; Re: Bumpy Surface
« Reply #17 on: September 17, 2014, » Hello Brit,
The basic principles of image acquisitions and pre-processing are the following:-
* do not crop original images,
* do not apply geometrical transformations (rotations or deformations) to the images,
* use image frame effectively,
* provide sufficient overlap and coverage of the surface being reconstructed.
* use lower ISO values,
* provide good focusing and sufficient focal depth to acquire sharp images,
* avoid using flash,
* if different focal lengths are used, make sure that cameras are grouped correctly into calibration groups
(in Tools Menu -> Camera Calibration window).
Masking can be performed semi-automatically [#2] - at first you can generate masks from model and then adjust them manually. [#3]
Best regards, Alexey Pasumansky, AgiSoft LLC
@ [AgiSoft]; Re: Improved masking options for Human Scanning
« Reply #4 on: April 05, 2014
"...It's just as fast to do your editing after dense point cloud reconstruction..."
"...Photoscan will still web [work] but it will produce better results and will be faster than manually editing each mask image by hand..."
"...It doesn't really matter how accurate the masks are in this regard as PhotoScan still doesn't take into account masks during hole fill stage..."
@ Mphx #3, Then a further small amount of [manual] post processing of the 3D Position Mesh Data of the Cameras is then done by:-
3D Printing and Data Visualisation:-
A Technology Briefing by Paul Bourke, iVEC @ UWA
Slides online here: http://paulbourke.net/3dprint2014/
PDF Extract:- "...Geometry: Thickening Surfaces..."
• Solution is called “Rolling Ball” Thickening.
• Imagine a ball rolling across the surface, form an "INTERNAL" [external] mesh along the ball path.
• Implemented in Blender as a modifier called “solidify”.
• Modifiers are elegant since they don’t permanently affect the geometry, and can be changed later.
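A minimal sketch of the "thickening" idea from the slides above (this is not Blender's actual Solidify implementation, which also builds rim faces and handles shared edges): offset each vertex inward along its vertex normal to create an inner shell. The unit-sphere sample points and the 0.1 thickness are placeholder values.

```python
def solidify(vertices, normals, thickness):
    """Naive surface thickening: duplicate the surface, offsetting each
    vertex inward along its unit vertex normal by `thickness`, in the
    spirit of Blender's Solidify modifier (without rim geometry)."""
    inner = [tuple(v[i] - thickness * n[i] for i in range(3))
             for v, n in zip(vertices, normals)]
    return list(vertices) + inner  # outer shell followed by inner shell

# Sample points on a unit sphere: the outward normal equals the position.
pts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
shell = solidify(pts, pts, 0.1)
```

For the unit-sphere samples the inner shell sits at radius 0.9, which is the "INTERNAL" thickened surface this post wants for the Camera Position Mesh mask.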
@ Andrew (A Re-post is OK), or Anyone; I am Confused, Comments welcome.
@ PhotoScan Processing Summary and Valid Data:-
* PhotoScanned Images Mapped on to the Exterior Surface of the PhotoScanned 3D Mesh.
* PhotoScan Mask is on Interior Surface of the Camera Position 3D Mesh.
* Optional "INTERNAL" Thick PhotoScan Mask is also on Interior Surface of Camera Position 3D Mesh.
* "...PhotoScan will still web [work]..."
* "...PhotoScan still doesn't take into account masks during hole fill stage..."
Automatic Disparity Control in Stereo Panoramas (OmniStereo)
By Yael Pritch, Moshe Ben-Ezra, and Shmuel Peleg
School of Computer Science and Engineering
Hebrew University of Jerusalem 91904, ISRAEL
See:- Figure 1. No arrangement of two single viewpoint images can give stereo in all viewing directions.
For upward viewing the two cameras should be separated horizontally,
and for sideways viewing the two cameras should be separated vertically.
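Figure 1's point can be sketched numerically (a simplified model for illustration, not Pritch et al.'s actual formulation): the stereo disparity available in a viewing direction scales with the component of the baseline perpendicular to that direction, so a single horizontally separated camera pair gives full stereo when looking upward but none when looking along its own baseline.

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def stereo_factor(baseline, view_dir):
    """Fraction of the baseline usable for stereo in a given viewing
    direction: |baseline x view| / (|baseline| |view|), i.e. the sine of
    the angle between baseline and viewing direction."""
    return norm(cross(baseline, view_dir)) / (norm(baseline) * norm(view_dir))

horizontal_pair = (1.0, 0.0, 0.0)   # two cameras separated horizontally

along_baseline = stereo_factor(horizontal_pair, (1.0, 0.0, 0.0))  # no stereo
looking_up     = stereo_factor(horizontal_pair, (0.0, 0.0, 1.0))  # full stereo
```

Since no single baseline orientation keeps this factor non-zero for all 360 degrees of viewing directions, two single-viewpoint images cannot give stereo everywhere, which is why omnistereo capture (and, analogously, a bullet-time ring) needs many viewpoints around the subject.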