Usage
Next, a usage example of the available modules is presented. For this we used the Sceaux Castle images and the OpenMVG pipeline to recover the camera positions and the sparse point-cloud. Please note that all output presented here is the original output obtained automatically by the OpenMVS pipeline, with no manual manipulation of the results. The complete example (including Windows x64 binaries for the modules) can be found at OpenMVS_sample.
After reconstructing the scene, OpenMVG will by default generate the sfm_data.bin file containing the camera poses and the sparse point-cloud. Using the exporter tool, we convert it to the OpenMVS project scene.mvs:
openMVG_main_openMVG2openMVS -i sfm_data.bin -o scene.mvs
To import a scene reconstructed by OpenMVG using the old .json ASCII format, run the following:
openMVG_main_openMVG2openMVS -i scene.json -o scene.mvs
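The exporter can also write out the undistorted images that the later OpenMVS steps read. A sketch, assuming the -d output-directory flag of openMVG_main_openMVG2openMVS (verify against --help on your build); the guard makes the snippet a no-op on machines without OpenMVG installed:

```shell
# Convert the OpenMVG scene and export the undistorted images for OpenMVS.
# The -d flag and its behavior are an assumption; check --help first.
if command -v openMVG_main_openMVG2openMVS >/dev/null 2>&1; then
  openMVG_main_openMVG2openMVS -i sfm_data.bin -o scene.mvs -d undistorted_images
  status=ran
else
  # OpenMVG is not installed here; skip rather than fail.
  status=skipped
fi
echo "conversion: $status"
```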
A typical sparse point-cloud obtained by the previous steps will look like this:
If there are missing scene parts, the dense reconstruction module can recover them by estimating a dense point-cloud:
DensifyPointCloud scene.mvs
The obtained dense point-cloud (please note that the vertex colors are estimated roughly, for visualization only; they do not contribute further down the pipeline):
The sparse or dense point-cloud obtained in the previous steps is used as the input of the mesh reconstruction module:
ReconstructMesh scene_dense.mvs
The obtained mesh:
The mesh obtained either from the sparse or the dense point-cloud can be further refined to recover fine details or even larger missing parts. Next, the rough mesh obtained from the sparse point-cloud alone is refined:
RefineMesh scene_mesh.mvs
The mesh before and after refinement:
The mesh obtained in the previous steps is used as the input of the mesh texturing module:
TextureMesh scene_dense_mesh.mvs
The obtained mesh plus texture:
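For convenience, the dense branch of the steps above can be chained in one small script. This is only a sketch built from the exact commands on this page; the default output names (scene_dense.mvs, scene_dense_mesh.mvs) follow the suffix convention shown above, and the guard skips everything when the binaries are not on PATH:

```shell
#!/bin/sh
# Run export -> densify -> mesh -> texture, using the commands from this page.
missing=0
for tool in openMVG_main_openMVG2openMVS DensifyPointCloud ReconstructMesh TextureMesh; do
  command -v "$tool" >/dev/null 2>&1 || missing=1
done

if [ "$missing" -eq 0 ]; then
  openMVG_main_openMVG2openMVS -i sfm_data.bin -o scene.mvs
  DensifyPointCloud scene.mvs            # writes scene_dense.mvs
  ReconstructMesh scene_dense.mvs        # writes scene_dense_mesh.mvs
  TextureMesh scene_dense_mesh.mvs       # writes the textured mesh
else
  # Binaries not found on this machine; nothing to do.
  echo "OpenMVG/OpenMVS tools not found, skipping"
fi
```

RefineMesh can optionally be inserted between ReconstructMesh and TextureMesh, as shown above; since its default output name may depend on the build, check the file it writes before passing it on to TextureMesh.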