
A usage example of the available modules is presented next. For this we used the Sceaux Castle images and the OpenMVG pipeline to recover the camera positions and the sparse point-cloud. Please note that all output presented here is the original output obtained automatically by the OpenMVS pipeline, with no manual manipulation of the results. The complete example (including Windows x64 binaries for the modules) can be found at OpenMVS_sample.

All OpenMVS binaries accept a number of command-line parameters, which are explained in detail when a binary is run with no parameters or with -h.
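
For example, to print the full list of parameters accepted by the densification module used later in this walkthrough:

DensifyPointCloud -h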

@FlachyJoe contributed a script that automates running OpenMVG and OpenMVS in a single command line. The same results as below can be obtained by running:

python MvgMvsPipeline.py <images_folder> <output_folder>
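
For example, with the input images in one folder and an empty output folder (the paths below are only placeholders):

python MvgMvsPipeline.py ~/datasets/sceaux_castle ~/reconstructions/sceaux_castle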

Convert scene from OpenMVG

After reconstructing the scene, OpenMVG generates by default the sfm_data.bin file containing the camera poses and the sparse point-cloud. Using the exporter tool provided by OpenMVG, we convert it to the OpenMVS project scene.mvs:

openMVG_main_openMVG2openMVS -i sfm_data.bin -o scene.mvs -d scene_undistorted_images

The directory specified with the -d switch will store the undistorted images.
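
If the OpenMVG output lives in its own folder, the same call can be written with explicit paths; the folder layout below is only an assumed example:

openMVG_main_openMVG2openMVS -i reconstruction_sequential/sfm_data.bin -o mvs/scene.mvs -d mvs/scene_undistorted_images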

A typical sparse point-cloud obtained by the previous steps will look like this:

[image: sparse point-cloud]

The Viewer module can be used to visualize any MVS project file or PLY/OBJ file. The viewer expects the input file either on the command line, or dragged & dropped into its window. Viewer was used to create all the screenshots below.
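
For example, to inspect the sparse scene produced above:

Viewer scene.mvs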

The output of each OpenMVS module is displayed on the console by default and stored in a LOG file. Examples of the generated LOG files can also be found at OpenMVS_sample.

Dense Point-Cloud Reconstruction (optional)

If scene parts are missing, the dense reconstruction module can recover them by estimating a dense point-cloud, employing by default a Patch-Match approach:

DensifyPointCloud scene.mvs
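
On large images the densification can be sped up by matching at a reduced resolution via the --resolution-level parameter (how many times to halve the images before matching; check DensifyPointCloud -h for the exact semantics and default in your version), e.g.:

DensifyPointCloud scene.mvs --resolution-level 2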

The obtained dense point-cloud (please note that the vertex colors are roughly estimated for visualization only; they do not contribute further down the pipeline):

[image: dense point-cloud]

Alternatively, the dense reconstruction module can estimate a dense point-cloud using Semi-Global Matching (SGM) in two steps: first estimating disparity-maps between all valid image pairs, then fusing them into the final point-cloud:

DensifyPointCloud scene.mvs --fusion-mode -1
DensifyPointCloud scene.mvs --fusion-mode -2

Rough Mesh Reconstruction

The sparse or dense point-cloud obtained in the previous steps is used as the input of the mesh reconstruction module:

ReconstructMesh scene_dense.mvs
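
As mentioned above, the module also accepts the sparse scene directly, in which case the mesh is reconstructed from the sparse point-cloud alone; this produces the scene_mesh.mvs project refined in the next section:

ReconstructMesh scene.mvs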

The obtained mesh:

[image: rough mesh]

Mesh Refinement (optional)

The mesh obtained from either the sparse or the dense point-cloud can be further refined to recover fine details or even larger missing parts. Next, the rough mesh obtained from only the sparse point-cloud is refined:

RefineMesh scene_mesh.mvs
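
If refinement is too slow at full resolution, it can be run on down-scaled images via the --resolution-level parameter (assumed here to behave as in DensifyPointCloud; confirm with RefineMesh -h):

RefineMesh scene_mesh.mvs --resolution-level 1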

The mesh before and after refinement:

[images: rough mesh, refined mesh]

Similarly, the rough mesh obtained from the dense point-cloud can be refined:

RefineMesh scene_dense_mesh.mvs --max-face-area 16
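
The --max-face-area parameter caps how large (in pixels) a face may project in the images before it stops being subdivided, so lower values yield a finer, more detailed mesh; this description follows the module's help text, so verify it with RefineMesh -h for your version.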

The mesh before and after refinement:

[images: rough mesh, refined mesh]

Mesh Texturing

The mesh obtained in the previous steps is used as the input of the mesh texturing module:

TextureMesh scene_dense_mesh_refine.mvs
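
The texturing module can equally be applied to the rough mesh if the optional refinement step was skipped (the file name here follows the same suffix convention as above):

TextureMesh scene_dense_mesh.mvs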

The obtained mesh plus texture:

[image: textured mesh]

Note that the triangles textured in orange (the default) are not visible in any of the input images; they can be colored differently or removed.
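
If supported by your build, the color used for these faces can be changed with the --empty-color parameter of TextureMesh (the option name and its numeric color format are assumptions; confirm with TextureMesh -h), e.g. to paint them black:

TextureMesh scene_dense_mesh_refine.mvs --empty-color 0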

Exporting and Viewing Results

Each of the above commands also writes a PLY file that can be used with many third-party tools. Alternatively, Viewer can be used to export the MVS projects to PLY or OBJ formats.
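
For example, assuming the --export-type option is available in your build of Viewer (check Viewer -h) and that the textured project follows the naming convention above, the final scene can be exported to OBJ with:

Viewer scene_dense_mesh_refine_texture.mvs --export-type obj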
