First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.
You can create an Anaconda environment called conv_onet using
conda env create -f environment.yaml
conda activate conv_onet
Note: you might need to install torch-scatter manually following the official instructions:
pip install torch-scatter==2.0.4 -f https://pytorch-geometric.com/whl/torch-1.4.0+cu101.html
Next, compile the extension modules. You can do this via
python setup.py build_ext --inplace
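To verify that the build succeeded, you can check that the compiled shared objects were produced. This is a minimal sketch; it assumes the extension sources live under src/ as in the upstream Convolutional Occupancy Networks layout.
```python
# Minimal build check: list the compiled extension modules produced by
# `python setup.py build_ext --inplace`. The src/ location is an assumption
# based on the upstream Convolutional Occupancy Networks layout.
import glob

built = glob.glob('src/**/*.so', recursive=True) + glob.glob('src/**/*.pyd', recursive=True)
print(f'Found {len(built)} compiled extension module(s):')
for path in built:
    print(' ', path)
```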
For a quick demo, first run the script to get the demo data:
bash scripts/download_demo_data.sh
You can then test on our synthetic room dataset by running:
python generate.py configs/pointcloud/demo_syn_room.yaml
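If you want to inspect the generated meshes programmatically, a short sketch with trimesh looks like the following. The search pattern under out/ is an assumption; use the output directory configured in configs/pointcloud/demo_syn_room.yaml.
```python
# Sketch: load and inspect one generated mesh with trimesh. The out/ search
# pattern is an assumption; check the output directory set in
# configs/pointcloud/demo_syn_room.yaml for the actual location.
import glob
import trimesh

mesh_files = sorted(glob.glob('out/**/*.off', recursive=True))
mesh = trimesh.load(mesh_files[0])
print(mesh_files[0], '-', len(mesh.vertices), 'vertices,', len(mesh.faces), 'faces')
print('watertight:', mesh.is_watertight)
```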
To evaluate a pretrained model or train a new model from scratch, you have to obtain the respective dataset. In this paper, we consider the following dataset:
For scene-level reconstruction, we create a synthetic dataset of 5000 scenes with multiple objects from ShapeNet (chair, sofa, lamp, cabinet, table). There are also ground planes and randomly sampled walls.
You can download our preprocessed data (144 GB) using
bash scripts/download_data.sh
This script should download and unpack the data automatically into the data/synthetic_room_dataset folder.
Note: We also provide point-wise semantic labels in the dataset, which might be useful.
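If you want to use the labels, the sketch below shows one way to see which arrays a preprocessed sample contains. The directory layout and the key holding the semantic labels are assumptions; inspect the listed arrays first.
```python
# Sketch: list the arrays stored in one preprocessed .npz sample, e.g. to find
# the point-wise semantic labels. The key holding them is not assumed here;
# inspect `sample.files` to locate it.
import glob
import numpy as np

npz_path = sorted(glob.glob('data/synthetic_room_dataset/**/*.npz', recursive=True))[0]
sample = np.load(npz_path)
print(npz_path)
print('available arrays:', sample.files)
```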
When you have installed all binary dependencies and obtained the preprocessed data, you are ready to run our pre-trained models and train new models from scratch.
To generate meshes using a trained model, use
python generate.py configs/pointcloud/room_3plane_vae.yaml
For evaluation of the models, we provide the script similarity.py, which finds the most similar mesh in the training data using the Chamfer distance. You can run it using:
python similarity.py target_mesh source_meshes
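For reference, a Chamfer distance between two meshes can be computed along the following lines. This is an illustrative sketch, not the similarity.py implementation itself; the sample count, the symmetric-mean formulation, and the placeholder file names are assumptions.
```python
# Illustrative sketch of a symmetric Chamfer distance between two meshes,
# roughly the quantity similarity.py uses for ranking. Sample count and the
# mean-of-nearest-neighbour formulation are assumptions, not the script's code.
import trimesh
from scipy.spatial import cKDTree

def chamfer_distance(mesh_a_path, mesh_b_path, n_points=30000):
    pts_a = trimesh.load(mesh_a_path).sample(n_points)  # surface samples of mesh A
    pts_b = trimesh.load(mesh_b_path).sample(n_points)  # surface samples of mesh B
    d_ab, _ = cKDTree(pts_b).query(pts_a)  # A -> nearest point on B
    d_ba, _ = cKDTree(pts_a).query(pts_b)  # B -> nearest point on A
    return d_ab.mean() + d_ba.mean()

# Placeholder paths for illustration only.
print(chamfer_distance('target_mesh.off', 'source_mesh.off'))
```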
For the training of the first stage, i.e. the joint Convolutional Occupancy Network (ConvONet) and VAE model, run:
python train.py configs/pointcloud/room_3plane_vae.yaml
For available training options, please take a look at configs/default.yaml.
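To browse those options without digging through the code, a small helper like the one below can print the top-level sections of the defaults and show which of them the experiment config overrides. It is only a sketch and does not assume any particular option names in the config schema.
```python
# Sketch: print the top-level sections of configs/default.yaml and report which
# of them configs/pointcloud/room_3plane_vae.yaml overrides. No specific option
# names are assumed; it only walks whatever keys the YAML files contain.
import yaml

with open('configs/default.yaml') as f:
    defaults = yaml.safe_load(f)
with open('configs/pointcloud/room_3plane_vae.yaml') as f:
    experiment = yaml.safe_load(f)

for section, options in defaults.items():
    keys = list(options) if isinstance(options, dict) else options
    print(f'{section}: {keys}')
print('overridden sections:', sorted(set(defaults) & set(experiment)))
```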
For the training of the second stage, i.e. the Latent Diffusion Model, run:
python diffusion_model/train_diff.py diffusion_model/diff.yaml
For available training options, please take a look at diffusion_model/default.yaml.
We adapt code from:
- Convolutional Occupancy Networks (https://github.com/autonomousvision/convolutional_occupancy_networks) for the PointNet encoder
- Diffusion-SDF (https://github.com/princeton-computational-imaging/Diffusion-SDF) for the VAE model
- DALLE2-pytorch (https://github.com/lucidrains/DALLE2-pytorch) for the diffusion model