Point Detectron

Created by Xu Liu, from JD AI Research and The University of Tokyo.

(teaser figure)

Introduction

This repository is the code release for our NeurIPS 2020 paper Group Contextual Encoding for 3D Point Clouds (Online Paper here) and our 3DV 2020 paper Dense Point Diffusion for 3D Detection (arXiv report here).

This repository is built on VoteNet; we extend the VoteNet model with the Group Contextual Encoding block, the Dense Point Diffusion modules, and the Dilated Point Convolution.

Citation

@article{liu2020group,
  title={Group Contextual Encoding for 3D Point Clouds},
  author={Liu, Xu and Li, Chengtao and Wang, Jian and Wang, Jingbo and Shi, Boxin and He, Xiaodong},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Installation

Install PyTorch and TensorFlow (for TensorBoard). You will need access to GPUs. MATLAB is required to prepare the SUN RGB-D data. The code is tested with Ubuntu 18.04, PyTorch v1.1, TensorFlow v1.14, CUDA 10.0 and cuDNN v7.4. Note: there are some incompatibilities with newer versions of PyTorch (e.g. v1.3), which are yet to be fixed.
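Before compiling the CUDA layers below, a quick sanity check can confirm that PyTorch sees your GPUs and roughly matches the tested versions; this snippet is only a minimal sketch and is not part of the original instructions:

# Environment check (sketch, not part of the original repo): confirm PyTorch,
# CUDA availability and versions roughly match the tested setup above.
import torch

print("PyTorch:", torch.__version__)        # tested with v1.1
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)  # tested with 10.0
print("GPUs:", torch.cuda.device_count())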

Compile the CUDA layers for PointNet++, which we used in the backbone network:

cd pointnet2
python setup.py install

To check that the compilation was successful, try running python models/votenet.py and verify that a forward pass works.

Install the following Python dependencies (with pip install):

matplotlib
opencv-python
torch-encoding
plyfile
'trimesh>=2.35.39,<2.35.40'
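For example, all of the above can be installed in one command (assuming pip points at the same Python environment used for PyTorch):

pip install matplotlib opencv-python torch-encoding plyfile 'trimesh>=2.35.39,<2.35.40'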

Run demo

Following VoteNet, you can run the demo by placing the pretrained models under the project root path (/path/to/project/demo_files) and then running:

python demo.py

The demo uses a pre-trained model (on SUN RGB-D) to detect objects in a point cloud of an indoor room containing a table and a few chairs (from the SUN RGB-D val set). You can use 3D visualization software such as MeshLab to open the dumped files under demo_files/sunrgbd_results to see the 3D detection output. Specifically, open ***_pc.ply and ***_pred_confident_nms_bbox.ply to see the input point cloud and the predicted 3D bounding boxes.
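If you prefer to inspect the dumped results programmatically rather than in MeshLab, here is a minimal sketch using the trimesh dependency listed above; the glob pattern simply follows the ***_pc.ply naming described in this section:

# Sketch only (not part of the original repo): list the dumped demo point
# clouds and report how many points each one contains.
import glob
import trimesh

for pc_path in glob.glob('demo_files/sunrgbd_results/*_pc.ply'):
    pc = trimesh.load(pc_path)  # a vertex-only PLY loads as a point cloud
    print(pc_path, 'points:', len(pc.vertices))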

You can also run the following command to use another pretrained model on ScanNet:

python demo.py --dataset scannet --num_point 40000

Detection results will be dumped to demo_files/scannet_results.

Training and evaluating

Data preparation

Please follow the instructions of VoteNet to prepare the datasets.

For SUN RGB-D, follow the README under the sunrgbd folder.

For ScanNet, follow the README under the scannet folder.

Train and test on SUN RGB-D

To train a new model ${MODEL_CONFIG} in the MODEL ZOO on SUN RGB-D data (depth images):

CUDA_VISIBLE_DEVICES=0 python train.py --dataset sunrgbd --log_dir log_sunrgbd --model ${MODEL_CONFIG}

You can use CUDA_VISIBLE_DEVICES=0,1,2 to specify which GPU(s) to use. Without specifying CUDA devices, the training will use all available GPUs and train with data parallelism (note that, due to I/O load, the training speedup is not linear in the number of GPUs used). While training, you can check the log_sunrgbd/log_train.txt file for progress, or use TensorBoard to see loss curves.
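For example, a multi-GPU run followed by TensorBoard monitoring might look like this (a sketch, assuming the TensorBoard event files are written under the training log directory):

CUDA_VISIBLE_DEVICES=0,1,2 python train.py --dataset sunrgbd --log_dir log_sunrgbd --model ${MODEL_CONFIG}
tensorboard --logdir log_sunrgbd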

To test the trained model with its checkpoint:

python eval.py --dataset sunrgbd --checkpoint_path log_sunrgbd/checkpoint.tar --dump_dir eval_sunrgbd --cluster_sampling seed_fps --use_3d_nms --use_cls_nms --per_class_proposal  --model  ${MODEL_CONFIG}

Example results will be dumped in the eval_sunrgbd folder (or any other folder you specify). You can run python eval.py -h to see the full options for evaluation. After the evaluation, you can use MeshLab to visualize the predicted votes and 3D bounding boxes (select wireframe mode to view the boxes). Final evaluation results will be printed on screen and also written to the log_eval.txt file under the dump directory. By default we evaluate with both mAP@0.25 and mAP@0.5 using 3D IoU on oriented boxes.

Train and test on ScanNet

To train a model ${MODEL_CONFIG} in the MODEL ZOO on ScanNet data (fused scan):

CUDA_VISIBLE_DEVICES=0 python train.py --dataset scannet --log_dir log_scannet --num_point 40000 --model  ${MODEL_CONFIG}

To test the trained model with its checkpoint:

python eval.py --dataset scannet --checkpoint_path log_scannet/checkpoint.tar --dump_dir eval_scannet --num_point 40000 --cluster_sampling seed_fps --use_3d_nms --use_cls_nms --per_class_proposal --model  ${MODEL_CONFIG}

Example results will be dumped in the eval_scannet folder (or any other folder you specify).

MODEL ZOO

MODEL SPECS                                  | ${MODEL_CONFIG}             | SUN RGB-D | ScanNet
Group Contextual Encoding (K=8, G=12, C×3)   | votenet_enc_FP2_K8_G12_C3   | 60.7      | 60.8
SA2 - Dense Point Diffusion (3,6,12)         | votenet_SA2_denseaspp3_6_12 | 58.6      | 59.6
SA2 - Dense Point Diffusion (3,6)            | votenet_SA2_denseaspp3_6    | 58.7      | 58.9
VoteNet                                      | votenet (default)           | 57.7      | 58.6

The ablation models in the papers can be derived from the models listed above; therefore, we do not list them all here.
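For example, to train the Group Contextual Encoding model from the table on SUN RGB-D, substitute its config name into the training command shown earlier:

CUDA_VISIBLE_DEVICES=0 python train.py --dataset sunrgbd --log_dir log_sunrgbd --model votenet_enc_FP2_K8_G12_C3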

Train on your own data

[For Pro Users] If you have your own dataset with point clouds and annotated 3D bounding boxes, you can create a new dataset class and train VoteNet on your own data. To ease the process, some tips are provided in this doc.
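As a rough starting point, a custom dataset class might look like the sketch below; the dictionary keys and array shapes are assumptions modeled on VoteNet-style datasets and must be adapted to the fields this codebase actually expects:

# Hypothetical sketch of a custom detection dataset; the keys and shapes are
# assumptions and need to be matched to what the training code expects.
import numpy as np
from torch.utils.data import Dataset

class MyDetectionDataset(Dataset):
    def __init__(self, split='train', num_points=20000):
        self.split = split
        self.num_points = num_points
        self.scan_names = []  # fill with your own scan identifiers

    def __len__(self):
        return len(self.scan_names)

    def __getitem__(self, idx):
        # Load your own point cloud (N, 3) and 3D box annotations here.
        point_cloud = np.zeros((self.num_points, 3), dtype=np.float32)
        return {
            'point_clouds': point_cloud,  # sampled input points
            'center_label': np.zeros((64, 3), dtype=np.float32),  # GT box centers (padded)
            # ... plus the remaining ground-truth fields used by the loss ...
        }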

Acknowledgements

We want to thank Charles Qi for his VoteNet (original codebase), Hang Zhang for his EncNet (original codebase) and Erik Wijmans for his PointNet++ implementation in PyTorch (original codebase).

License

votenet is released under the MIT License. See the LICENSE file for more details.
