This repository is the code release for our 3DV 2019 paper (arXiv report here).
If you use our code or data, please cite:

```
@inproceedings{Gross193DV,
  author = {Johannes Gro{\ss} and Aljo\v{s}a O\v{s}ep and Bastian Leibe},
  title = {AlignNet-3D: Fast Point Cloud Registration of Partially Observed Objects},
  booktitle = {International Conference on 3D Vision (3DV)},
  year = {2019}
}
```
If you use the data, please also cite the original dataset:

```
@inproceedings{Geiger12CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}
```
- Install TensorFlow (tested with version 1.8.0)
- Clone the Open3D fork from https://github.com/grossjohannes/Open3D
- Build Open3D, then build and install the Open3D Python package (see http://www.open3d.org/docs/compilation.html)
- Install the remaining dependencies with
  ```
  pip install -r requirements.txt
  ```
- Run all scripts with Python 3
- Download the datasets:
- Extract the datasets to the same directory, e.g. `/home/gross/data`. The folder structure should look like:
```
data
│
└───SynthCars
│   │
│   └───meta
│   │   │   00000000.json
│   │   │   00000001.json
│   │   │   ...
│   │
│   └───pointcloud1
│   │   │   00000000.npy
│   │   │   00000001.npy
│   │   │   ...
│   │
│   ...
│
└───SynthCarsPersons
│   ...
```
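Given this layout, a sample can be loaded by pairing the meta JSON with the corresponding point cloud array. Here is a minimal sketch, assuming the 8-digit zero-padded file naming shown above and leaving the dataset-specific meta schema opaque (`load_sample` is a hypothetical helper, not part of the repository):

```python
import json
from pathlib import Path

import numpy as np

def load_sample(root, dataset, idx):
    """Load one sample: its meta JSON and the matching point cloud .npy file.

    File naming (8-digit zero-padded indices) follows the folder layout above;
    the contents of the meta JSON are dataset-specific and returned as-is.
    """
    base = Path(root) / dataset
    name = f"{idx:08d}"
    with open(base / "meta" / f"{name}.json") as f:
        meta = json.load(f)
    pointcloud1 = np.load(base / "pointcloud1" / f"{name}.npy")
    return meta, pointcloud1
```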
- Specify your dataset folder (e.g. `/home/gross/data`) in `make_icp_configs.py`
- Prepare the ICP configs by running
  ```
  python make_icp_configs.py
  ```
- Run all ICP evaluations at once with
  ```
  ./eval_icp.sh
  ```
- Adapt `logging.basedir` in `configs/default.json`
- Run e.g.
  ```
  python train.py --config configs/SynthCars.json
  ```
- Models and evaluation results will be written to the specified `logging.basedir`
- For models with pre-training from other models, adapt `training.pretraining.model` in the respective config files (e.g. `configs/KITTITrackletsCars.json`)
- To run the evaluation (again) with an existing model checkpoint, run e.g.
  ```
  python train.py eval_only --config configs/KITTITrackletsCarsHard.json --eval_epoch 28
  ```
- The results will then be in e.g. `/home/gross/models/KITTITrackletsCarsHard/val/eval000028/`. `eval.json` contains the results when the full angle is evaluated, `eval_180.json` the evaluation for the predicted angle/flipped angle closest to the ground truth angle
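The difference between the two metrics can be sketched as follows: the 180°-aware variant scores the predicted angle against both the ground truth and its flip by π, keeping the smaller error, which is useful for near-symmetric objects like cars. This is a hypothetical illustration, not the repository's actual evaluation code:

```python
import math

def angle_error(pred, gt):
    """Absolute angular difference, wrapped into [0, pi]."""
    d = (pred - gt) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def angle_error_180(pred, gt):
    """Error against the ground-truth angle or its 180-degree flip,
    whichever is closer (the idea behind eval_180.json)."""
    return min(angle_error(pred, gt), angle_error(pred + math.pi, gt))
```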
- To run the evaluation (again) with already computed inference outputs (`pred_translations.npy`, `pred_angles.npy`, ...), run e.g.
  ```
  python train.py eval_only --config configs/KITTITrackletsCarsHard.json --eval_epoch 28 --use_old_results
  ```
- To run the evaluation (again) with ICP refinement, run e.g.
  ```
  python train.py eval_only --config configs/KITTITrackletsCarsHard.json --eval_epoch 28 --refineICP --use_old_results
  ```
- The evaluation results are written to a `refined_p2p` subfolder
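For reference, the idea behind point-to-point ICP refinement can be sketched in plain NumPy: nearest-neighbour correspondences followed by a Kabsch/SVD rigid alignment, iterated. This is a simplified, brute-force illustration under stated assumptions, not the Open3D-based implementation the scripts actually call:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_point_to_point(src, dst, iters=20):
    """Iteratively match each source point to its nearest target point and
    re-estimate the rigid transform. Brute-force O(N*M) correspondence search,
    so only suitable for small clouds; returns the accumulated (R, t)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point in cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```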
- Some trained models and evaluation results can be found in `models_alignnet.zip`
Our code is released under the BSD-3 License (see the LICENSE file for details).
- This repository builds upon, and thus borrows code heavily from, PointNet, Frustum PointNets, and DGCNN.