PCAccumulation
This repository represents the official implementation of the ECCV 2022 paper Dynamic 3D Scene Analysis by Point Cloud Accumulation:

Shengyu Huang, Zan Gojcic, Jiahui Huang, Andreas Wieser, Konrad Schindler
| ETH Zurich | NVIDIA Toronto AI Lab | BNRist |

Contact

If you have any questions, please let me know.

Instructions

This code has been tested on:

  • Python 3.10.4, PyTorch 1.12.0+cu116, CUDA 11.6, gcc 11.2.0, GeForce RTX 3090
  • Python 3.8.3, PyTorch 1.10.2+cu111, CUDA 11.1, gcc 9.4.0, GeForce RTX 3090
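
If your setup differs, the following quick check (a minimal sketch, not part of the original instructions) prints the Python/PyTorch/CUDA/GPU combination that is actually active:

# Environment check: prints the versions that the tested configurations above refer to.
import sys
import torch

print("Python:", sys.version.split()[0])            # e.g. 3.10.4
print("PyTorch:", torch.__version__)                 # e.g. 1.12.0+cu116
print("CUDA (PyTorch build):", torch.version.cuda)   # e.g. 11.6
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))     # e.g. GeForce RTX 3090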

Requirements

Please adjust the package versions below according to your CUDA version, then run the following to create a virtual environment and install the dependencies:

virtualenv venv_pcaccumulation
source venv_pcaccumulation/bin/activate
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git@v1.4.0
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.12.0+cu116.html
pip install pyfilter nestargs
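
After installation, a quick import check (a minimal sketch; it assumes the mit-han-lab package installed above is torchsparse and that torch-scatter was built against the same PyTorch/CUDA combination) can confirm the compiled dependencies load correctly:

# Dependency sanity check (illustrative only).
import torch
import torch_scatter   # from the PyG wheel index used above
import torchsparse     # assumed import name of the mit-han-lab package

print("torch", torch.__version__, "| CUDA", torch.version.cuda)
print("torch_scatter and torchsparse imported successfully")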

Then clone our repository by running:

git clone https://github.com/prs-eth/PCAccumulation.git
cd PCAccumulation

Datasets and pretrained models

We provide preprocessed Waymo and nuScenes datasets. The preprocessed datasets and checkpoints can be downloaded by running:

wget --no-check-certificate --show-progress https://share.phys.ethz.ch/~gsg/PCAccumulation/data.zip
unzip data.zip
wget --no-check-certificate --show-progress https://share.phys.ethz.ch/~gsg/PCAccumulation/checkpoints.zip
unzip checkpoints.zip
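
Before running the evaluation, a quick check that the files referenced by the commands below are in place may save a failed run (a minimal sketch; the default data location is an assumption, point it at wherever data.zip was extracted):

# Verify checkpoints and dataset folder (illustrative only).
import os

for ckpt in ["checkpoints/waymo.pth", "checkpoints/nuscene.pth"]:
    print(ckpt, "found" if os.path.isfile(ckpt) else "MISSING")

# $YOUR_DATASET_FOLDER in the commands below should point at the extracted data.
dataset_base = os.environ.get("YOUR_DATASET_FOLDER", "data")  # "data" is an assumed default
print(dataset_base, "found" if os.path.isdir(dataset_base) else "MISSING")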

Evaluation

Val

To quickly run a sanity check of the data, code, and checkpoints on the validation split, please run

python main.py configs/waymo/waymo.yaml 10 1 --misc.mode=val --misc.pretrain=checkpoints/waymo.pth --path.dataset_base_local=$YOUR_DATASET_FOLDER

or

python main.py configs/nuscene/nuscene.yaml 10 1 --misc.mode=val --misc.pretrain=checkpoints/nuscene.pth --path.dataset_base_local=$YOUR_DATASET_FOLDER

You will see evaluation metrics similar to the following:

Successfully load pretrained model from checkpoints/nuscene.pth at epoch 77!
Current best loss 1.3937173217204215
Current best metric 0.8779626780515821
val Epoch: 0	mos_iou: 0.880	mos_recall: 0.942	mos_precision: 0.930	fb_iou: 0.856	fb_recall: 0.918	fb_precision: 0.918	ego_l1_loss: 0.161	ego_l2_loss: 0.119	ego_rot_error: 0.227	ego_trans_error: 0.100	perm_loss: 0.010	fb_loss: 0.341	mos_loss: 0.401	offset_loss: 0.329	offset_l1_loss: 0.531	offset_dir_loss: 0.127	offset_l2_error: 0.436	obj_loss: 0.139	inst_l2_error: 0.214	dynamic_inst_l2_error: 0.268	loss: 1.378	
static:  IoU: 0.929,  Recall: 0.954,  Precision: 0.972 
dynamic:  IoU: 0.832,  Recall: 0.93,  Precision: 0.887 
background:  IoU: 0.974,  Recall: 0.987,  Precision: 0.987 
foreground:  IoU: 0.737,  Recall: 0.849,  Precision: 0.849 

The same metrics are also logged to snapshot/nuscene/log.

Test

To evaluate on the held-out test set, please run

python main.py configs/waymo/waymo.yaml 1 1 --misc.mode=test --misc.pretrain=checkpoints/waymo.pth --path.dataset_base_local=$YOUR_DATASET_FOLDER

This will save per-scene flow estimates and errors to results/waymo. Next, please run the following script to obtain the final evaluation:

python toolbox/evaluation.py results/waymo waymo

Citation

If you find this code useful for your work or use it in your project, please consider citing:

@inproceedings{huang2022accumulation,
  title={Dynamic 3D Scene Analysis by Point Cloud Accumulation},
  author={Shengyu Huang and Zan Gojcic and Jiahui Huang and Andreas Wieser and Konrad Schindler},
  booktitle={European Conference on Computer Vision, ECCV},
  year={2022}
}

Acknowledgements

In this project we use (parts of) the following repositories:

We thank the respective developers for open-sourcing and maintaining them. We would also like to thank reviewers 1 and 2 for their valuable input.
