This is the code for SPFlowNet, a self-supervised framework for 3D scene flow estimation from point clouds with dynamic superpoints. The code is created by Yaqi Shen ([email protected]).
This repository contains the source code and pre-trained models for SPFlowNet (published at CVPR 2023).
Our model is trained and tested under:
- Python 3.6.6
- CUDA 10.1
- PyTorch (torch == 1.4.0)
- scipy
- tqdm
- sklearn
- numba
- cffi
- yaml
- h5py
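Before building the ops, a quick environment check can save debugging time (a minimal sketch; only the PyTorch and CUDA versions listed above are assumed):

```python
# Minimal environment sanity check (a sketch): confirms the PyTorch / CUDA pairing
# listed above before compiling the custom ops.
import torch

print("torch:", torch.__version__)         # expected 1.4.0
print("CUDA build:", torch.version.cuda)   # expected 10.1
print("CUDA available:", torch.cuda.is_available())
```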
Build the ops. We use the point operations from this repo:
cd lib/pointops && python setup.py install && cd ../../
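After installation, the compiled extension should be importable (a sketch; the extension name pointops_cuda is an assumption based on common pointops setups and may differ in this repository):

```python
# Sanity check for the compiled ops (assumption: the CUDA extension built by
# lib/pointops/setup.py is named pointops_cuda; adjust the import if it differs).
import pointops_cuda  # noqa: F401

print("pointops CUDA extension imported successfully")
```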
FlyingThings3D
For fair comparison with previous methods, we adopt the preprocessing steps in HPLFlowNet.
Download and unzip the "Disparity", "Disparity Occlusions", "Disparity change", "Optical flow", and "Flow Occlusions" subsets of the DispNet/FlowNet2.0 dataset from the FlyingThings3D website (we used the paths from this file; torrent downloads are now also available). Unzip them all into the same directory, RAW_DATA_PATH. Then run the following script for 3D reconstruction:
python data_preprocess/process_flyingthings3d_subset.py --raw_data_path RAW_DATA_PATH --save_path SAVE_PATH/FlyingThings3D_subset_processed_35m --only_save_near_pts
This dataset is denoted FT3Ds in our paper.
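To verify the preprocessing output, one processed frame pair can be loaded directly (a sketch; the train/0000000 folder and the pc1.npy / pc2.npy file names are assumptions based on HPLFlowNet-style preprocessing and may differ):

```python
# Load one preprocessed FlyingThings3D frame pair (a sketch; folder and file names
# are assumptions based on HPLFlowNet-style preprocessing).
import os
import numpy as np

pair_dir = os.path.join("SAVE_PATH", "FlyingThings3D_subset_processed_35m", "train", "0000000")
pc1 = np.load(os.path.join(pair_dir, "pc1.npy"))  # (N, 3) points at frame t
pc2 = np.load(os.path.join(pair_dir, "pc2.npy"))  # (N, 3) points at frame t+1
print(pc1.shape, pc2.shape)
```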
KITTI Scene Flow 2015
- KITTIs dataset
For fair comparison with previous methods, we adopt the preprocessing steps in HPLFlowNet.
Download and unzip the KITTI Scene Flow Evaluation 2015 dataset to the directory RAW_DATA_PATH. Then run the following script for 3D reconstruction:
python data_preprocess/process_kitti.py RAW_DATA_PATH SAVE_PATH/KITTI_processed_occ_final
This dataset is denoted KITTIs in our paper.
- KITTIo dataset
For fair comparison with previous methods, we adopt the preprocessing steps in FlowNet3D.
Download and unzip the data processed by FlowNet3D to the directory SAVE_PATH. This dataset is denoted KITTIo in our paper.
- KITTIr dataset
Following RigidFlow, we also use raw data from KITTI for self-supervised scene flow learning. The unlabeled training data provided by RigidFlow can be found here. This dataset is denoted KITTIr in our paper.
Here are some demo results on the KITTI dataset.
Train on non-occluded data
Set data_root in config_train_FT3D.yaml to the SAVE_PATH from the data preprocessing section. Then run
python train_FT3D.py ./configs_without_occlusions/config_train_FT3D.yaml
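Before a long training run, it can help to confirm that the config actually points at the processed data (a sketch; only the data_root key is taken from the instructions above):

```python
# Check that data_root in the training config points at the processed dataset
# (a sketch; only the data_root key is taken from the instructions above).
import os
import yaml

with open("./configs_without_occlusions/config_train_FT3D.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["data_root"], os.path.isdir(cfg["data_root"]))  # should print SAVE_PATH and True
```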
Train on occluded data
Similarly, specify data_root in config_train_KITTI_r.yaml. Then run
python train_KITTI_r.py ./configs_with_occlusions/config_train_KITTI_r.yaml
We provide our pretrained models in pretrained.
Evaluate on non-occluded data
Set data_root in config_evaluate.yaml to the SAVE_PATH from the data preprocessing section, and specify dataset in the same configuration file. Then run
python evaluate.py ./configs_without_occlusions/config_evaluate.yaml
Evaluate on occluded data
Set data_root and dataset in the configuration file config_evaluate_occ.yaml. Then run
python evaluate_occ.py ./configs_with_occlusions/config_evaluate_occ.yaml
We redefine the encoding of the current iteration information in our flow refinement module, which yields better experimental results (model_v2.py).
| Model | EPE (m) | AS (%) | AR (%) | Out (%) | pre-trained |
|---|---|---|---|---|---|
| SPFlowNet | 0.0606 | 68.34 | 90.74 | 38.76 | SPFlowNet_without_occ |
| SPFlowNet-V2 | 0.0532 | 73.54 | 92.61 | 33.91 | SPFlowNet_without_occ_v2 |

| Model | EPE (m) | AS (%) | AR (%) | Out (%) | pre-trained |
|---|---|---|---|---|---|
| SPFlowNet | 0.0362 | 87.24 | 95.79 | 17.71 | SPFlowNet_without_occ |
| SPFlowNet-V2 | 0.0316 | 89.49 | 96.58 | 15.83 | SPFlowNet_without_occ_v2 |

| Model | EPE (m) | AS (%) | AR (%) | Out (%) | pre-trained |
|---|---|---|---|---|---|
| SPFlowNet | 0.086 | 61.1 | 82.4 | 39.1 | SPFlowNet_with_occ |
| SPFlowNet-V2 | 0.070 | 71.7 | 86.2 | 32.1 | SPFlowNet_with_occ_v2 |
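The metrics above are the standard scene flow measures: EPE (3D end-point error, in meters) and AS, AR, Out (accuracy and outlier ratios, in percent). A sketch of how they are commonly computed, following the usual FlowNet3D/HPLFlowNet conventions:

```python
# Standard scene flow metrics (a sketch following the common FlowNet3D/HPLFlowNet
# definitions): EPE3D, Acc3D Strict (AS), Acc3D Relax (AR), and Outliers (Out).
import numpy as np

def scene_flow_metrics(pred_flow, gt_flow):
    """pred_flow, gt_flow: (N, 3) arrays of per-point 3D flow vectors."""
    err = np.linalg.norm(pred_flow - gt_flow, axis=1)            # per-point end-point error
    rel_err = err / (np.linalg.norm(gt_flow, axis=1) + 1e-20)    # relative error

    epe3d = err.mean()                                           # EPE (m)
    acc_strict = np.mean((err < 0.05) | (rel_err < 0.05)) * 100  # AS (%)
    acc_relax = np.mean((err < 0.1) | (rel_err < 0.1)) * 100     # AR (%)
    outliers = np.mean((err > 0.3) | (rel_err > 0.1)) * 100      # Out (%)
    return epe3d, acc_strict, acc_relax, outliers
```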
If you find our work useful in your research, please consider citing:
@inproceedings{shen2023self,
title={Self-Supervised 3D Scene Flow Estimation Guided by Superpoints},
author={Shen, Yaqi and Hui, Le and Xie, Jin and Yang, Jian},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5271--5280},
year={2023}
}
Our code builds on flownet3d_pytorch, FLOT, PointPWC, HPLFlowNet, FlowStep3D, RigidFlow, and SPNet. We thank the authors of these open-source projects.