Wencan Cheng and Jong Hwan Ko
Our model is trained and tested under:
- Python 3.6.9
- NVIDIA GPU + CUDA CuDNN
- PyTorch (torch == 1.6.0)
- scipy
- tqdm
- sklearn
- numba
- cffi
- pypng
- pptk
- thop
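The packages above can typically be installed with pip; here is a minimal sketch (the PyPI package names are our assumption and may differ from the import names, e.g. sklearn is published as scikit-learn):

```bash
# Sketch: install the Python dependencies with pip.
# PyPI names can differ from import names (sklearn -> scikit-learn).
pip install torch==1.6.0 scipy tqdm scikit-learn numba cffi pypng pptk thop
```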
Please follow this repo or the instructions below to compile the furthest point sampling, grouping, and gathering operations for PyTorch.
```bash
cd pointnet2
python setup.py install
cd ../
```
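After installation you can sanity-check the build. The module name below is an assumption based on common pointnet2 builds; adjust it to whatever name setup.py reports during installation:

```bash
# Sanity check (module name is an assumption; match it to the name
# printed by setup.py during installation).
python -c "import pointnet2_utils; print('pointnet2 ops imported OK')"
```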
We adopt the same preprocessing steps as HPLFlowNet and PointPWCNet, and copy their preprocessing instructions here for your convenience. A worked example with placeholder paths follows the list.
- FlyingThings3D:
Download and unzip the "Disparity", "Disparity Occlusions", "Disparity change", "Optical flow", "Flow Occlusions" for DispNet/FlowNet2.0 dataset subsets from the FlyingThings3D website (we used the paths from this file, now they added torrent downloads)
. They will be upzipped into the same directory,
RAW_DATA_PATH
. Then run the following script for 3D reconstruction:
```bash
python3 data_preprocess/process_flyingthings3d_subset.py --raw_data_path RAW_DATA_PATH --save_path SAVE_PATH/FlyingThings3D_subset_processed_35m --only_save_near_pts
```
- KITTI Scene Flow 2015:
Download and unzip KITTI Scene Flow Evaluation 2015 to the directory `RAW_DATA_PATH`. Then run the following script for 3D reconstruction:
```bash
python3 data_preprocess/process_kitti.py RAW_DATA_PATH SAVE_PATH/KITTI_processed_occ_final
```
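As a worked example, the two preprocessing commands with the placeholders filled in might look like this (all paths below are illustrative, not prescribed by the repo):

```bash
# Illustrative paths only -- substitute your own locations.
RAW_FT3D=/data/FlyingThings3D_subset        # unzipped FlyingThings3D subsets
RAW_KITTI=/data/KITTI_scene_flow_2015       # unzipped KITTI Scene Flow 2015
SAVE_PATH=/data/sceneflow_processed         # output root for both datasets

python3 data_preprocess/process_flyingthings3d_subset.py \
    --raw_data_path $RAW_FT3D \
    --save_path $SAVE_PATH/FlyingThings3D_subset_processed_35m \
    --only_save_near_pts
python3 data_preprocess/process_kitti.py $RAW_KITTI $SAVE_PATH/KITTI_processed_occ_final
```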
Before evaluation, set `data_root` in the configuration file to the `SAVE_PATH` used in the data preprocessing step.
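A minimal sketch of the corresponding entry in the YAML configuration (the key name follows this README; the value is a placeholder):

```yaml
# config_evaluate_bid_pointconv.yaml / config_train_bid_pointconv.yaml
data_root: /data/sceneflow_processed   # the SAVE_PATH used during preprocessing
```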
We provide a pretrained model in `pretrain_weights`. Please run the following command for evaluation:
```bash
python3 evaluate_bid_pointconv.py config_evaluate_bid_pointconv.yaml
```
If you need a newly trained model, first set `data_root` in the configuration file to the `SAVE_PATH` used in the data preprocessing step, then execute the following command:
```bash
python3 train_bid_pointconv.py config_train_bid_pointconv.yaml
```
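On a multi-GPU machine you can pin the run to a single device with the standard CUDA environment variable, for example:

```bash
# Restrict training to GPU 0 via the standard CUDA environment variable.
CUDA_VISIBLE_DEVICES=0 python3 train_bid_pointconv.py config_train_bid_pointconv.yaml
```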
We thank this repo for the coarse-to-fine framework.