IDA-3D: Instance-Depth-Aware 3D Object Detection from Stereo Vision for Autonomous Driving (CVPR2020)
This repository contains the code for our CVPR 2020 paper.
The implementation is based on maskrcnn-benchmark; see INSTALL.md for installation instructions.
We tested this code under Python 3.7, PyTorch 1.1.0, and CUDA 10.0 on Ubuntu 18.04. We also provide an off-the-shelf running environment based on Singularity and Anaconda. You can download it directly from here and here and run the following commands.
unzip env_IDA3D.zip -d ~/anaconda3/envs/
# Activating Singularity and Anaconda environment
singularity shell --nv ubuntu18-cuda10.simg
source ~/anaconda3/bin/activate tdrcnn
# Installing apex
git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install --cuda_ext --cpp_ext
# Note: The latest version of apex may not be compatible with our environment; you can download an older version from https://drive.google.com/file/d/1P-ym84EOlAqDS1tb1a0SseIM5ljpxzyC/view?usp=sharing
# Installing PyTorch Detection
git clone https://github.com/swords123/IDA-3D.git
cd IDA-3D
python setup.py build develop
We provide experiments on the KITTI-3D benchmark. You can directly download our processed dataset and place it into IDA-3D/datasets/. The data folder should be organized as follows:
datasets
└── kitti
├── calib
├── image_2
├── image_3
├── label_2
├── label_3d
└── splits
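As a quick sanity check before training, here is a minimal sketch (the helper name and the root path are our assumptions, not part of the repo) that verifies the layout above is in place:

```python
import os

# Expected subdirectories under datasets/kitti, per the layout above.
EXPECTED = ["calib", "image_2", "image_3", "label_2", "label_3d", "splits"]

def missing_kitti_dirs(root):
    """Return the expected subdirectories that are missing under `root`."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    missing = missing_kitti_dirs("datasets/kitti")
    if missing:
        print("Missing folders:", ", ".join(missing))
    else:
        print("Dataset layout looks good.")
```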
For "image_2" and "image_3", we flip the images in the training set and exchange the left and right images for data augmentation. You can do this as follows:
import os
import cv2

# Mirror each view and swap sides: the flipped right image becomes a new left image, and vice versa.
im_left_flip = cv2.flip(im_right, 1)   # im_right: an image from "image_3"
im_right_flip = cv2.flip(im_left, 1)   # im_left: an image from "image_2"
cv2.imwrite(os.path.join('image_2', index + '_flip.png'), im_left_flip)
cv2.imwrite(os.path.join('image_3', index + '_flip.png'), im_right_flip)
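The flip-and-swap step can also be written as a pure function, which makes it easy to test without touching the filesystem. A minimal sketch (the function name is our assumption; numpy's `[:, ::-1]` slice is equivalent to `cv2.flip(im, 1)` for a horizontal flip):

```python
import numpy as np

def flip_stereo_pair(im_left, im_right):
    """Horizontally mirror both views and swap sides: the mirrored right
    image becomes a new left image, and vice versa."""
    return im_right[:, ::-1].copy(), im_left[:, ::-1].copy()
```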
Activate the Singularity and Anaconda environments, set the corresponding parameters in ./train.sh, and simply run:
./train.sh
We provide a pretrained model for the car category, which you can download from here. You can evaluate the performance using either our provided model or your own trained model by setting the corresponding parameters in ./test.sh and simply running:
./test.sh
Then, generate the standard test files:
cd tools
python generate_results.py
Finally, run the evaluation:
./kitti_eval/evaluate_object datasets/kitti/label_2 self_exp/exp_1/kitti_test/result_xxx
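The generated result files follow the standard KITTI object format: the 15 label fields plus a trailing detection score, one object per line. As a hedged sketch for inspecting them (the helper name is our assumption; the field layout follows the KITTI object development kit):

```python
def parse_kitti_result(line):
    """Parse one line of a KITTI-format result file into a dict."""
    f = line.split()
    return {
        "type": f[0],                               # object class, e.g. "Car"
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),                       # observation angle
        "bbox": [float(v) for v in f[4:8]],         # left, top, right, bottom (px)
        "dimensions": [float(v) for v in f[8:11]],  # height, width, length (m)
        "location": [float(v) for v in f[11:14]],   # x, y, z in camera coords (m)
        "rotation_y": float(f[14]),                 # yaw angle
        "score": float(f[15]),                      # detection confidence
    }
```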
This repo is built on maskrcnn-benchmark.
If you find this work useful for your research, please consider citing our paper:
@InProceedings{Peng_2020_CVPR,
author = {Peng, Wanli and Pan, Hao and Liu, He and Sun, Yi},
title = {IDA-3D: Instance-Depth-Aware 3D Object Detection From Stereo Vision for Autonomous Driving},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
Our code is released under the MIT license.