[ICCV 2023] The first DETR model for monocular 3D object detection with depth-guided transformer

MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection

Official implementation of 'MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection'.

The paper has been accepted by ICCV 2023 🎉.

News

  • [2023-08] A More Stable Version 🌟 of MonoDETR on KITTI is now released! 🔥🔥🔥
  • [2022-04] The initial code of MonoDETR on KITTI is released

Introduction

MonoDETR is the first DETR-based model for monocular 3D detection that requires no additional depth supervision, anchors, or NMS. We enable the vanilla transformer in DETR to be depth-guided, achieving scene-level geometric perception. In this way, each object adaptively estimates its 3D attributes from depth-informative regions of the image, rather than being limited to features around the object center.
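The depth-guided decoding described above can be pictured as a cross-attention between object queries and a depth-aware feature map covering the whole image. The following is a minimal illustrative sketch, not the repo's actual module; all names and shapes here are assumptions:

```python
import torch
import torch.nn as nn

class DepthGuidedCrossAttention(nn.Module):
    """Sketch of depth-guided decoding: each object query attends to a
    flattened depth-aware feature map, so it can gather depth cues from
    anywhere in the scene (hypothetical module, not MonoDETR's code)."""
    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead)

    def forward(self, queries, depth_feats):
        # queries:     (num_queries, batch, d_model)
        # depth_feats: (H*W, batch, d_model) -- flattened depth-aware map
        out, _ = self.attn(query=queries, key=depth_feats, value=depth_feats)
        return out

q = torch.randn(50, 2, 256)       # 50 object queries, batch of 2
d = torch.randn(40 * 40, 2, 256)  # a 40x40 depth-aware feature map
out = DepthGuidedCrossAttention()(q, d)
print(out.shape)  # torch.Size([50, 2, 256])
```

Each of the 50 queries produces a depth-informed feature of the same dimension, which downstream heads can use to regress 3D attributes.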

Main Results

Note that the randomness of training for monocular detection can cause a variance of about ±1 AP3D on KITTI.

The official results in the paper (Val, AP3D|R40):

| Models   | Easy   | Mod.   | Hard   |
|----------|--------|--------|--------|
| MonoDETR | 28.84% | 20.61% | 16.38% |

New and better results in this repo (Val, AP3D|R40):

| Models   | Easy   | Mod.   | Hard   | Logs | Ckpts |
|----------|--------|--------|--------|------|-------|
| MonoDETR | 28.79% | 20.83% | 17.47% | log  | ckpt  |
|          | 29.36% | 20.64% | 17.30% | log  | ckpt  |
|          | 27.58% | 20.14% | 16.98% | log  | ckpt  |
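As a quick sanity check on the run-to-run variance mentioned above, averaging the three checkpoints reported in this repo gives:

```python
# Mean AP3D|R40 on KITTI val across the three released checkpoints,
# per difficulty level (Easy, Mod., Hard), using the numbers above.
runs = [
    (28.79, 20.83, 17.47),
    (29.36, 20.64, 17.30),
    (27.58, 20.14, 16.98),
]
means = [round(sum(col) / len(col), 2) for col in zip(*runs)]
print(means)  # [28.58, 20.54, 17.25]
```

The spread between the best and worst run stays within the ±1 AP3D variance noted above.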

Installation

  1. Clone this project and create a conda environment:

    git clone https://github.com/ZrrSkywalker/MonoDETR.git
    cd MonoDETR
    
    conda create -n monodetr python=3.8
    conda activate monodetr
    
  2. Install pytorch and torchvision matching your CUDA version:

    conda install pytorch torchvision cudatoolkit
    # We adopt torch 1.9.0+cu111
  3. Install requirements and compile the deformable attention:

    pip install -r requirements.txt
    
    cd lib/models/monodetr/ops/
    bash make.sh
    
    cd ../../../..
    
  4. Make a directory for saving training logs:

    mkdir logs
    
  5. Download KITTI datasets and prepare the directory structure as:

    │MonoDETR/
    ├──...
    ├──data/KITTIDataset/
    │   ├──ImageSets/
    │   ├──training/
    │   ├──testing/
    ├──...
    

    You can also change the data path at "dataset/root_dir" in configs/monodetr.yaml.
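After arranging the dataset, a quick check like the following can confirm the expected sub-directories are in place. This is a small helper sketch, not part of the repo:

```python
from pathlib import Path

def check_kitti_layout(root):
    """Return the expected KITTI sub-directories that are missing
    under root, mirroring the layout shown above."""
    expected = ["ImageSets", "training", "testing"]
    root = Path(root)
    return [name for name in expected if not (root / name).is_dir()]

missing = check_kitti_layout("data/KITTIDataset")
if missing:
    print("missing:", missing)
```

If you keep the dataset elsewhere, point the check (and "dataset/root_dir" in configs/monodetr.yaml) at that path instead.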

Get Started

Train

You can modify the model and training settings in configs/monodetr.yaml and specify the GPU in train.sh:

bash train.sh configs/monodetr.yaml > logs/monodetr.log

Test

The best checkpoint will be evaluated by default. You can change it at "tester/checkpoint" in configs/monodetr.yaml:

bash test.sh configs/monodetr.yaml
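For reference, the two configuration keys mentioned in this README might sit in configs/monodetr.yaml roughly as follows. This is a hypothetical fragment (the slash notation "dataset/root_dir" suggests this nesting); the real file contains many more settings:

```yaml
# Hypothetical fragment of configs/monodetr.yaml -- only the two keys
# referenced in this README; checkpoint path is a placeholder.
dataset:
  root_dir: data/KITTIDataset      # change if your KITTI copy lives elsewhere
tester:
  checkpoint: <path/to/checkpoint.pth>  # checkpoint evaluated by test.sh
```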

Acknowledgement

This repo benefits from the excellent Deformable-DETR and MonoDLE.

Citation

@article{zhang2022monodetr,
  title={MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection},
  author={Zhang, Renrui and Qiu, Han and Wang, Tai and Xu, Xuanzhuo and Guo, Ziyu and Qiao, Yu and Gao, Peng and Li, Hongsheng},
  journal={ICCV 2023},
  year={2022}
}

Contact

If you have any questions about this project, please feel free to contact [email protected].
