DETR: End-to-End Object Detection with Transformers, arxiv

PaddlePaddle training/validation code and pretrained models for DETR.

The official pytorch implementation is here.

This implementation is developed by PaddleViT.

(Figure: DETR Model Overview)

Update

  • Update (2022-01-21): Code is updated and ported weights are uploaded.
  • Update (2022-01-07): Code is refactored and ported weights are changed, new weights are coming soon.
  • Update (2021-09-01): Code is released and ported weights are uploaded.

Models Zoo

Model  Backbone   box_mAP  Pretrained weights
DETR   ResNet50   42.0     google/baidu(uamk)
DETR   ResNet101  43.5     google/baidu(2veg)

*The results are evaluated on the COCO 2017 validation set.

Notebooks

We provide a few notebooks on AI Studio to help you get started:

*(coming soon)*

Requirements

Data

The COCO2017 dataset is used, organized in the following folder structure:

COCO dataset folder
├── annotations
│   ├── captions_train2017.json
│   ├── captions_val2017.json
│   ├── instances_train2017.json
│   ├── instances_val2017.json
│   ├── person_keypoints_train2017.json
│   └── person_keypoints_val2017.json
├── train2017
│   ├── 000000000009.jpg
│   ├── 000000000025.jpg
│   ├── 000000000030.jpg
│   ├── 000000000034.jpg
│   ...
└── val2017
    ├── 000000000139.jpg
    ├── 000000000285.jpg
    ├── 000000000632.jpg
    ├── 000000000724.jpg
    ...

More details about the COCO dataset can be found here and on the COCO official website.
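
Before launching evaluation or training, it can help to verify that the dataset folder actually matches the layout above. The following stdlib-only sketch does a quick check; coco_root is a placeholder that should point at your local COCO folder:

import os

coco_root = '/path/to/dataset/coco'  # placeholder: set this to your COCO folder

# minimal set of files/folders the scripts below expect to find
expected = [
    'annotations/instances_train2017.json',
    'annotations/instances_val2017.json',
    'train2017',
    'val2017',
]

for rel_path in expected:
    full_path = os.path.join(coco_root, rel_path)
    status = 'ok' if os.path.exists(full_path) else 'MISSING'
    print(f'{status:8s} {full_path}')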

Usage

To use the model with pretrained weights, download the .pdparams weight file and update the related file paths in the following Python scripts. The model config files are located in ./configs/.

For example, assuming the downloaded weight file is stored in ./detr_resnet50.pdparams, to use the DETR model in Python:

import paddle
from config import get_config
from detr import build_detr
# config files in ./configs/
config = get_config('./configs/detr_resnet50.yaml')
# build model, loss criterion, and postprocessors
model, criterion, postprocessors = build_detr(config)
# load pretrained weights
model_state_dict = paddle.load('./detr_resnet50.pdparams')
model.set_dict(model_state_dict)
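
Once the weights are loaded, a forward pass can be run to obtain detections. The sketch below is illustrative rather than definitive: it assumes the model's forward pass accepts a plain batched image tensor and that postprocessors contains a 'bbox' entry that converts raw outputs into scaled boxes, scores, and labels; check detr.py and the evaluation code for the exact input format (e.g. whether an image mask is also required). The file sample.jpg is a placeholder.

import paddle
import paddle.vision.transforms as T
from PIL import Image

# standard DETR-style preprocessing: resize + ImageNet normalization
transform = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open('sample.jpg').convert('RGB')   # placeholder input image
x = paddle.unsqueeze(transform(image), axis=0)    # shape [1, 3, H, W]

model.eval()
with paddle.no_grad():
    outputs = model(x)  # assumption: a plain tensor is accepted; see detr.py

# assumption: postprocessors['bbox'] rescales predictions to image coordinates
orig_size = paddle.to_tensor([[image.height, image.width]], dtype='float32')
results = postprocessors['bbox'](outputs, orig_size)
print(results[0]['scores'], results[0]['labels'], results[0]['boxes'])  # assumed output keys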

Evaluation

To evaluate the DETR model's performance on COCO2017 with a single GPU, run the following script from the command line:

sh run_eval.sh

or

CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
    -cfg=./configs/detr_resnet50.yaml \
    -dataset=coco \
    -batch_size=4 \
    -data_path=/path/to/dataset/coco/val \
    -eval \
    -pretrained=/path/to/pretrained/model/detr_resnet50  # .pdparams is NOT needed

Run evaluation using multiple GPUs:

sh run_eval_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg=./configs/detr_resnet50.yaml \
    -dataset=coco \
    -batch_size=4 \
    -data_path=/path/to/dataset/coco/val \
    -eval \
    -pretrained=/path/to/pretrained/model/detr_resnet50  # .pdparams is NOT needed

Training

To train the DETR model on COCO2017 with a single GPU, run the following script from the command line:

sh run_train.sh

or

CUDA_VISIBLE_DEVICES=1 \
python main_single_gpu.py \
    -cfg=./configs/detr_resnet50.yaml \
    -dataset=coco \
    -batch_size=2 \
    -data_path=/path/to/dataset/coco/train

Run training using multiple GPUs (coming soon):

sh run_train_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg=./configs/detr_resnet50.yaml \
    -dataset=coco \
    -batch_size=2 \
    -data_path=/path/to/dataset/coco/train

Visualization

coming soon

Reference

@inproceedings{carion2020end,
  title={End-to-end object detection with transformers},
  author={Carion, Nicolas and Massa, Francisco and Synnaeve, Gabriel and Usunier, Nicolas and Kirillov, Alexander and Zagoruyko, Sergey},
  booktitle={European Conference on Computer Vision},
  pages={213--229},
  year={2020},
  organization={Springer}
}