English | 简体中文

FCOSR

Content

  • Introduction
  • Model Zoo
  • Getting Started
  • Deployment
  • Citations

Introduction

FCOSR is a one-stage anchor-free model based on FCOS. FCOSR focuses on the label assignment strategy for oriented bounding boxes, proposing an ellipse center sampling method and a fuzzy sample assignment strategy. For the loss, FCOSR uses ProbIoU to avoid the boundary discontinuity problem.
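To make the label assignment idea more concrete, below is a minimal NumPy sketch of an elliptical sampling test: a location is kept as a candidate positive only if it falls inside an ellipse inscribed in the (optionally shrunk) oriented box. The function name, the shrink factor eps, and the box parameterization are illustrative assumptions and do not mirror the repository's implementation.

import numpy as np

def in_ellipse_region(points, obb, eps=1.0):
    # points: (N, 2) feature-map locations mapped back to image coordinates
    # obb:    (cx, cy, w, h, angle), angle in radians (parameterization is an assumption)
    # eps:    shrink factor for the sampling ellipse (illustrative value)
    cx, cy, w, h, angle = obb
    dx, dy = points[:, 0] - cx, points[:, 1] - cy
    # rotate the offsets into the box's local coordinate frame
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    local_x = dx * cos_a + dy * sin_a
    local_y = -dx * sin_a + dy * cos_a
    # semi-axes of the ellipse inscribed in the shrunk box
    a, b = eps * w / 2.0, eps * h / 2.0
    return (local_x / a) ** 2 + (local_y / b) ** 2 <= 1.0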

Model Zoo

| Model | Backbone | mAP | Lr Scheduler | Angle | Aug | GPU Number | images/GPU | download | config |
|:-----:|:--------:|:---:|:------------:|:-----:|:---:|:----------:|:----------:|:--------:|:------:|
| FCOSR-M | ResNeXt-50 | 76.62 | 3x | oc | RR | 4 | 4 | model | config |

Notes:

  • If the number of GPUs or the mini-batch size is changed, the learning rate should be adjusted according to the formula lr_new = lr_default * (batch_size_new * GPU_number_new) / (batch_size_default * GPU_number_default); a worked example follows these notes.
  • Models in the model zoo are trained and tested with single scale by default. If MS is indicated in the data augmentation column, multi-scale training and multi-scale testing are used. If RR is indicated in the data augmentation column, the RandomRotate data augmentation is used during training.
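For instance, taking the FCOSR-M setting above (4 GPUs, 4 images per GPU) as the default, the linear scaling rule works out as follows; the default learning rate below is an assumed example value, please check the config file for the actual default.

lr_default = 0.01                        # assumed example value, see the config for the real default
batch_size_default, gpu_default = 4, 4   # defaults from the model zoo table above
batch_size_new, gpu_new = 4, 1           # e.g. training on a single GPU with 4 images
lr_new = lr_default * (batch_size_new * gpu_new) / (batch_size_default * gpu_default)
print(lr_new)                            # 0.0025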

Getting Started

Refer to Data-Preparation to prepare the data.

Training

Single GPU Training

CUDA_VISIBLE_DEVICES=0 python tools/train.py -c configs/rotate/fcosr/fcosr_x50_3x_dota.yml

Multiple GPUs Training

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/rotate/fcosr/fcosr_x50_3x_dota.yml

Inference

Run the following command to run inference on a single image. The result will be saved in the output directory by default.

python tools/infer.py -c configs/rotate/fcosr/fcosr_x50_3x_dota.yml -o weights=https://paddledet.bj.bcebos.com/models/fcosr_x50_3x_dota.pdparams --infer_img=demo/P0861__1.0__1154___824.png --draw_threshold=0.5

Evaluation on DOTA Dataset

Referring to the DOTA Task, you need to submit a zip file containing the results for all test images for evaluation. The detection results of each category are stored in a txt file, each line of which is in the following format: image_id score x1 y1 x2 y2 x3 y3 x4 y4. To evaluate, you should submit the generated zip file to Task1 of the DOTA Evaluation server. You can run the following command to get the inference results on the test dataset:

python tools/infer.py -c configs/rotate/fcosr/fcosr_x50_3x_dota.yml -o weights=https://paddledet.bj.bcebos.com/models/fcosr_x50_3x_dota.pdparams --infer_dir=/path/to/test/images --output_dir=output_fcosr --visualize=False --save_results=True

Process the prediction results into the format required by the official evaluation server:

python configs/rotate/tools/generate_result.py --pred_txt_dir=output_fcosr/ --output_dir=submit/ --data_type=dota10

zip -r submit.zip submit
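Each line in the generated per-category txt files follows the image_id score x1 y1 x2 y2 x3 y3 x4 y4 format described above. The snippet below only illustrates that layout; the image id, score, and corner coordinates are made-up example values.

# format a single detection as one submission line (all values are illustrative)
image_id, score = "P0861", 0.95
corners = [(120.0, 64.5), (180.2, 70.1), (175.8, 130.6), (115.6, 125.0)]  # (x, y) of the four corners
line = " ".join([image_id, f"{score:.3f}"] + [f"{v:.1f}" for xy in corners for v in xy])
print(line)  # P0861 0.950 120.0 64.5 180.2 70.1 175.8 130.6 115.6 125.0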

Deployment

Please refer to the deployment tutorial: Deployment.

Citations

@article{li2021fcosr,
  title={Fcosr: A simple anchor-free rotated detector for aerial object detection},
  author={Li, Zhonghua and Hou, Biao and Wu, Zitong and Jiao, Licheng and Ren, Bo and Yang, Chen},
  journal={arXiv preprint arXiv:2111.10780},
  year={2021}
}

@inproceedings{tian2019fcos,
  title={Fcos: Fully convolutional one-stage object detection},
  author={Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},
  booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
  pages={9627--9636},
  year={2019}
}

@article{llerena2021gaussian,
  title={Gaussian Bounding Boxes and Probabilistic Intersection-over-Union for Object Detection},
  author={Llerena, Jeffri M and Zeni, Luis Felipe and Kristen, Lucas N and Jung, Claudio},
  journal={arXiv preprint arXiv:2106.06072},
  year={2021}
}