This repository is a development version of "Position Guided Dynamic Receptive Field Network: A Small Object Detection Friendly to Optical and SAR Images". The official version has been released at: https://github.com/BJUT-AIVBD/PG-DRFNet

Position Guided Dynamic Receptive Field Network: A Small Object Detection Friendly to Optical and SAR Images

Introduction

This repository is the official implementation of "Position Guided Dynamic Receptive Field Network: A Small Object Detection Friendly to Optical and SAR Images" at [Please stay tuned!]

The master branch is built on MMRotate which works with PyTorch 1.8+.

PG-DRFNet's train/test configure files are placed under configs/PG-DRFNet/

Instructions for utilizing the dynamic perception of PG-DRFNet can be found here.

Deep Learning Experiments

Source of Pre-trained models

  • CSPNeXt-m: pre-trained checkpoint provided by OpenMMLab (link).
  • ResNet: pre-trained ResNet50 provided by PyTorch.

Results and models

1. DOTA-V2.0

| Model | mAP | Angle | lr schd | Batch Size | Configs | Download |
| :--- | :---: | :---: | :---: | :---: | :--- | :--- |
| RTMDet-M | 57.71 | le90 | 6x | 8 | - | model |
| PG-DRFNet | 59.01 | le90 | 6x | 8 | pg_drfnet-6x-dota2 | model \| log |

*NOTE: Since DOTA-V2.0 requires online evaluation, the mAP shown in the log is only a reference value, obtained by randomly holding out 20% of the training set for validation. The final model was submitted for online testing at DOTA.
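The note above describes holding out a random 20% of the training set for offline validation. A minimal sketch of such a split is shown below; the image IDs are hypothetical, and this is an illustration of the idea rather than the repository's own split script.

```python
# Hedged sketch: randomly hold out a fraction of the training image IDs
# for offline validation (image IDs here are hypothetical placeholders).
import random


def split_train_val(image_ids, val_fraction=0.2, seed=0):
    """Shuffle the IDs with a fixed seed and split off a validation set."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # seeded for reproducibility
    n_val = int(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]  # (train_ids, val_ids)


# Example: 100 images, 20% held out for validation.
train_ids, val_ids = split_train_val([f"P{i:04d}" for i in range(100)])
print(len(train_ids), len(val_ids))  # → 80 20
```

A fixed seed keeps the split reproducible across runs, so the reference mAP stays comparable between experiments.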

2. VEDAI

| Model | mAP | Angle | lr schd | Batch Size | Configs | Download |
| :--- | :---: | :---: | :---: | :---: | :--- | :--- |
| PG-DRFNet | 84.06 | le90 | 12x | 4 | pg_drfnet-12x-vedai | model \| log |

For example, to train PG-DRFNet on DOTA-V2.0, run the following:

```shell
python tools/train.py \
  --config configs/PG-DRFNet/pg_drfnet-6x-dota2.py \
  --work-dir work_dirs/PG-DRFNet \
  --load_from path/to/pre-trained/model
```
If you want to submit the DOTA-V2.0 results for online evaluation, run the following:

```shell
python tools/test.py \
  --config configs/PG-DRFNet/pg_drfnet-6x-dota2.py \
  --checkpoint path/to/PG-DRFNet/model.pth \
  --cfg-options test_dataloader.dataset.ann_file='' \
    test_dataloader.dataset.data_prefix.img_path=test/images/ \
    test_evaluator.format_only=True \
    test_evaluator.merge_patches=True \
    test_evaluator.outfile_prefix='path/to/save_dir'
```

Hyperparameters Configuration

Detailed hyperparameter configurations can be found in configs/_base_/ and configs/PG-DRFNet/.
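Any value in these config files can also be overridden from the command line via `--cfg-options`, as in the test command above: a dotted key such as `test_dataloader.dataset.ann_file` addresses a nested field of the config. The sketch below illustrates the idea of that dotted-key merge; it is a simplified stand-in, not the MMEngine implementation.

```python
# Illustrative sketch (not MMEngine's code): how dotted --cfg-options keys
# like "test_dataloader.dataset.ann_file" map onto a nested config dict.
def apply_cfg_options(cfg: dict, options: dict) -> dict:
    """Merge {dotted.key: value} overrides into a nested config dict."""
    for dotted_key, value in options.items():
        *parents, leaf = dotted_key.split(".")
        node = cfg
        for key in parents:
            node = node.setdefault(key, {})  # create missing levels
        node[leaf] = value
    return cfg


cfg = {"test_dataloader": {"dataset": {"ann_file": "val/annfiles"}}}
apply_cfg_options(cfg, {
    "test_dataloader.dataset.ann_file": "",      # clear the annotation file
    "test_evaluator.format_only": True,          # only format submission files
})
print(cfg["test_dataloader"]["dataset"]["ann_file"])  # → empty string
```

This is why the test command above can switch the evaluator into submission mode without editing any config file.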

Installation

MMRotate depends on PyTorch, MMCV, and MMDetection. Below are quick installation steps. Please refer to the Install Guide for more detailed instructions.

```shell
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch
pip install -U openmim
mim install mmcv-full
mim install mmdet
git clone https://github.com/Qian-CV/PG-DRFNet.git
cd PG-DRFNet
pip install -v -e .
```

Get Started

Please see here for the basic usage of MMRotate. We also provide some tutorials.

Acknowledgments

The code is developed based on the following repositories. We appreciate their nice implementations.

| Method | Repository |
| :--- | :--- |
| RTMDet | https://github.com/open-mmlab/mmdetection |
| RTMDet-R | https://github.com/open-mmlab/mmrotate |
| ECANet | https://github.com/BangguWu/ECANet |
| QFocal | https://github.com/implus/GFocal |

Cite this repository

If you use this software in your work, please cite it using the following metadata: Liuqian Wang, Jing Zhang, et al. (2024). PG-DRFNet by BJUT-AI&VBD [Computer software]. https://github.com/BJUT-AIVBD/PG-DRFNet
