Position Guided Dynamic Receptive Field Network: A Small Object Detection Friendly to Optical and SAR Images
This repository is the official implementation of "Position Guided Dynamic Receptive Field Network: A Small Object Detection Friendly to Optical and SAR Images" at [Please stay tuned!]
The master branch is built on MMRotate, which works with PyTorch 1.8+.
PG-DRFNet's train/test configuration files are placed under configs/PG-DRFNet/.
Instructions for using the dynamic perception of PG-DRFNet can be found here.
- CSPNeXt-m: pre-trained checkpoint provided by OpenMMLab (link).
- ResNet: pre-trained ResNet-50 provided by PyTorch.
Model | mAP | Angle | lr schd | Batch Size | Configs | Download |
---|---|---|---|---|---|---|
RTMDet-M | 57.71 | le90 | 6x | 8 | - | model |
PG-DRFNet | 59.01 | le90 | 6x | 8 | pg_drfnet-6x-dota2 | model \| log |
*NOTE: Since DOTA-v2.0 requires online evaluation, the mAP shown in the log is only a reference value, obtained by validating on a randomly sampled 20% of the training set. The final model was submitted to the DOTA evaluation server for official testing.
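The reference-validation protocol in the note above (hold out a random 20% of the training images) can be sketched in plain Python. This is an illustration only; the image IDs and function name are hypothetical, not taken from the repository.

```python
import random

def split_train_val(image_ids, val_fraction=0.2, seed=0):
    """Randomly hold out a fraction of the training images for
    offline reference validation."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    ids = list(image_ids)
    rng.shuffle(ids)
    n_val = int(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]  # (train ids, held-out val ids)

# Example with 100 dummy patch IDs:
train_ids, val_ids = split_train_val([f"P{i:04d}" for i in range(100)])
print(len(train_ids), len(val_ids))  # 80 20
```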
Model | mAP | Angle | lr schd | Batch Size | Configs | Download |
---|---|---|---|---|---|---|
PG-DRFNet | 84.06 | le90 | 12x | 4 | pg_drfnet-12x-vedai | model \| log |
For example, to train PG-DRFNet on DOTA-v2.0, run:
```shell
python tools/train.py \
    configs/PG-DRFNet/pg_drfnet-6x-dota2.py \
    --work-dir work_dirs/PG-DRFNet \
    --cfg-options load_from=path/to/pre-trained/model
```
If you want to submit the DOTA-v2.0 results for online evaluation, run:
```shell
python tools/test.py \
    configs/PG-DRFNet/pg_drfnet-6x-dota2.py \
    path/to/PG-DRFNet/model.pth \
    --cfg-options test_dataloader.dataset.ann_file='' \
        test_dataloader.dataset.data_prefix.img_path=test/images/ \
        test_evaluator.format_only=True \
        test_evaluator.merge_patches=True \
        test_evaluator.outfile_prefix='path/to/save_dir'
```
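Each `--cfg-options` entry is a dot-separated key that overrides a nested field of the config. The merging logic can be sketched in plain Python; this is a simplified illustration, not the actual MMEngine parser (which also handles type casting, lists, and validation).

```python
def apply_cfg_options(cfg, options):
    """Merge dot-separated key=value overrides (as passed via
    --cfg-options) into a nested config dict."""
    for dotted_key, value in options.items():
        keys = dotted_key.split(".")
        node = cfg
        for k in keys[:-1]:
            node = node.setdefault(k, {})  # descend, creating dicts as needed
        node[keys[-1]] = value  # set the leaf value
    return cfg

cfg = {"test_evaluator": {"format_only": False}}
apply_cfg_options(cfg, {
    "test_evaluator.format_only": True,
    "test_dataloader.dataset.ann_file": "",
})
print(cfg["test_evaluator"]["format_only"])  # True
```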
Detailed hyperparameter configs can be found in configs/_base_/ and configs/PG-DRFNet/.
MMRotate depends on PyTorch, MMCV and MMDetection. Below are quick steps for installation. Please refer to the Install Guide for more detailed instructions.
```shell
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch
pip install -U openmim
mim install mmcv-full
mim install mmdet
git clone https://github.com/Qian-CV/PG-DRFNet.git
cd PG-DRFNet
pip install -v -e .
```
Please see here for the basic usage of MMRotate. We also provide some tutorials.
The code is developed based on the following repositories. We appreciate their nice implementations.
Method | Repository |
---|---|
RTMDet | https://github.com/open-mmlab/mmdetection |
RTMDet-R | https://github.com/open-mmlab/mmrotate |
ECANet | https://github.com/BangguWu/ECANet |
QFocal | https://github.com/implus/GFocal |
If you use this software in your work, please cite it using the following metadata: Liuqian Wang, Jing Zhang, et al. (2024). PG-DRFNet by BJUT-AI&VBD [Computer software]. https://github.com/BJUT-AIVBD/PG-DRFNet