
CFP for Object Detection

This repository contains the official PyTorch implementation of the following paper:

Centralized Feature Pyramid for Object Detection

Yu Quan, Dong Zhang, Liyan Zhang and Jinhui Tang
Computer Science and Engineering, Nanjing University of Science and Technology
https://arxiv.org/abs/2210.02093

Abstract

The visual feature pyramid has shown its superiority in both effectiveness and efficiency across a wide range of applications. However, existing methods concentrate excessively on inter-layer feature interactions while ignoring intra-layer feature regulation, which has been empirically shown to be beneficial. Although some methods try to learn a compact intra-layer feature representation with the help of the attention mechanism or the vision transformer, they overlook the corner regions that are important for dense prediction tasks. To address this problem, we propose a Centralized Feature Pyramid (CFP) for object detection, which is based on a globally explicit centralized feature regulation. Specifically, we first propose a spatially explicit visual center scheme, in which a lightweight MLP captures globally long-range dependencies and a parallel learnable visual center mechanism captures the local corner regions of the input images. Based on this, we then propose a globally centralized regulation for the commonly used feature pyramid in a top-down fashion, where the explicit visual center information obtained from the deepest intra-layer feature is used to regulate the shallower front-end features. Compared to existing feature pyramids, CFP not only captures global long-range dependencies but also efficiently obtains an all-round yet discriminative feature representation. Experimental results on the challenging MS-COCO benchmark validate that the proposed CFP achieves consistent performance gains over the state-of-the-art YOLOv5 and YOLOX object detection baselines.
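To make the two parallel branches described above more concrete, here is a minimal PyTorch sketch, assuming a standard token-MLP formulation for the long-range branch and a codebook-style formulation for the visual-center branch. The class names (LightweightMLP, LearnableVisualCenter, ExplicitVisualCenterSketch), layer choices, and hyper-parameters are illustrative assumptions and do not correspond to the modules in this repository.

import torch
import torch.nn as nn

class LightweightMLP(nn.Module):
    """Sketch of the MLP branch: residual channel-mixing over flattened spatial tokens."""
    def __init__(self, dim, hidden_ratio=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * hidden_ratio),
            nn.GELU(),
            nn.Linear(dim * hidden_ratio, dim),
        )

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
        tokens = tokens + self.mlp(self.norm(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class LearnableVisualCenter(nn.Module):
    """Sketch of the visual-center branch: a learnable codebook softly assigns every
    pixel to K centers, and the aggregated code gates the local features."""
    def __init__(self, dim, num_centers=64):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, dim) * 0.02)
        self.scale = nn.Parameter(torch.ones(num_centers))
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        feat = x.flatten(2).transpose(1, 2)        # (B, N, C) with N = H*W
        diff = feat.unsqueeze(2) - self.centers    # (B, N, K, C)
        assign = torch.softmax(-self.scale * diff.pow(2).sum(-1), dim=2)   # (B, N, K)
        encoded = (assign.unsqueeze(-1) * diff).sum(dim=1)                 # (B, K, C)
        gate = torch.sigmoid(encoded.mean(dim=1))  # (B, C) global channel gate
        return self.proj(x) * gate.view(b, c, 1, 1)

class ExplicitVisualCenterSketch(nn.Module):
    """Run both branches in parallel and fuse them with a 1x1 convolution."""
    def __init__(self, dim):
        super().__init__()
        self.mlp_branch = LightweightMLP(dim)
        self.lvc_branch = LearnableVisualCenter(dim)
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.mlp_branch(x), self.lvc_branch(x)], dim=1))

if __name__ == "__main__":
    x = torch.randn(1, 256, 20, 20)                  # e.g. the deepest pyramid level
    print(ExplicitVisualCenterSketch(256)(x).shape)  # torch.Size([1, 256, 20, 20])

In the full model, the regulated output of this block at the deepest level would then be propagated top-down to regulate the shallower pyramid levels, as the abstract describes.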

The overall architecture


Qualitative results


Quantitative results and training weights

We provide trained weights of CFP with YOLOX and YOLOv5 as baselines.

Model            Input size   mAP (%)   Weights
CFP-s (YOLOX)    640          41.10     weight
CFP-m (YOLOX)    640          46.40     weight
CFP-l (YOLOX)    640          49.40     weight
CFP-s (YOLOv5)   640          36.00     weight
CFP-m (YOLOv5)   640          43.20     weight
CFP-l (YOLOv5)   640          46.60     weight

Installation

- Install CFP-main from source

git clone git@github.com:QY1994-0919/CFP-main.git
cd CFP-main    
pip3 install -v -e .  # or  python3 setup.py develop   

- Prepare COCO dataset

cd CFP-main   
ln -s /path/to/your/COCO ./datasets/COCO   
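
The expected directory structure is not spelled out here; the standard COCO 2017 layout used by YOLOX-style dataloaders (an assumption based on the acknowledged baseline, so adjust the symlink target if your copy differs) looks like:

datasets/COCO/annotations/instances_train2017.json
datasets/COCO/annotations/instances_val2017.json
datasets/COCO/train2017/   # training images
datasets/COCO/val2017/     # validation images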

Usage

- To train the model, please run:

python -m cfp.tools.train -f cfp-s -d 2 -b 16 --fp16 -o [--cache]
python -m cfp.tools.train -f cfp-m -d 2 -b 16 --fp16 -o [--cache]
python -m cfp.tools.train -f cfp-l -d 2 -b 16 --fp16 -o [--cache]
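
These flags mirror the YOLOX training interface that CFP builds on (an assumption based on the acknowledged baseline): -f selects the experiment config, -d the number of GPUs, -b the total batch size, --fp16 enables mixed-precision training, -o pre-occupies GPU memory, and --cache caches images for faster loading. A smaller single-GPU run would then look like:

python -m cfp.tools.train -f cfp-s -d 1 -b 8 --fp16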

- To test the model, please run:

python -m cfp.tools.eval -n cfp-s -c cfp_s.pth -b 16 -d 2 --conf 0.001 [--fp16] [--fuse]
python -m cfp.tools.eval -n cfp-m -c cfp_m.pth -b 16 -d 2 --conf 0.001 [--fp16] [--fuse]
python -m cfp.tools.eval -n cfp-l -c cfp_l.pth -b 16 -d 2 --conf 0.001 [--fp16] [--fuse]
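
If you just want to sanity-check a downloaded checkpoint before running evaluation, plain PyTorch is enough; the file name below matches the command above, and the printed keys are simply whatever the checkpoint actually stores:

import torch
ckpt = torch.load("cfp_s.pth", map_location="cpu")   # load the checkpoint on CPU
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))                          # top-level keys vary by checkpoint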

Acknowledgement

Thanks to the YOLOv5 and YOLOX teams for their wonderful open-source projects!

Bibtex

If you find this work useful for your research, please cite our paper:

@article{quan2022centralized,
  title={Centralized Feature Pyramid for Object Detection},
  author={Quan, Yu and Zhang, Dong and Zhang, Liyan and Tang, Jinhui},
  journal={arXiv preprint arXiv:2210.02093},
  year={2022}
}
