
ECA-Net: Efficient Channel Attention

ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks

This is an implementation of ECA-Net (CVPR 2020 paper), created by Banggu Wu.

Poster

Introduction

Recently, channel attention mechanisms have demonstrated great potential for improving the performance of deep convolutional neural networks (CNNs). However, most existing methods are dedicated to developing more sophisticated attention modules for better performance, which inevitably increases model complexity. To overcome this trade-off between performance and complexity, this paper proposes an Efficient Channel Attention (ECA) module, which involves only a handful of parameters while bringing a clear performance gain. By dissecting the channel attention module in SENet, we empirically show that avoiding dimensionality reduction is important for learning channel attention, and that appropriate cross-channel interaction can preserve performance while significantly decreasing model complexity. Therefore, we propose a local cross-channel interaction strategy without dimensionality reduction, which can be efficiently implemented via 1D convolution. Furthermore, we develop a method to adaptively select the kernel size of the 1D convolution, which determines the coverage of local cross-channel interaction. The proposed ECA module is efficient yet effective; e.g., against a ResNet50 backbone, our module adds 80 parameters vs. 24.37M and 4.7e-4 GFLOPs vs. 3.86 GFLOPs, while boosting Top-1 accuracy by more than 2%. We extensively evaluate our ECA module on image classification, object detection and instance segmentation with ResNet and MobileNetV2 backbones. The experimental results show our module is more efficient while performing favorably against its counterparts.

Citation

@InProceedings{wang2020eca,
   title     = {ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks},
   author    = {Qilong Wang and Banggu Wu and Pengfei Zhu and Peihua Li and Wangmeng Zuo and Qinghua Hu},
   booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
   year      = {2020}
}

Changelog

2020/02/26 Upload ECA-ResNet34 model.

2020/03/05 Upload RetinaNet-ecanet50 and RetinaNet-ecanet101 models.

2020/03/24 Update the Introduction and Citation.

2020/03/30 Upload ECA-ResNet18 model.

2020/05/06 Update the poster.

ECA module

(Figure: SE block vs. ECA module)

Comparison of (a) the SE block and (b) our efficient channel attention (ECA) module. Given the feature aggregated by global average pooling (GAP), the SE block computes channel weights using two FC layers. In contrast, ECA generates channel weights by performing a fast 1D convolution of size k, where k is adaptively determined via a function of the channel dimension C.
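
To make the mechanism concrete, below is a minimal PyTorch sketch of an ECA layer. It assumes the paper's adaptive kernel-size rule ψ(C) with γ = 2 and b = 1 (the kernel size grows roughly with log2 of the channel count, rounded up to an odd number); the class name ECALayer and the demo shapes are ours, and the sketch is an illustration rather than the repository's exact code.

```python
import math

import torch
import torch.nn as nn


class ECALayer(nn.Module):
    """Efficient Channel Attention: channel weights from a fast 1D convolution."""

    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Adaptive kernel size k = psi(C): an odd number derived from log2(C).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Aggregate spatially with GAP: (N, C, H, W) -> (N, C, 1, 1).
        y = self.avg_pool(x)
        # 1D convolution across the channel axis: (N, C, 1, 1) -> (N, 1, C) -> conv.
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # Restore shape to (N, C, 1, 1) so the weights broadcast over H and W.
        y = y.transpose(-1, -2).unsqueeze(-1)
        # Rescale the input channel-wise.
        return x * self.sigmoid(y)


if __name__ == "__main__":
    layer = ECALayer(channels=256)            # psi(256) -> k = 5
    out = layer(torch.randn(2, 256, 32, 32))
    print(out.shape)                          # torch.Size([2, 256, 32, 32])
```

In the repository, per-stage kernel sizes can instead be fixed via the --ksize argument (e.g. 3557), which is why the released checkpoints carry names like eca_resnet50_k3557.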

Installation

Requirements

  • Python 3.5+
  • PyTorch 1.0+
  • thop

Our environments

  • OS: Ubuntu 16.04
  • CUDA: 9.0/10.0
  • Toolkit: PyTorch 1.0/1.1
  • GPU: RTX 2080 Ti / Titan Xp

Start Up

Train with ResNet

You can run main.py to train or evaluate as follows:

CUDA_VISIBLE_DEVICES={device_ids} python main.py -a {model_name} --ksize {eca_kernel_size} {path to your dataset}

For example:

CUDA_VISIBLE_DEVICES=0,1,2,3 python main.py -a eca_resnet50 --ksize 3557 ./datasets/ILSVRC2012/images

The digits of --ksize appear to give the ECA kernel size for each of the four ResNet stages (3557 → 3, 5, 5, 7), matching the checkpoint name eca_resnet50_k3557.

Train with MobileNet_v2

Training MobileNetV2 works the same way as ResNet above; just replace main.py with light_main.py, as in the example below.
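
For example (the model name eca_mobilenet_v2 passed to -a is an assumption inferred from the checkpoint name eca_mobilenetv2_k13 in the table further down):

CUDA_VISIBLE_DEVICES=0,1,2,3 python light_main.py -a eca_mobilenet_v2 --ksize 13 ./datasets/ILSVRC2012/images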

Compute the parameters and FLOPs

If you have installed thop, you can run paras_flops.py to compute the parameters and FLOPs of our models. The usage is:

python paras_flops.py -a {model_name}
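
For reference, a minimal sketch of what such a script computes with thop's profile() is shown below. Here torchvision's resnet50 and the 224×224 input stand in for the repository's ECA models, and note that thop actually counts multiply-accumulate operations (MACs), which this README reports as FLOPs.

```python
import torch
from thop import profile
from torchvision.models import resnet50

# Stand-in model; the repository's script builds its ECA models by name instead.
model = resnet50()
dummy = torch.randn(1, 3, 224, 224)

# thop returns (MACs, params); MACs are commonly reported as FLOPs.
macs, params = profile(model, inputs=(dummy,))
print(f"FLOPs (MACs): {macs / 1e9:.2f}G  Params: {params / 1e6:.2f}M")
```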

Experiments

ImageNet

| Model | Param. | FLOPs | Top-1 (%) | Top-5 (%) | BaiduDrive (models) | Extract code | GoogleDrive |
|---|---|---|---|---|---|---|---|
| ECA-Net18 | 11.15M | 1.70G | 70.92 | 89.93 | eca_resnet18_k3577 | utsy | eca_resnet18_k3577 |
| ECA-Net34 | 20.79M | 3.43G | 74.21 | 91.83 | eca_resnet34_k3357 | o4dh | eca_resnet34_k3357 |
| ECA-Net50 | 24.37M | 3.86G | 77.42 | 93.62 | eca_resnet50_k3557 | no6u | eca_resnet50_k3557 |
| ECA-Net101 | 42.49M | 7.35G | 78.65 | 94.34 | eca_resnet101_k3357 | iov1 | eca_resnet101_k3357 |
| ECA-Net152 | 57.41M | 10.83G | 78.92 | 94.55 | eca_resnet152_k3357 | xaft | eca_resnet152_k3357 |
| ECA-MobileNet_v2 | 3.34M | 319.9M | 72.56 | 90.81 | eca_mobilenetv2_k13 | atpt | eca_mobilenetv2_k13 |

COCO 2017

Object detection with Faster R-CNN, Mask R-CNN and RetinaNet

| Model | Param. | FLOPs | AP | AP_50 | AP_75 | Pre-trained models | Extract code | GoogleDrive |
|---|---|---|---|---|---|---|---|---|
| Faster_R-CNN_ecanet50 | 41.53M | 207.18G | 38.0 | 60.6 | 40.9 | faster_rcnn_ecanet50_k5_bs8_lr0.01 | divf | faster_rcnn_ecanet50_k5_bs8_lr0.01 |
| Faster_R-CNN_ecanet101 | 60.52M | 283.32G | 40.3 | 62.9 | 44.0 | faster_rcnn_ecanet101_3357_bs8_lr0.01 | d3kd | faster_rcnn_ecanet101_3357_bs8_lr0.01 |
| Mask_R-CNN_ecanet50 | 44.18M | 275.69G | 39.0 | 61.3 | 42.1 | mask_rcnn_ecanet50_k3377_bs8_lr0.01 | xe19 | mask_rcnn_ecanet50_k3377_bs8_lr0.01 |
| Mask_R-CNN_ecanet101 | 63.17M | 351.83G | 41.3 | 63.1 | 44.8 | mask_rcnn_ecanet101_k3357_bs8_lr0.01 | y5e9 | mask_rcnn_ecanet101_k3357_bs8_lr0.01 |
| RetinaNet_ecanet50 | 37.74M | 239.43G | 37.3 | 57.7 | 39.6 | RetinaNet_ecanet50_k3377_bs8_lr0.01 | my44 | RetinaNet_ecanet50_k3377_bs8_lr0.01 |
| RetinaNet_ecanet101 | 56.74M | 315.57G | 39.1 | 59.9 | 41.8 | RetinaNet_ecanet101_k3357_bs8_lr0.01 | 2eu5 | RetinaNet_ecanet101_k3357_bs8_lr0.01 |

Instance segmentation with Mask R-CNN

| Model | Param. | FLOPs | AP | AP_50 | AP_75 | Pre-trained models | Extract code | GoogleDrive |
|---|---|---|---|---|---|---|---|---|
| Mask_R-CNN_ecanet50 | 44.18M | 275.69G | 35.6 | 58.1 | 37.7 | mask_rcnn_ecanet50_k3377_bs8_lr0.01 | xe19 | mask_rcnn_ecanet50_k3377_bs8_lr0.01 |
| Mask_R-CNN_ecanet101 | 63.17M | 351.83G | 37.4 | 59.9 | 39.8 | mask_rcnn_ecanet101_k3357_bs8_lr0.01 | y5e9 | mask_rcnn_ecanet101_k3357_bs8_lr0.01 |

Contact Information

If you have any suggestions or questions, you can leave a message here or contact us directly at [email protected]. Thanks for your attention!
