by Jun Wei, Qin Wang, Zhen Li, Sheng Wang, S. Kevin Zhou, Shuguang Cui
Weakly supervised object localization (WSOL) aims to localize objects using only image-level labels. Class activation maps (CAMs) are the features most commonly used to achieve WSOL. However, previous CAM-based methods did not take full advantage of shallow features, despite their importance for WSOL, because shallow features are easily buried in background noise by conventional fusion. In this paper, we propose a simple but effective Shallow feature-aware Pseudo supervised Object Localization (SPOL) model for accurate WSOL, which makes the most of the low-level features embedded in shallow layers. In practice, our SPOL model first generates CAMs through a novel element-wise multiplication of shallow and deep feature maps, which filters out background noise and yields sharper boundaries. In addition, we propose a general class-agnostic segmentation model to obtain an accurate object mask, using only the initial CAMs as pseudo labels without any extra annotation. Finally, a bounding box extractor is applied to the object mask to locate the target. Experiments verify that our SPOL outperforms the state of the art on both the CUB-200 and ImageNet-1K benchmarks, achieving 93.44% and 67.15% Top-5 localization accuracy (i.e., 3.93% and 2.13% improvement), respectively.
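To illustrate the core idea, the sketch below shows how shallow and deep feature maps could be fused by element-wise multiplication before producing CAMs. This is only a minimal PyTorch sketch under assumed channel sizes and module names (`shallow_proj`, `classifier`), not the exact SPOL implementation in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedCAM(nn.Module):
    """Illustrative sketch: fuse shallow and deep features by element-wise
    multiplication, then produce class activation maps. Channel sizes and
    module names are assumptions, not the repository's exact code."""
    def __init__(self, shallow_ch=256, deep_ch=2048, num_classes=200):
        super().__init__()
        # project shallow features to the deep channel dimension
        self.shallow_proj = nn.Conv2d(shallow_ch, deep_ch, kernel_size=1)
        # 1x1 conv acts as the per-class classifier over the fused features
        self.classifier = nn.Conv2d(deep_ch, num_classes, kernel_size=1)

    def forward(self, shallow_feat, deep_feat):
        # upsample deep features to the shallow-feature resolution
        deep_up = F.interpolate(deep_feat, size=shallow_feat.shape[-2:],
                                mode='bilinear', align_corners=False)
        # element-wise multiplication suppresses background noise in shallow features
        fused = self.shallow_proj(shallow_feat) * deep_up
        cams = self.classifier(fused)       # B x num_classes x H x W
        logits = cams.mean(dim=(2, 3))      # global average pooling for the image-level loss
        return logits, cams
```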
git clone https://github.com/weijun88/SPOL.git
cd SPOL/
SPOL
├─cub200-cam
│ └─out
├─cub200-cls
│ └─out
├─cub200-seg
│ └─out
├─dataset
│ ├─CUB_200_2011
│ │ ├─image
│ │ │ ├─001.Black_footed_Albatross
│ │ │ └─002.Laysan_Albatross
│ │ └─mask
│ │ ├─001.Black_footed_Albatross
│ │ └─002.Laysan_Albatross
│ └─ImageNet2012
│ ├─box
│ ├─train
│ │ ├─n01440764
│ │ └─n01443537
│ └─val
├─imagenet-cam
│ └─out
├─imagenet-cls
├─imagenet-seg
│ └─out
├─network
├─resource
└─utils
- Download the following datasets and unzip them into the `dataset` folder
- If you want to evaluate the performance of SPOL, please download our trained models
  - CUB-200-2011: Baidu: tjtn | Google , put it into the `cub200-seg/out` folder
  - ImageNet-1K: Baidu: poz8 | Google , put it into the `imagenet-seg/out` folder
- If you want to train your own model, please download the pretrained model into the `resource` folder
cd cub200-cls
python3 train.py # train the classification model
python3 test.py     # save the classification results to top1top5.npy
cd ../cub200-cam
python3 train.py # train the CAM model
python3 test.py     # save the generated pseudo masks
cd ../cub200-seg
python3 train.py # train the segmentation model
python3 test.py # evaluate the localization accuracy
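For reference, the final localization step turns the predicted class-agnostic mask into a bounding box. The sketch below approximates this with OpenCV; the threshold value and the `mask_to_bbox` helper are illustrative assumptions, not the repository's exact code.

```python
import cv2
import numpy as np

def mask_to_bbox(mask, thresh=0.5):
    """Binarize a predicted mask and return the tightest box (x1, y1, x2, y2)
    around the largest connected foreground region. Threshold is an assumption."""
    binary = (mask >= thresh).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return (x, y, x + w, y + h)
```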
cd imagenet-cls
python3 test.py     # save the classification results to top1top5.npy
cd ../imagenet-cam
python3 train.py # train the CAM model
python3 test.py     # save the generated pseudo masks
cd ../imagenet-seg
python3 train.py # train the segmentation model
python3 test.py # evaluate the localization accuracy
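For evaluation, a localization is usually counted as correct when the predicted box overlaps the ground-truth box with IoU ≥ 0.5 and the classification (Top-1 or Top-5) is correct. The snippet below sketches this metric; the assumed layout of `top1top5.npy` (per-image Top-1/Top-5 correctness flags) is a guess, not the repository's actual format.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def loc_accuracy(pred_boxes, gt_boxes, cls_correct, iou_thresh=0.5):
    """Fraction of images with IoU >= iou_thresh and a correct classification."""
    hits = [box_iou(p, g) >= iou_thresh and bool(c)
            for p, g, c in zip(pred_boxes, gt_boxes, cls_correct)]
    return float(np.mean(hits))

# e.g. cls_correct = np.load('top1top5.npy')[:, 1]  # assumed Top-5 correctness column
```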
- If you only want to evaluate the model performance, please download our trained models, as mentioned above, and run the following commands.
cd cub200-seg
python3 test.py     # evaluate the model performance on CUB-200-2011
cd imagenet-seg
python3 test.py     # evaluate the model performance on ImageNet-1K
- If you find this work helpful, please cite our paper
@InProceedings{Wei_2021_CVPR,
author = {Wei, Jun and Wang, Qin and Li, Zhen and Wang, Sheng and Zhou, S. Kevin and Cui, Shuguang},
title = {Shallow Feature Matters for Weakly Supervised Object Localization},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {5993-6001}
}