
# A2S-USOD

Our new A2S-v2 framework has been accepted to CVPR 2023!

In this work, we propose a new framework for the Unsupervised Salient Object Detection (USOD) task.
Details are presented in our paper: "Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection".
Contact: [email protected].

## Update 2022/06/05

- Code is available now!
- Our code is based on our SOD benchmark.
- Pretrained backbone: MoCo-v2 (a loading sketch follows this list).
- Our trained weights: Stage1-moco, Stage1-sup, Stage2-moco, Stage2-sup.
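The official MoCo-v2 checkpoint stores the backbone weights under `module.encoder_q.*` keys. Below is a minimal loading sketch, assuming a standard torchvision ResNet-50 and the official MoCo-v2 checkpoint file name; the repo's own loading code may differ:

```python
# Sketch only: load MoCo-v2 pretrained weights into a ResNet-50 backbone.
# The checkpoint name and key handling follow the official MoCo release;
# they are assumptions about this repo, not taken from its code.
import torch
from torchvision.models import resnet50

backbone = resnet50()
ckpt = torch.load('moco_v2_800ep_pretrain.pth.tar', map_location='cpu')

# Keep only the query-encoder weights and drop the MLP projection head.
state = {k.replace('module.encoder_q.', ''): v
         for k, v in ckpt['state_dict'].items()
         if k.startswith('module.encoder_q.')
         and not k.startswith('module.encoder_q.fc')}

# strict=False because the classifier fc.* stays randomly initialized.
backbone.load_state_dict(state, strict=False)
```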

We provide the saliency maps generated by our method on Google Drive: Pseudo labels (Stage 1) and Saliency maps (Stage 2); alternatively, download them from Baidu Disk [code: g6xb].
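The downloaded maps are ordinary grayscale images, so they can be sanity-checked against ground truth with MAE (mean absolute error), a standard USOD metric. A minimal sketch with placeholder file paths; `eval.py` in the Usage section below is the authoritative evaluation:

```python
# Sketch only: compute MAE between a saliency map and its ground truth.
# Both paths are placeholders; the images must share the same resolution.
import numpy as np
from PIL import Image

def mae(pred_path, gt_path):
    pred = np.asarray(Image.open(pred_path).convert('L'), dtype=np.float32) / 255.0
    gt = np.asarray(Image.open(gt_path).convert('L'), dtype=np.float32) / 255.0
    return float(np.abs(pred - gt).mean())

print(mae('maps/ECSSD/0001.png', 'data/ECSSD/gt/0001.png'))
```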

## Usage

```bash
# Stage 1
python3 train_stage1.py mnet --gpus=0
python3 test.py mnet --weight=path_to_weight --gpus=0 --crf --save
# Copy the generated pseudo labels to the dataset folder

# Stage 2
python3 train_stage2.py cornet --gpus=0
python3 test.py cornet --weight=path_to_weight --gpus=0 [--save] [--crf]

# Evaluate the generated maps
python3 eval.py --pre_path=path_to_maps
```
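The `--crf` flag refines the predicted maps with a dense CRF. Below is a minimal sketch of what such refinement typically looks like, using `pydensecrf`; the kernel parameters are common defaults and are not necessarily what `test.py` uses:

```python
# Sketch only: dense-CRF refinement of a saliency map over the RGB image.
# Parameter values are common defaults, not the repo's exact settings.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, saliency, iters=5):
    """image: uint8 (H, W, 3); saliency: float (H, W) in [0, 1]."""
    h, w = saliency.shape
    # Treat the map as foreground probability and stack a 2-class softmax.
    probs = np.stack([1.0 - saliency, saliency]).astype(np.float32)
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(probs))
    d.addPairwiseGaussian(sxy=3, compat=3)                 # smoothness kernel
    d.addPairwiseBilateral(sxy=60, srgb=5,                 # appearance kernel
                           rgbim=np.ascontiguousarray(image), compat=5)
    q = np.array(d.inference(iters)).reshape(2, h, w)
    return q[1]  # refined foreground probability
```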

## Results

*(Result figure)*

## Thanks for citing our work

```bibtex
@ARTICLE{zhou2023a2s1,
  title={Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection},
  author={Zhou, Huajun and Chen, Peijia and Yang, Lingxiao and Xie, Xiaohua and Lai, Jianhuang},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2023},
  volume={33},
  number={2},
  pages={743-755},
  doi={10.1109/TCSVT.2022.3203595}
}
```