Robust Pseudo-label Learning with Neighbor Relation for Unsupervised Visible-Infrared Person Re-Identification
Xiangbo Yin* · Jiangming Shi* · Yachao Zhang · Yang Lu · Zhizhong Zhang · Yuan Xie† · Yanyun Qu†
This is the official code implementation of "Robust Pseudo-label Learning with Neighbor Relation for Unsupervised Visible-Infrared Person Re-Identification", which has been accepted by ACM MM 2024.
- python 3.8.13
- torch 1.8.0
- torchvision 0.9.0
- scikit-learn 1.2.2
- POT 0.9.3
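The dependencies above can be installed with pip; this is a sketch assuming the listed versions (the CUDA build of torch 1.8.0 may need to be installed from the official PyTorch wheel index instead):

```shell
# Install pinned dependencies (CPU build of torch shown; pick the matching
# CUDA wheel from pytorch.org for GPU training)
pip install torch==1.8.0 torchvision==0.9.0 scikit-learn==1.2.2 POT==0.9.3
```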
python prepare_sysu.py # for SYSU-MM01
python prepare_regdb.py # for RegDB
You need to change the dataset path to your own path in `prepare_sysu.py` and `prepare_regdb.py`.
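Concretely, the dataset root is usually hard-coded near the top of the prepare script; the variable name and path below are assumptions for illustration, so check the actual script:

```python
# Hypothetical snippet from prepare_sysu.py: the variable name `data_path`
# is an assumption, not verified against the repository.
data_path = '/path/to/SYSU-MM01/'  # replace with your local dataset root
```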
sh run_train_sysu.sh # for SYSU-MM01
sh run_train_regdb.sh # for RegDB
You need to download the baseline model for the SYSU-MM01 dataset and put it in `baseline/sysu_s1/`.
sh test_sysu.sh # for SYSU-MM01
sh test_regdb.sh # for RegDB
If our work is helpful for your research, please consider citing:
@article{yin2024robust,
title={Robust Pseudo-label Learning with Neighbor Relation for Unsupervised Visible-Infrared Person Re-Identification},
author={Yin, Xiangbo and Shi, Jiangming and Zhang, Yachao and Lu, Yang and Zhang, Zhizhong and Xie, Yuan and Qu, Yanyun},
journal={arXiv preprint arXiv:2405.05613},
year={2024}
}
@article{shi2024multi,
title={Multi-Memory Matching for Unsupervised Visible-Infrared Person Re-Identification},
author={Shi, Jiangming and Yin, Xiangbo and Chen, Yeyun and Zhang, Yachao and Zhang, Zhizhong and Xie, Yuan and Qu, Yanyun},
journal={arXiv preprint arXiv:2401.06825},
year={2024}
}
@inproceedings{shi2023dpis,
title={Dual pseudo-labels interactive self-training for semi-supervised visible-infrared person re-identification},
author={Shi, Jiangming and Zhang, Yachao and Yin, Xiangbo and Xie, Yuan and Zhang, Zhizhong and Fan, Jianping and Shi, Zhongchao and Qu, Yanyun},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={11218--11228},
year={2023}
}
Contact: [email protected]; [email protected].
This work is developed based on ADCA (ACM MM 2022), PGM (CVPR 2023), and OTLA (ECCV 2022). We sincerely thank all developers for their high-quality work.