This repository contains code demonstrating the Co-learn++ method in our IJCV 2024 paper *Source-Free Domain Adaptation Guided by Vision and Vision-Language Pre-Training*. This is an extension of the Co-learn method in our ICCV 2023 paper *Rethinking the Role of Pre-Trained Networks in Source-Free Domain Adaptation*.
We used the NVIDIA container image for PyTorch, release 22.01, to run experiments. Install additional libraries with `pip install -r requirements.txt`.
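For reference, a minimal setup might look like the following sketch. The container tag `nvcr.io/nvidia/pytorch:22.01-py3` is inferred from the stated release, and the mount path is an arbitrary example; adjust both to your environment:

```bash
# Start the NGC PyTorch 22.01 container with GPU access,
# mounting this repository into the container workspace.
docker run --gpus all -it --rm -v "$(pwd)":/workspace/colearn \
    nvcr.io/nvidia/pytorch:22.01-py3

# Inside the container: install the additional dependencies.
cd /workspace/colearn
pip install -r requirements.txt
```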
- Please manually download the Office, Office-Home, VisDA-C and DomainNet datasets from their official websites, and update the image paths in each `.txt` file under `./code/data/` (see the sketch after this list). Scripts to generate the `.txt` files are in the respective data folders.
- Training scripts are in `./code/uda/scripts`. Run `eval_target_zeroshot.sh` for zero-shot CLIP and `train_target_two_branch.sh` for co-learning with the CLIP encoder, as in the sketch after this list.
- Results consolidation scripts are in `./code/uda/consolidation_scripts`.
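A rough sketch of the workflow is below. The dataset subfolder name, dataset root paths, and image-list format are assumptions for illustration; check the list-generation scripts and each shell script's header for the authoritative details:

```bash
# 1. Point the image lists at your local dataset root.
#    SHOT-style lists typically pair an image path with an integer class
#    label per line, e.g. "Art/Alarm_Clock/00001.jpg 0" (format assumed here;
#    "office-home" is a hypothetical subfolder name).
sed -i 's|/old/dataset/root|/your/dataset/root|g' ./code/data/office-home/*.txt

# 2. Evaluate zero-shot CLIP on the target domain.
cd ./code/uda/scripts
bash eval_target_zeroshot.sh

# 3. Co-learn with the CLIP encoder as the pre-trained second branch.
bash train_target_two_branch.sh
```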
@article{zhang2024colearnplus,
  author  = {Zhang, Wenyu and Shen, Li and Foo, Chuan-Sheng},
  title   = {Source-Free Domain Adaptation Guided by Vision and Vision-Language Pre-training},
  journal = {International Journal of Computer Vision},
  year    = {2024},
  month   = {aug},
  pages   = {1-23},
  doi     = {10.1007/s11263-024-02215-3}
}
@inproceedings{zhang2023colearn,
  author    = {Zhang, Wenyu and Shen, Li and Foo, Chuan-Sheng},
  title     = {Rethinking the Role of Pre-Trained Networks in Source-Free Domain Adaptation},
  booktitle = {2023 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023},
  month     = {oct},
  pages     = {18795-18805},
  publisher = {IEEE Computer Society},
  address   = {Los Alamitos, CA, USA},
  doi       = {10.1109/ICCV51070.2023.01727},
  url       = {https://doi.ieeecomputersociety.org/10.1109/ICCV51070.2023.01727}
}
Our implementation is based on SHOT++; we thank the authors for making their implementation available.