Official PyTorch implementation of the paper "Domain-Agnostic Crowd Counting via Uncertainty-Guided Style Diversity Augmentation" accepted at ACM Multimedia 2024.
We provide the pre-trained models for download via Google Drive:
The following environment setup was used to ensure reproducibility:
- Python 3.8
- CUDA Toolkit 11.3.1
- PyTorch 1.11.0
- NumPy 1.23.0
- Matplotlib 3.6.2
- Pandas 2.0.3
- Pillow 9.4.0
Install these dependencies with the following command:
pip install -r requirements.txt
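For reference, a requirements file pinned to the versions listed above would look roughly like the sketch below. This is an illustration, not necessarily the exact contents of the repository's requirements.txt; PyTorch built against CUDA 11.3 is usually installed from the official PyTorch index rather than plain PyPI.

```
# Hypothetical pinned requirements matching the versions listed above;
# the repository's actual requirements.txt may differ.
numpy==1.23.0
matplotlib==3.6.2
pandas==2.0.3
Pillow==9.4.0
# PyTorch 1.11.0 with CUDA 11.3 is typically installed separately, e.g.:
# pip install torch==1.11.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
```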
We provide code to test our models. The provided models SHHA_parameter.pth and SHHB_parameter.pth were trained on the ShanghaiTech Part A (SHHA) and Part B (SHHB) datasets, respectively.
To visualize model performance on sample images, run the following command:
python test_vis_single.py
Visualizations for selected results are included in the images folder. You can modify the script to test on other images.
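For orientation, the following is a minimal sketch of what single-image inference with a density-map crowd counter looks like in PyTorch. The stand-in model, file names, and preprocessing here are hypothetical placeholders, not the definitions used in this repository; test_vis_single.py contains the real pipeline.

```python
# Minimal sketch of single-image density-map inference.
# The single-conv "model" below is only a stand-in so the sketch runs end to end;
# replace it with the repository's network and load SHHA_parameter.pth / SHHB_parameter.pth.
import torch
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1).to(device)  # placeholder network
# model.load_state_dict(torch.load("SHHA_parameter.pth", map_location=device))
model.eval()

img = Image.open("images/example.jpg").convert("RGB")   # hypothetical sample image
x = torch.from_numpy(np.array(img, dtype=np.float32) / 255.0)
x = x.permute(2, 0, 1).unsqueeze(0).to(device)           # 1 x 3 x H x W

with torch.no_grad():
    density = model(x)                                    # predicted density map
count = density.sum().item()                              # estimated crowd count

plt.imshow(density.squeeze().cpu().numpy(), cmap="jet")
plt.title(f"Predicted count: {count:.1f}")
plt.savefig("images/example_density.png")
```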
We also support testing on public or custom datasets (a minimal loading sketch for this layout follows the tree below). The dataset should be organized in the following structure:
```
└── datasets
    └── dataset_name
        └── test
            ├── den
            │   ├── 1.csv
            │   ├── 2.csv
            │   └── ...
            └── img
                ├── 1.jpg
                ├── 2.jpg
                └── ...
```
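As a rough illustration of how this layout pairs images with ground truth, the sketch below walks test/img and test/den and sums each density CSV to obtain the ground-truth count. It is a guess at the convention implied by the structure above (one CSV density map per image, matched by filename), not the repository's actual data loader.

```python
# Sketch of pairing images with density CSVs under the layout above.
# Assumes each den/N.csv is a density map whose values sum to the true count
# for img/N.jpg; this mirrors the directory structure, not the actual loader code.
import os
import pandas as pd
from PIL import Image

root = "datasets/dataset_name/test"
for name in sorted(os.listdir(os.path.join(root, "img"))):
    stem = os.path.splitext(name)[0]
    img = Image.open(os.path.join(root, "img", name)).convert("RGB")
    den = pd.read_csv(os.path.join(root, "den", stem + ".csv"), header=None)
    gt_count = den.values.sum()
    print(f"{name}: size={img.size}, ground-truth count={gt_count:.1f}")
```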
Once your dataset is properly structured, you can run the following command to test:
python test.py
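Crowd-counting results are typically reported as MAE and RMSE between predicted and ground-truth per-image counts. The snippet below shows that computation in isolation with made-up numbers; test.py performs the actual evaluation over a structured dataset.

```python
# Standard crowd-counting metrics: mean absolute error and root mean squared
# error over per-image counts. The arrays below are hypothetical placeholders
# for the predictions and ground truth produced during testing.
import numpy as np

pred_counts = np.array([102.3, 48.7, 310.5])   # hypothetical predicted counts
gt_counts = np.array([98.0, 52.0, 305.0])      # hypothetical ground-truth counts

mae = np.abs(pred_counts - gt_counts).mean()
rmse = np.sqrt(((pred_counts - gt_counts) ** 2).mean())
print(f"MAE: {mae:.2f}, RMSE: {rmse:.2f}")
```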
For detailed explanations of the network, please refer to our paper.
If you use this code in your research, please cite our paper:
```
@inproceedings{ding2024domain,
  title={Domain-Agnostic Crowd Counting via Uncertainty-Guided Style Diversity Augmentation},
  author={Ding, Guanchen and Liu, Lingbo and Chen, Zhenzhong and Chen, Chang Wen},
  booktitle={ACM Multimedia 2024},
  year={2024}
}
```
This codebase builds on and acknowledges the following repositories:
We thank the authors of these repositories for their contributions to the community.
If you have any questions or issues, please feel free to reach out to me at: [email protected]