Pluralistic Image Completion with Gaussian Mixture Models (PICMM)

This repository is the official PyTorch implementation of our NeurIPS 2022 paper, Pluralistic Image Completion with Gaussian Mixture Models.

Xiaobo Xia¹*, Wenhao Yang²*, Jie Ren³, Yewen Li⁴, Yibing Zhan⁵, Bo Han⁶, Tongliang Liu¹
¹The University of Sydney, ²Nanjing University, ³The University of Edinburgh, ⁴Nanyang Technological University, ⁵JD Explore Academy, ⁶Hong Kong Baptist University
* Equal contributions

Prerequisites

  • Python >= 3.7
  • NVIDIA GPU with CUDA and cuDNN

Install the required packages with:

pip install -r requirements.txt
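
If you prefer an isolated setup, you can create a virtual environment before installing the requirements; the environment name below is arbitrary:

```bash
python -m venv picmm-env        # create an isolated environment (any name works)
source picmm-env/bin/activate   # activate it (Windows: picmm-env\Scripts\activate)
```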

Pipeline

Training

python train.py --name [exp_name] \
                --k [number_of_distributions] \
                --img_file [training_image_path]
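
For instance, a training run with 5 Gaussian components could be launched as follows; the experiment name and image path are placeholders to be replaced with your own settings:

```bash
python train.py --name picmm_example \
                --k 5 \
                --img_file ./datasets/train_images
```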

Notes on the training process:

  • Our code supports

Inference

python test.py --name [exp_name] \
               --k [number_of_distributions] \
               --which_iter [which_iterations_to_load] \
               --img_file [testing_image_path] \
               --sample_num [number_of_diverse_results_to_sample]
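
For example, to sample 10 diverse completions per image from a checkpoint of the run above (the iteration number and paths are placeholders):

```bash
python test.py --name picmm_example \
               --k 5 \
               --which_iter 100000 \
               --img_file ./datasets/test_images \
               --sample_num 10
```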

Notes on inference:

  • --sample_num: the number of diverse completion results to sample for each input image.

Citation

If you find our work useful for your research, please consider citing the following paper :)

@inproceedings{xia2022pluralistic,
  title={Pluralistic Image Completion with Gaussian Mixture Models},
  author={Xia, Xiaobo and Yang, Wenhao and Ren, Jie and Li, Yewen and Zhan, Yibing and Han, Bo and Liu, Tongliang},
  booktitle={NeurIPS},
  year={2022}
}

Acknowledgments

The authors would like to give special thanks to Mingrui Zhu (Xidian University), Zihan Ding (Princeton University), and Chenlai Qian (Southeast University) for helpful discussions and comments.

Contact

This repo is currently maintained by Xiaobo Xia and Wenhao Yang and is intended for academic research use only. If you have any problems with the implementation of our code, feel free to contact [email protected]. Discussions and questions about our paper are welcome via [email protected].