ISCAS 2020 Paper | Poster | Project | BibTeX
Update (Sep 2021):
- The tech report of our new image inpainting system MuFA-Net has been released; please check out branch v2.0.0.
- WTAM is trained on and mainly works with rectangular masks, while MuFA-Net can generate high-quality inpainting results for irregular masks.
| Input | Ours (U-Net) | Ours (UNet++) | Ground-truth |
| --- | --- | --- | --- |
Overall Framework

Wavelet Transform Attention Model (WTAM)
- Requirements: Python and PyTorch; visdom is used for training visualization.
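  A minimal environment setup, assuming PyTorch and visdom are the only third-party dependencies (exact versions are not specified in this README):

  ```bash
  pip install torch visdom
  ```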
- Training:
  - Prepare the training image datasets.
  - Modify `base_options.py` to set the parameters.
  - Run `python train.py` (a sketch of a full invocation is given below).
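  For reference, a full training invocation might look like the sketch below. The flag names are copied from the testing command later in this README; whether `train.py` accepts the same flags, and the `./datasets/train` path, are assumptions.

  ```bash
  # Hypothetical training run; flags mirror those of the testing command below
  python train.py --which_model_netG='WTAM' --model='WTAM' --name='face_center_mask' --dataroot='./datasets/train'
  ```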
- Testing:
  - Prepare the testing image datasets.
  - Modify `test_options.py` to set the parameters.
  - Run `python test.py` (a complete example command for CelebA-HQ is given below).
Pretrained models: [Paris StreetView] | [CelebA-HQ]
Rename `face_center_mask.pth` to `30_net_G.pth` and put it in the folder `./log/face_center_mask` (create the folder if it does not exist).
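For example, on Linux/macOS, assuming the checkpoint was downloaded to the current directory:

```bash
# Create the checkpoint folder and move the renamed weights into it
mkdir -p ./log/face_center_mask
mv face_center_mask.pth ./log/face_center_mask/30_net_G.pth
```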
```bash
# CelebA-HQ 256x256 input
python test.py --which_model_netG='WTAM' --model='WTAM' --name='face_center_mask' --which_epoch=30 --dataroot='./datasets/test'
```
Note: for models trained with extra irregular masks, make sure to set `--offline_loading_mask=1 --testing_mask_folder='masks'`.
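Combining this note with the CelebA-HQ example above gives the following sketch; the combined invocation is an assumption rather than a command from the authors, and it presumes the irregular masks are stored in `./masks`:

```bash
# Hypothetical: testing with irregular masks loaded offline from ./masks
python test.py --which_model_netG='WTAM' --model='WTAM' --name='face_center_mask' --which_epoch=30 \
  --dataroot='./datasets/test' --offline_loading_mask=1 --testing_mask_folder='masks'
```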
To view training results and loss plots, run `python -m visdom.server` and click the URL http://localhost:8097. The checkpoints will be saved in `./log` by default.
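In practice this means running two processes (standard visdom usage; port 8097 is visdom's default):

```bash
# Terminal 1: start the visdom server, which serves plots at http://localhost:8097
python -m visdom.server

# Terminal 2: launch training; loss curves appear in the visdom dashboard
python train.py
```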
If you use this code, please consider citing:
```
@inproceedings{wang2020generative,
  title={Generative Image Inpainting Based on Wavelet Transform Attention Model},
  author={Wang, Chen and Wang, Jin and Zhu, Qing and Yin, Baocai},
  booktitle={ISCAS},
  pages={1--5},
  year={2020},
  organization={IEEE}
}
```
Please contact [email protected] or open an issue for any questions or suggestions.
Thanks! (●'◡'●)
Thanks to the author of Shift-Net_pytorch for their excellent work.