Generative Image Inpainting Based on Wavelet Transform Attention Model

ISCAS 2020 Paper | Poster | Project | BibTeX

Update (Sep. 2021):

  1. The tech report for our new image inpainting system, MuFA-Net, has been released; please check out branch v2.0.0 (a checkout command is sketched below).
  2. WTAM is trained on, and mainly works with, rectangular masks, while MuFA-Net can generate high-quality inpainting results for irregular masks.
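If you have cloned this repository, switching to that branch is a one-liner (assuming the branch exists on your remote under the name given above):

```shell
# Switch the working tree to the MuFA-Net (v2.0.0) branch.
git checkout v2.0.0
```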

Example inpainting results

 
[Figure: example inpainting results. Columns: Input | Ours (U-Net) | Ours (UNet++) | Ground truth]

Network Architecture

Overall Framework

Wavelet Transform Attention Model (WTAM)

Run

  1. Requirements:
    • Install Python 3.
    • Install PyTorch (tested with releases >= 0.4.0).
    • Install the Python libraries visdom and dominate (an install sketch follows this list).
  2. Training:
    • Prepare the training image dataset.
    • Modify base_options.py to set the parameters.
    • Run python train.py.
  3. Testing:
    • Prepare the testing image dataset.
    • Modify test_options.py to set the parameters.
    • Run python test.py.
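For reference, here is a minimal sketch of the whole sequence, assuming a pip-based setup; pick the PyTorch build appropriate for your platform:

```shell
# 1. Requirements (assumes pip and Python 3 on PATH; PyTorch releases >= 0.4.0 were tested).
pip install torch visdom dominate

# 2. Training: edit base_options.py to set parameters first, then run:
python train.py

# 3. Testing: edit test_options.py to set parameters first, then run:
python test.py
```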

Pretrained models

[Paris StreetView] | [CelebA-HQ]

Rename face_center_mask.pth to 30_net_G.pth and put it in the folder ./log/face_center_mask (create the folder if it does not exist).
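The equivalent shell commands, run from the directory containing the downloaded checkpoint:

```shell
# Create the expected log folder and move the checkpoint into place.
mkdir -p ./log/face_center_mask
mv face_center_mask.pth ./log/face_center_mask/30_net_G.pth
```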

```shell
# CelebA-HQ 256x256 input
python test.py --which_model_netG='WTAM' --model='WTAM' --name='face_center_mask' --which_epoch=30 --dataroot='./datasets/test'
```

Note: for models trained with extra irregular masks, make sure to pass --offline_loading_mask=1 --testing_mask_folder='masks'.
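For instance, an irregular-mask test run might look like the following; this is illustrative only and simply combines the flags shown above:

```shell
python test.py --which_model_netG='WTAM' --model='WTAM' --name='face_center_mask' --which_epoch=30 --dataroot='./datasets/test' --offline_loading_mask=1 --testing_mask_folder='masks'
```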

Visdom

To view training results and loss plots, run python -m visdom.server and open http://localhost:8097 in a browser. Checkpoints are saved to ./log by default.

Citing

If you use this code, please consider citing:

```bibtex
@inproceedings{wang2020generative,
  title={Generative Image Inpainting Based on Wavelet Transform Attention Model},
  author={Wang, Chen and Wang, Jin and Zhu, Qing and Yin, Baocai},
  booktitle={ISCAS},
  pages={1--5},
  year={2020},
  organization={IEEE}
}
```

Contacts

Please contact [email protected] or open an issue for any questions or suggestions.

Thanks! (●'◡'●)

Acknowledgments

Thanks to the author of Shift-Net_pytorch for their excellent work.