
StructureFlow

Code for our paper "StructureFlow: Image Inpainting via Structure-aware Appearance Flow" (ICCV 2019)

Introduction

We propose a two-stage image inpainting network that splits the task into two parts: structure reconstruction and texture generation. In the first stage, edge-preserved smooth images are employed to train a structure reconstructor that completes the missing structures of the inputs. In the second stage, based on the reconstructed structures, a texture generator using appearance flow is designed to yield image details.

(From left to right) Input corrupted images, reconstructed structure images, visualizations of the appearance flow fields, and final output images. To visualize the appearance flow fields, we plot the sample points of some typical missing regions; the arrows show the direction of the appearance flow.
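At inference time the two stages are simply chained. The following is a minimal sketch of that pipeline, assuming a mask convention of 1 for missing pixels; StructureReconstructor and TextureGenerator are illustrative stand-ins, not the repository's actual modules:

    import torch
    import torch.nn as nn

    # Stand-in modules; the real generators in this repository are far more elaborate.
    class StructureReconstructor(nn.Module):  # stage 1 (assumed interface)
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(4, 3, 3, padding=1)  # smooth image (3) + mask (1) -> structure (3)
        def forward(self, x):
            return self.net(x)

    class TextureGenerator(nn.Module):  # stage 2 (assumed interface)
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(7, 3, 3, padding=1)  # image (3) + structure (3) + mask (1) -> output (3)
        def forward(self, x):
            return self.net(x)

    def inpaint(image, smooth, mask, structure_net, texture_net):
        # Stage 1: complete the missing structures of the corrupted smooth image.
        structure = structure_net(torch.cat([smooth * (1 - mask), mask], dim=1))
        # Stage 2: generate texture guided by the reconstructed structure; this is
        # where the appearance flow samples features from the known regions.
        return texture_net(torch.cat([image * (1 - mask), structure, mask], dim=1))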

Requirements

  1. PyTorch >= 1.0
  2. Python 3
  3. NVIDIA GPU + CUDA 9.0
  4. Tensorboard
  5. MATLAB

Installation

  1. Clone this repository

    git clone https://github.com/RenYurui/StructureFlow
  2. Build Gaussian Sampling CUDA package

    cd ./StructureFlow/resample2d_package
    python setup.py install --user
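
Building the package requires the CUDA toolkit and a CUDA-enabled PyTorch build; before compiling, you can verify both with:

    python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"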

Running

1. Image Preparation

We train our model on three public datasets: Places2, CelebA, and Paris StreetView. We use the irregular mask dataset provided by PConv. You can download these datasets from their project websites.

  1. Places2
  2. CelebA
  3. Paris Street-View
  4. Irregular Masks

After downloading the datasets, the edge-preserved smooth images can be obtained with the RTV smoothing method. Run the generation script scripts/matlab/generate_structre_images.m in MATLAB. For example, to generate smooth images for Places2, run the following code:

generate_structure_images("path to Places2 dataset root", "path to output folder");
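
If you prefer to avoid the MATLAB desktop, a batch-mode invocation along the following lines should also work (run it from scripts/matlab so the function is on the MATLAB path; the paths are placeholders):

    matlab -nodisplay -nosplash -r "generate_structure_images('path to Places2 dataset root', 'path to output folder'); exit"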

Finally, you can generate the image lists for training and testing using the script scripts/flist.py.
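
For reference, an flist generator typically just walks the image folder and writes one path per line; the following is an illustrative stand-in, not the repository's scripts/flist.py:

    import argparse
    import os

    parser = argparse.ArgumentParser()
    parser.add_argument('--path', type=str, required=True, help='root folder of the images')
    parser.add_argument('--output', type=str, required=True, help='output .flist file')
    args = parser.parse_args()

    # Collect image paths recursively and write them one per line.
    exts = {'.jpg', '.jpeg', '.png'}
    images = sorted(
        os.path.join(root, name)
        for root, _, files in os.walk(args.path)
        for name in files
        if os.path.splitext(name)[1].lower() in exts
    )
    with open(args.output, 'w') as f:
        f.write('\n'.join(images))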

2. Training

To train our model, modify the model config file model_config.yaml. You may need to change the dataset paths, the network parameters, etc. Then run the following code:

python train.py \
--name=[the name of your experiment] \
--path=[path to save the results]
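
For instance, with illustrative values (an experiment named places whose results go under ./results):

    python train.py --name=places --path=./results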

3. Testing

To generate results for your inputs, use test.py. Please run the following code:

python test.py \
--name=[the name of your experiment] \
--path=[path of your experiments] \
--input=[input images] \
--mask=[mask images] \
--structure=[structure images] \
--output=[path to save the output images] \
--model=[which model to be tested]

To evaluate the model performance over a dataset, you can use the provided script ./scripts/metrics.py. This script reports the PSNR, SSIM, and Fréchet Inception Distance (FID) of the results.

python ./scripts/metrics.py \
--input_path=[path to ground-truth images] \
--output_path=[path to model outputs] \
--fid_real_path=[path to the real images using to calculate fid]
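
If you only need PSNR and SSIM, they can also be computed directly with scikit-image; the following is a minimal sketch, not the repository's metrics.py (FID additionally requires an Inception-based implementation):

    import numpy as np
    from PIL import Image
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def compare(gt_path, out_path):
        # Load both images as uint8 RGB arrays of the same size.
        gt = np.array(Image.open(gt_path).convert('RGB'))
        out = np.array(Image.open(out_path).convert('RGB'))
        psnr = peak_signal_noise_ratio(gt, out, data_range=255)
        # channel_axis requires scikit-image >= 0.19 (older versions: multichannel=True).
        ssim = structural_similarity(gt, out, data_range=255, channel_axis=2)
        return psnr, ssim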

Pre-trained weights can be downloaded for Places2, CelebA, and Paris StreetView.

Download the checkpoints and save them to './path_of_your_experiments/name_of_your_experiment/checkpoints'.

For example, you can download the Places2 checkpoints, save them to './results/places/checkpoints', and run the following code:

python test.py \
--name=places \
--path=results \
--input=./example/places/1.jpg \
--mask=./example/places/1_mask.png \
--structure=./example/places/1_tsmooth.png \
--output=./result_images \
--model=3

Citation

If you find this code helpful for your research, please cite our paper:

@inproceedings{ren2019structureflow,
  author    = {Ren, Yurui and Yu, Xiaoming and Zhang, Ruonan and Li, Thomas H. and Liu, Shan and Li, Ge},
  title     = {StructureFlow: Image Inpainting via Structure-aware Appearance Flow},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  year      = {2019}
}

Acknowledgements

We built our code based on Edge-Connect. Parts of the code were derived from FlowNet2. Please consider citing their papers.
