
Image-to-image-translation-using-cGAN

Implementing: https://arxiv.org/abs/1611.07004

The generator is a U-Net with skip connections and the discriminator is a PatchGAN. The input to the generator is the source image to be translated (in pix2pix there is no explicit latent vector; randomness enters only through dropout). The discriminator is trained on both the original dataset and the images generated by the generator: if the input comes from the original dataset, the discriminator should classify it as real, and if the input comes from the generator, it should classify it as fake.
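The two networks described above can be sketched in Keras roughly as follows. This is a minimal illustration, not the repo's actual code: the layer widths, depths, and a 256x256 input size are assumptions, and the PatchGAN is conditioned on the source image as in pix2pix.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_generator(img_shape=(256, 256, 3)):
    # Encoder: downsample with strided convolutions, keeping each
    # activation so the decoder can reuse it via a skip connection.
    inp = layers.Input(shape=img_shape)
    skips = []
    x = inp
    for filters in (64, 128, 256, 512):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
        skips.append(x)
    # Decoder: upsample and concatenate the matching encoder activation
    # (the U-Net skip connection).
    for filters, skip in zip((256, 128, 64), reversed(skips[:-1])):
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same")(x)
        x = layers.ReLU()(x)
        x = layers.Concatenate()([x, skip])
    # tanh keeps outputs in [-1, 1], matching images scaled to that range.
    out = layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                 activation="tanh")(x)
    return Model(inp, out, name="unet_generator")

def build_discriminator(img_shape=(256, 256, 3)):
    # PatchGAN: sees the (source, target) pair and outputs a grid of
    # real/fake scores, one per receptive-field patch.
    src = layers.Input(shape=img_shape)
    tgt = layers.Input(shape=img_shape)
    x = layers.Concatenate()([src, tgt])
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    patch_out = layers.Conv2D(1, 4, padding="same",
                              activation="sigmoid")(x)
    return Model([src, tgt], patch_out, name="patchgan_discriminator")
```

With these assumed shapes the discriminator emits a 32x32 grid of patch scores rather than a single real/fake scalar, which is the defining property of a PatchGAN.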

Objective of the cGAN is:
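From the referenced paper, the objective combines the adversarial cGAN loss with an L1 reconstruction term, where x is the source image, y the target, z noise, and λ weights the L1 term (set to 100 in the paper):

```latex
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big]
                         + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]

\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z) \rVert_1\big]

G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G)
```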

Prerequisite:

  • Keras
  • TensorFlow 2.0
  • NumPy
  • Matplotlib

Dataset:

This model can be used on different datasets. A few of the available datasets can be found at https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/

I have used the facades dataset in this case.
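The pix2pix datasets linked above store each training pair as a single image with the two domains side by side, so loading reduces to splitting each image down the middle. A minimal sketch (the helper name is mine; which half is the source and which the target depends on the dataset and translation direction):

```python
import numpy as np

def split_pair(combined):
    # `combined` is one dataset image holding both domains side by side,
    # e.g. a 256x512x3 array for the facades dataset.
    h, w, _ = combined.shape
    half = w // 2
    left, right = combined[:, :half], combined[:, half:]
    # Scale uint8 pixels to [-1, 1] to match the generator's tanh output.
    scale = lambda img: img.astype("float32") / 127.5 - 1.0
    return scale(left), scale(right)
```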

Training:

The cGAN model is trained for 100 epochs with a batch size of 1, and the generated images and model weights are saved after every 10 epochs. Once training is done, these generated images are compared with the target images, and the checkpoint that produces the most realistic images is selected as the final model.
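A single training step of this scheme can be sketched as below. The tiny stand-in models exist only so the step runs on its own; the real repo would use the full U-Net and PatchGAN, and the 2e-4 Adam learning rate and L1 weight of 100 are the paper's defaults, not necessarily this repo's.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model, optimizers, losses

# Illustrative stand-in models (shapes and widths are arbitrary).
def tiny_generator(shape=(32, 32, 3)):
    inp = layers.Input(shape=shape)
    out = layers.Conv2D(3, 3, padding="same", activation="tanh")(inp)
    return Model(inp, out)

def tiny_discriminator(shape=(32, 32, 3)):
    src, tgt = layers.Input(shape=shape), layers.Input(shape=shape)
    x = layers.Concatenate()([src, tgt])
    x = layers.Conv2D(8, 4, strides=2, padding="same")(x)
    out = layers.Conv2D(1, 4, padding="same", activation="sigmoid")(x)
    return Model([src, tgt], out)

g, d = tiny_generator(), tiny_discriminator()
bce = losses.BinaryCrossentropy()
g_opt, d_opt = optimizers.Adam(2e-4), optimizers.Adam(2e-4)

def train_step(source, target, l1_weight=100.0):
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        fake = g(source, training=True)
        real_score = d([source, target], training=True)
        fake_score = d([source, fake], training=True)
        # Discriminator: real pairs -> 1, generated pairs -> 0.
        d_loss = bce(tf.ones_like(real_score), real_score) + \
                 bce(tf.zeros_like(fake_score), fake_score)
        # Generator: fool the discriminator, plus an L1 term pulling the
        # output toward the target image.
        g_loss = bce(tf.ones_like(fake_score), fake_score) + \
                 l1_weight * tf.reduce_mean(tf.abs(target - fake))
    d_opt.apply_gradients(zip(dt.gradient(d_loss, d.trainable_variables),
                              d.trainable_variables))
    g_opt.apply_gradients(zip(gt.gradient(g_loss, g.trainable_variables),
                              g.trainable_variables))
    return float(d_loss), float(g_loss)
```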

In this case, I achieved the best results after 60 epochs; those results are shown below.

Results:

The first row shows the source images, the middle row the images produced by the generator, and the last row the target images. Refer to the following article for a detailed explanation of GANs and conditional GANs: https://medium.com/@keertikkulkarni12/image-to-image-translation-gan-and-conditional-gan-f995901de39
