The aim of this project is to explore methods of generating unique, realistic or artistic textures without needing a large source dataset. I have implemented two methods for generating textures with convolutional neural networks:
- Training a GAN to generate textures using two different models: DCGAN and PSGAN. (The code is designed to be easily extended to support additional generator & discriminator models.)
- Optimising directly on image pixels, as done by Gatys et al. (a minimal sketch of this idea follows this list).
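The Gatys et al. approach treats the pixels of the output image as the learnable parameters and optimises them so that the Gram matrices of their VGG feature maps match those of the source texture. The sketch below illustrates the idea in PyTorch; it is not this project's exact implementation, and the file names, layer choices and optimiser settings are only examples.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained VGG-19 feature extractor (frozen, evaluation mode).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# ImageNet normalisation expected by VGG.
mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

# Indices of the ReLU layers whose Gram matrices define the style loss
# (an example choice, roughly relu1_1 ... relu5_1).
style_layers = {1, 6, 11, 20, 29}

def gram_matrices(img):
    """Return the Gram matrix of each chosen VGG layer for a (1, 3, H, W) image."""
    grams, x = [], (img - mean) / std
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in style_layers:
            _, c, h, w = x.shape
            f = x.reshape(c, h * w)
            grams.append(f @ f.t() / (c * h * w))
    return grams

# Gram matrices of the source texture are the optimisation targets.
source = TF.to_tensor(Image.open("source.jpg").convert("RGB")).unsqueeze(0).to(device)
with torch.no_grad():
    targets = gram_matrices(source)

# Start from random noise and optimise the pixels directly.
result = torch.rand_like(source, requires_grad=True)
optimiser = torch.optim.Adam([result], lr=0.05)
for step in range(500):
    optimiser.zero_grad()
    loss = sum(torch.nn.functional.mse_loss(g, t)
               for g, t in zip(gram_matrices(result), targets))
    loss.backward()
    optimiser.step()
    with torch.no_grad():
        result.clamp_(0, 1)  # keep pixel values in a valid range

TF.to_pil_image(result.detach().squeeze(0).cpu()).save("generated.png")
```

In this formulation there is no generator network to train: only the pixels of the output image are updated, which is why the method needs nothing more than a single source image.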
Prerequisites:
- Python 3
- pip
- PyTorch
Clone and unzip this repository, then run `pip install TextureGeneration-master`.
You must install PyTorch (see the PyTorch website) before installing this application.
If you want to change where the application stores models and textures, edit the file `config/variables.py` prior to installation.
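For reference, a hypothetical sketch of what such a configuration module might contain is shown below; the actual variable names and default paths in `config/variables.py` may differ.

```python
# config/variables.py -- illustrative sketch only; the real variable names may differ.
import os

# Directory where trained GAN models are saved.
MODEL_DIR = os.path.expanduser("~/texture_generation/models")

# Directory where generated textures and animated GIFs are written.
TEXTURE_DIR = os.path.expanduser("~/texture_generation/textures")
```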
To run the test suite before getting started, run `python setup.py test`.
Command | Result |
---|---|
`texture_gan train -h` | Train a new GAN model |
`texture_gan demo -h` | Generate an image using a trained model |
`texture_gan animate -h` | Generate an animated GIF using a trained model |
`texture_gatys -h` | Generate an image from the style of a source image |
Below is an example of the kind of results this application can achieve. Both of the generated images are fully tileable.
Method | Source Image | Result |
---|---|---|
`texture_gatys` | | |
`texture_gan` | | |
I have set up a website here to compare the generation process of the two approaches and to showcase some generated images and GIFs.