The 'leon' branch contains my modifications to Justin Johnson's fast-neural-style code (https://github.com/jcjohnson/fast-neural-style.git) in order to reproduce the control results for Fast Neural Style Transfer (Fig. 6) from the paper "Controlling Perceptual Factors in Neural Style Transfer" (https://arxiv.org/abs/1611.07865).
To run the examples you first need to download the pre-trained models:
sh models/download_leon_models.sh
Now you should be able to run the RunNetworkExample.ipynb notebook that reproduces the luminance and spatial guidance figure from the paper!
To train the networks yourself you additionally need to download the VGG-16 network (sh models/download_vgg16.sh) and make the training data from the MS-COCO dataset using the script scripts/make_style_dataset.py.
For luminance networks you need to create luminance training data by passing the --lum flag to the above script.
The TrainNetworkExample.ipynb notebook expects the training data at fast-neural-style/data/ms-coco-256.h5 for colour training and at fast-neural-style/data/ms-coco-256-lum.h5 for luminance training.
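If it helps, here is a rough sketch of the dataset-creation step; the --lum flag is the one mentioned above, but the other flag names and the MS-COCO directory layout are assumptions, so check scripts/make_style_dataset.py for the actual arguments:

```bash
# Sketch only: flag names other than --lum are assumptions.
# Colour training data (expected at fast-neural-style/data/ms-coco-256.h5):
python scripts/make_style_dataset.py \
  --train_dir data/coco/train2014 \
  --val_dir data/coco/val2014 \
  --output_file data/ms-coco-256.h5

# Luminance training data (expected at fast-neural-style/data/ms-coco-256-lum.h5):
python scripts/make_style_dataset.py \
  --train_dir data/coco/train2014 \
  --val_dir data/coco/val2014 \
  --output_file data/ms-coco-256-lum.h5 \
  --lum
```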
An easy way to get all prerequisites to run the code is to use our docker container. A description of how to use it is given at https://github.com/leongatys/NeuralImageSynthesis/blob/master/README.md#prerequisites .
This is the code for the paper
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
Justin Johnson,
Alexandre Alahi,
Li Fei-Fei
To appear at ECCV 2016
The paper builds on A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge by training feedforward neural networks that apply artistic styles to images. After training, our feedforward networks can stylize images hundreds of times faster than the optimization-based method presented by Gatys et al.
This repository also includes an implementation of instance normalization as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization by Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. This simple trick significantly improves the quality of feedforward style transfer models.
Stylizing this image of the Stanford campus at a resolution of 1200x630 takes 50 milliseconds on a Pascal Titan X:
In this repository we provide:
- The style transfer models used in the paper
- Additional models using instance normalization
- Code for running models on new images
- A demo that runs models in real-time off a webcam
- Code for training new feedforward style transfer models
- An implementation of optimization-based style transfer method described by Gatys et al.
If you find this code useful for your research, please cite
@inproceedings{Johnson2016Perceptual,
title={Perceptual losses for real-time style transfer and super-resolution},
author={Johnson, Justin and Alahi, Alexandre and Fei-Fei, Li},
booktitle={European Conference on Computer Vision},
year={2016}
}
All code is implemented in Torch.
First install Torch, then update / install the following packages:
luarocks install torch
luarocks install nn
luarocks install image
luarocks install lua-cjson
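To confirm the base packages are installed, you can require them from a small throwaway script (the file name below is just an example) and run it with th:

```lua
-- check_install.lua (name is arbitrary): verify the basic Torch packages load.
require 'torch'
require 'nn'
require 'image'
require 'cjson'   -- provided by the lua-cjson rock
print('torch, nn, image and cjson loaded successfully')
```

Run it with th check_install.lua; if any require fails, reinstall the corresponding rock.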
If you have an NVIDIA GPU, you can accelerate all operations with CUDA.
First install CUDA, then update / install the following packages:
luarocks install cutorch
luarocks install cunn
When using CUDA, you can use cuDNN to accelerate convolutions.
First download cuDNN and copy the libraries to /usr/local/cuda/lib64/. Then install the Torch bindings for cuDNN:
luarocks install cudnn
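As a quick sanity check of the GPU stack, you can load the CUDA packages and move a tensor to the GPU (again, the file name is only an example):

```lua
-- check_gpu.lua (name is arbitrary): confirm cutorch, cunn and cudnn load.
require 'cutorch'
require 'cunn'
require 'cudnn'
print('CUDA devices found: ' .. cutorch.getDeviceCount())
local x = torch.randn(4, 4):cuda()      -- copies the tensor to GPU memory
print('tensor type: ' .. torch.type(x)) -- should print torch.CudaTensor
```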
Download all pretrained style transfer models by running the script
bash models/download_style_transfer_models.sh
This will download ten model files (~200MB) to the folder models/. The style transfer models we used in the paper will be located in the folder models/eccv16.
Here are some example results where we use these models to stylize this image of the Chicago skyline at an image size of 512:
As discussed in the paper Instance Normalization: The Missing Ingredient for Fast Stylization by Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky, replacing batch normalization with instance normalization significantly improves the quality of feedforward style transfer models.
We have trained several models with instance normalization; after downloading the pretrained models they will be in the folder models/instance_norm.
These models use the same architecture as those used in our paper, except with half the number of filters per layer and with instance normalization instead of batch normalization. Using narrower layers makes the models smaller and faster without sacrificing model quality.
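To make the difference concrete, here is a minimal sketch of what instance normalization computes: each (sample, channel) plane of a 4D batch is normalized with its own mean and variance, instead of statistics pooled over the whole batch. This is only an illustration of the math, not the module used by the models in this repository:

```lua
-- Illustrative only: per-sample, per-channel normalization of an N x C x H x W batch.
require 'torch'

local function instance_norm(x, eps)
  eps = eps or 1e-5
  local y = x:clone()
  for n = 1, y:size(1) do          -- loop over samples in the batch
    for c = 1, y:size(2) do        -- loop over feature channels
      local plane = y[n][c]        -- H x W view into y
      local mean = plane:mean()
      local std = math.sqrt(plane:var() + eps)
      plane:add(-mean):div(std)    -- in-place: statistics are per sample and per channel
    end
  end
  return y
end

local out = instance_norm(torch.randn(2, 3, 8, 8))
print(out[1][1]:mean(), out[1][1]:std())  -- each plane is roughly zero mean, unit std
```

Batch normalization would instead pool these statistics over the whole batch, mixing contrast information across images; computing them per image is what improves stylization quality.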
Here are some example outputs from these models, with an image size of 1024:
The script fast_neural_style.lua lets you use a trained model to stylize new images:
th fast_neural_style.lua \
-model models/eccv16/starry_night.t7 \
-input_image images/content/chicago.jpg \
-output_image out.png
You can run the same model on an entire directory of images like this:
th fast_neural_style.lua \
-model models/eccv16/starry_night.t7 \
-input_dir images/content/ \
-output_dir out/
You can control the size of the output images using the -image_size flag. By default this script runs on CPU; to run on GPU, add the -gpu flag specifying the GPU on which to run.
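For example, combining these flags with the model and content image used earlier in this README:

```
th fast_neural_style.lua \
  -model models/eccv16/starry_night.t7 \
  -input_image images/content/chicago.jpg \
  -output_image out.png \
  -image_size 512 \
  -gpu 0
```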
The full set of options for this script is described here.
You can use the script webcam_demo.lua to run one or more models in real time off a webcam stream. To run this demo you need to use qlua instead of th:
qlua webcam_demo.lua -models models/instance_norm/candy.t7 -gpu 0
You can run multiple models at the same time by passing a comma-separated list to the -models flag:
qlua webcam_demo.lua \
-models models/instance_norm/candy.t7,models/instance_norm/udnie.t7 \
-gpu 0
With a Pascal Titan X you can easily run four models in real time at 640x480:
The webcam demo depends on a few extra Lua packages: camera and qtlua. You can install / update these packages by running:
luarocks install camera
luarocks install qtlua
The full set of options for this script is described here.
You can find instructions for training new models here.
The script slow_neural_style.lua is similar to the original neural-style, and uses the optimization-based style transfer method described by Gatys et al.
This script uses the same code for computing losses as the feedforward training script, allowing for fair comparisons between feedforward style transfer networks and optimization-based style transfer.
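A typical invocation might look like the sketch below; the flag names and the style image path are assumptions modeled on the feedforward script, so consult the options documentation for the exact names:

```
th slow_neural_style.lua \
  -content_image images/content/chicago.jpg \
  -style_image images/styles/starry_night.jpg \
  -output_image out.png \
  -gpu 0
```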
Compared to the original neural-style, this script has the following improvements:
- Removes the dependency on protobuf and loadcaffe
- Supports many more CNN architectures, including ResNets
The full set of options for this script is described here.
Free for personal or research use; for commercial use please contact me.