
neural-style

An implementation of neural style in TensorFlow.

This implementation is a good deal simpler than many of the others out there, thanks to TensorFlow's nice API and automatic differentiation.

TensorFlow doesn't support L-BFGS (which is what the original authors used), so we use Adam. This may require a little bit more hyperparameter tuning to get nice results.

See here for an implementation of fast (feed-forward) neural style in TensorFlow.

Running

python neural_style.py --content <content file> --styles <style file> --output <output file>

Run python neural_style.py --help to see a list of all options.

Use --checkpoint-output and --checkpoint-iterations to save checkpoint images.

Use --iterations to change the number of iterations (default 1000). For a 512×512 pixel content file, 1000 iterations take 2.5 minutes on a GeForce GTX Titan X GPU, or 90 minutes on an Intel Core i7-5930K CPU.
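For example, a run that saves a checkpoint image every 100 iterations over 2000 total iterations might look like the command below. The file names are placeholders, and the exact file name pattern expected by --checkpoint-output is an assumption here; check python neural_style.py --help for the precise format.

python neural_style.py --content content.jpg --styles style.jpg --output output.jpg --iterations 2000 --checkpoint-iterations 100 --checkpoint-output checkpoint_%s.jpg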

Example 1

Running it for 500-2000 iterations seems to produce nice results. With certain images or output sizes, you might need some hyperparameter tuning (especially --content-weight, --style-weight, and --learning-rate).
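As a starting point for such tuning, a hypothetical invocation that sets these three options explicitly might look like the following; the specific values are illustrative, not recommended settings.

python neural_style.py --content content.jpg --styles style.jpg --output output.jpg --content-weight 5 --style-weight 500 --learning-rate 10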

The following example was run for 1000 iterations to produce the result (with default parameters):

[output image]

These were the input images used (me sleeping at a hackathon and Starry Night):

[content image]

[style image]

Example 2

The following example demonstrates style blending, and was run for 1000 iterations to produce the result (with style blend weight parameters 0.8 and 0.2):

[output image]

The content input image was a picture of the Stata Center at MIT:

[content image]

The style input images were Picasso's "Dora Maar" and Starry Night, with the Picasso image having a style blend weight of 0.8 and Starry Night having a style blend weight of 0.2:

[style image] [style image]
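A command roughly equivalent to this example could be invoked as shown below. The file names are placeholders, and the flag name --style-blend-weights is an assumption based on this repository's option naming; run python neural_style.py --help to confirm the exact spelling.

python neural_style.py --content stata.jpg --styles dora-maar.jpg starry-night.jpg --style-blend-weights 0.8 0.2 --output output.jpg --iterations 1000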

Tweaking

The --style-layer-weight-exp command line argument can be used to tweak how "abstract" the style transfer is. Lower values favor style transfer of finer features over coarser features, and higher values do the reverse. The default value is 1.0, which treats all layers equally. Somewhat extreme examples of what you can achieve:

--style-layer-weight-exp 0.2 --style-layer-weight-exp 2.0

(left: 0.2 - finer features style transfer; right: 2.0 - coarser features style transfer)
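The weighting scheme implied by the flag name can be pictured with the short Python sketch below. It is a simplified illustration under the assumption that each style layer's weight is the exponent raised to the layer's depth index and then normalized; the layer names are also assumptions, not a copy of neural_style.py.

# Assumed scheme: weight style layer i by exp**i, then normalize the weights.
STYLE_LAYERS = ('relu1_1', 'relu2_1', 'relu3_1', 'relu4_1', 'relu5_1')  # assumed layer names

def style_layer_weights(exp):
    raw = [exp ** i for i in range(len(STYLE_LAYERS))]
    total = sum(raw)
    return {layer: w / total for layer, w in zip(STYLE_LAYERS, raw)}

# exp < 1.0 emphasizes early layers (finer features); exp > 1.0 emphasizes
# later layers (coarser features); exp == 1.0 weights all layers equally.
print(style_layer_weights(0.2))
print(style_layer_weights(2.0))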

--content-weight-blend specifies the coefficient of the content transfer layers. The default value of 1.0 makes the style transfer try to preserve finer-grained content details; lower values produce a more abstract picture. The value should be in the range [0.0, 1.0].

--content-weight-blend 1.0 --content-weight-blend 0.1

(left: 1.0 - default value; right: 0.1 - more abstract picture)

--pooling allows selecting which pooling layers to use (specify either max or avg). The original VGG topology uses max pooling, but the style transfer paper suggests replacing it with average pooling. The outputs are perceptually different: max pooling in general tends to produce finer-detailed style transfer, but can have trouble at the lower-frequency detail level:

--pooling max --pooling avg

(left: max pooling; right: average pooling)

The --preserve-colors boolean command line argument adds a post-processing step that combines the colors (chroma) from the original image with the luma from the stylized image (in YCbCr color space), producing a color-preserving style transfer:

[stylized image] [color-preserved stylized image]

(left: original stylized image; right: color-preserving style transfer)
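The post-processing described above can be sketched in stand-alone Python with Pillow as follows. This is an illustrative version, not the code used by neural_style.py: the function name and file handling are assumptions, and it assumes the content and stylized images have the same dimensions.

from PIL import Image

def preserve_colors(content_path, stylized_path, output_path):
    # Convert both images to YCbCr: Y carries luma, Cb/Cr carry chroma (color).
    content = Image.open(content_path).convert('YCbCr')
    stylized = Image.open(stylized_path).convert('YCbCr')
    # Keep the luma of the stylized result and the chroma of the original content.
    y, _, _ = stylized.split()
    _, cb, cr = content.split()
    Image.merge('YCbCr', (y, cb, cr)).convert('RGB').save(output_path)

# Hypothetical usage with placeholder file names:
preserve_colors('content.jpg', 'output.jpg', 'output-color-preserved.jpg')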

Requirements

Citation

If you use this implementation in your work, please cite the following:

@misc{athalye2015neuralstyle,
  author = {Anish Athalye},
  title = {Neural Style},
  year = {2015},
  howpublished = {\url{https://github.com/anishathalye/neural-style}},
  note = {commit xxxxxxx}
}

License

Copyright (c) 2015-2017 Anish Athalye. Released under GPLv3. See LICENSE.txt for details.