Neural Style Transfer done from the CLI using a VGG backbone and presented as an MP4.
Weights can be downloaded from here. The downloaded file (renamed to vgg_conv_weights.pth) should be placed in ./weights/, where it will be ignored when pushing, as specified in ./.gitignore. Update: alternatively, if the ./weights/ directory is empty, ./neuralart.py will automatically download publicly available VGG19 weights for the user.
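To verify the setup, a quick sanity check along these lines (the file and directory names are the ones described above; the helper itself is hypothetical and not part of the repo) reports whether the weights file is in place:

```shell
# Report whether the VGG weights file is where neuralart.py expects it
check_weights() {
    if [ -f weights/vgg_conv_weights.pth ]; then
        echo "weights present"
    else
        echo "weights missing: neuralart.py will fetch VGG19 weights on first run"
    fi
}

check_weights
```

Either outcome is fine: with the directory empty, the script simply falls back to the automatic download.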
More in-depth information about Neural Style Transfer (NST) can be found in this great paper. Make sure to check Requirements and Usage, as well as the Video Gallery.
Style Transfer hasn't changed drastically in terms of actual results over the past few years, and I personally find a certain beauty in providing a style image and a content image rather than a carefully curated prompt with a dozen switches. Consider this repo a quick and simple, just-works solution that runs effectively on both CPU and GPU.
I developed this tool as a means to obtain fancy images and visuals for me and my friends. It somehow grew into something bigger that is actually usable, so much so that I got to integrate it into a workflow in conjunction with Stable Diffusion (see also here), for which I want to develop a plugin in the near future.
Clone the repository:
git clone https://github.com/xAlpharax/neural-art
# or via ssh
git clone git@github.com:xAlpharax/neural-art.git
Create a virtual environment to separate the required packages from system-wide packages:
virtualenv path/to/neural-art
source path/to/neural-art/bin/activate
(!) When you're finished with the environment:
# deactivate
All the required packages are listed in ./requirements.txt, as per Python etiquette:
pip install -r requirements.txt
The main script sits comfortably in ./stylize.sh; run it from the project's root directory:
./stylize.sh path/to/style_image path/to/content_image
A helper script is also available to run ./stylize.sh for each distinct pair of images present in the ./Images/ directory:
./all.sh
Moreover, ./all.sh is aware of the already rendered mp4 files and will skip stylizing the combinations that are already present. In contrast, ./stylize.sh overwrites existing images and videos.
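The pairing and skip behaviour can be pictured with a dry-run sketch along these lines (illustrative only: the loop merely echoes what would be run, and the real ./all.sh may be implemented differently):

```shell
# Dry-run sketch of all.sh-style pairing with skip logic.
# For every distinct (style, content) pair in Images/, either report
# that the output already exists or echo the stylize command to run.
render_pairs() {
    for style in Images/*; do
        for content in Images/*; do
            [ "$style" = "$content" ] && continue   # a picture in its own style is skipped
            out="$(basename "${content%.*}")_in_$(basename "${style%.*}").mp4"
            if [ -f "$out" ]; then
                echo "skip $out"                    # combination already rendered
            else
                echo "./stylize.sh $style $content" # combination still to render
            fi
        done
    done
}
```

Calling render_pairs from the repo root lists each pending combination once and skips everything already rendered.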
The stylization process outputs a video in the format ./content_in_style.mp4, with content and style being the 2nd and 1st command-line arguments of the ./stylize.sh script, respectively.
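In other words, the output name can be derived from the two arguments like this (a hypothetical helper for illustration, not part of the repo):

```shell
# Derive the output video name from the stylize.sh arguments:
#   $1 = style image, $2 = content image
output_name() {
    echo "$(basename "${2%.*}")_in_$(basename "${1%.*}").mp4"
}

output_name Images/Monet.jpg Images/Starry_Night.jpg
# -> Starry_Night_in_Monet.mp4
```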
If, at any point, you need the individual frames that comprise the generated ./content_in_style.mp4, check the ./Output/ directory for .png images of the frame at each iteration.
The ./neuralart.py code that sits at the heart of this project generates raw numpy array data to ./images.npy, which in turn is processed by ./renderer.py to output frames as .png images. These intermediary outputs are stored temporarily and are removed each time the ./stylize.sh script is run.
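Because the intermediates are wiped on the next run, copy anything you want to keep first (the saved_frames/ destination below is just an example name):

```shell
# Preserve the current frames before the next stylize.sh run removes them
mkdir -p saved_frames
if [ -d Output ]; then
    cp -r Output/. saved_frames/
fi
```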
All the stylize combinations from the ./Images/ directory have been saved to this archive. Check the video gallery below to browse some of the best-looking results:
Starry Night in various other styles (8)
Starry_Night_in_Monet.mp4
Starry_Night_in_Azzalee.mp4
Starry_Night_in_Colorful.mp4
Starry_Night_in_Jitter_Doll.mp4
Starry_Night_in_Shade.mp4
Starry_Night_in_Abstract.mp4
Starry_Night_in_Gift.mp4
Starry_Night_in_bunnies.mp4
Monet in various other styles (7)
Monet_in_Starry_Night.mp4
Monet_in_Azzalee.mp4
Monet_in_Colorful.mp4
Monet_in_Jitter_Doll.mp4
Monet_in_Shade.mp4
Monet_in_Abstract.mp4
Monet_in_bunnies.mp4
Colorful in various other styles (6)
Colorful_in_Starry_Night.mp4
Colorful_in_Monet.mp4
Colorful_in_Azzalee.mp4
Colorful_in_Jitter_Doll.mp4
Colorful_in_Shade.mp4
Colorful_in_bunnies.mp4
Azzalee in various other styles (5)
Azzalee_in_Starry_Night.mp4
Azzalee_in_Monet.mp4
Azzalee_in_Jitter_Doll.mp4
Azzalee_in_Shade.mp4
Azzalee_in_bunnies.mp4
Jitter Doll in various other styles (5)
Jitter_Doll_in_Starry_Night.mp4
Jitter_Doll_in_Monet.mp4
Jitter_Doll_in_Azzalee.mp4
Jitter_Doll_in_Colorful.mp4
Jitter_Doll_in_Shade.mp4
Shade in various other styles (7)
Shade_in_Starry_Night.mp4
Shade_in_Monet.mp4
Shade_in_Azzalee.mp4
Shade_in_Colorful.mp4
Shade_in_Jitter_Doll.mp4
Shade_in_bunnies.mp4
Shade_in_Abstract.mp4
Abstract in various other styles (6)
Abstract_in_Starry_Night.mp4
Abstract_in_Monet.mp4
Abstract_in_Colorful.mp4
Abstract_in_Jitter_Doll.mp4
Abstract_in_Shade.mp4
Abstract_in_bunnies.mp4
Gift in various other styles (5)
Gift_in_Starry_Night.mp4
Gift_in_Monet.mp4
Gift_in_Azzalee.mp4
Gift_in_Jitter_Doll.mp4
Gift_in_Shade.mp4
kanade in various other styles (8)
kanade_in_Starry_Night.mp4
kanade_in_Monet.mp4
kanade_in_Azzalee.mp4
kanade_in_Colorful.mp4
kanade_in_Jitter_Doll.mp4
kanade_in_Shade.mp4
kanade_in_Abstract.mp4
kanade_in_bunnies.mp4
bunnies in various other styles (5)
bunnies_in_Starry_Night.mp4
bunnies_in_Monet.mp4
bunnies_in_Azzalee.mp4
bunnies_in_Jitter_Doll.mp4
bunnies_in_Shade.mp4
cute in various other styles (5)
cute_in_Starry_Night.mp4
cute_in_Monet.mp4
cute_in_Colorful.mp4
cute_in_Jitter_Doll.mp4
cute_in_Gift.mp4
kek in various other styles (2)
kek_in_Jitter_Doll.mp4
kek_in_Shade.mp4
Tarantula reference :) (1)
TARANTULA_in_Starry_Night.mp4
Any sort of help, especially regarding the QoS (Quality of Service) of the project, is appreciated. Feel free to open an issue in the Issues tab and discuss the possible changes there. As of now, neural-art would be in great need of a clean and friendly argument handler (i.e. the one the argparse Python package provides) in order to accommodate a cleaner interface for ./neuralart.py and/or ./stylize.sh.
Thank you. Happy neural-art-ing!