How to understand TVLoss? #302
Comments
@jcjohnson Could you please give me some help?
The total variation (TV) loss encourages spatial smoothness in the generated image. It was not used by Gatys et al. in their CVPR paper, but it can sometimes improve the results; for more details and explanation see Mahendran and Vedaldi, "Understanding Deep Image Representations by Inverting Them", CVPR 2015.
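For reference, the TV regularizer described by Mahendran and Vedaldi penalizes differences between neighboring pixels:

$$
\mathcal{R}_{V^{\beta}}(x) \;=\; \sum_{i,j} \Big( \big(x_{i,j+1} - x_{i,j}\big)^2 + \big(x_{i+1,j} - x_{i,j}\big)^2 \Big)^{\beta/2}
$$

with the sum taken over pixel positions (and, for an RGB image, over the channels as well). Minimizing this term pulls neighboring pixels toward similar values, which is why it acts as a smoothing/denoising penalty on top of the content and style losses. As far as I can tell, the TVLoss module in this repo corresponds to the β = 2 case (plain squared differences), but check neural_style.lua to confirm.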
Thank you, @jcjohnson! Your answer is very helpful to me.
I ran some tests where I produced the same style transfer using various tv values. I noted the total style loss and noticed that the smaller the style loss, the better the resulting image looked. Here are the results I got:
So based on my testing, the best tv value was 0.000085. Results may vary based on the src and dst images, but the nice thing is that you only need something like 250 iterations to pick the winner, so try 0.000085, 0.0001, or 0.0002 and see which is best for you.
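If it helps anyone reproduce this kind of sweep: assuming the tv value above is what the `-tv_weight` option of neural_style.lua controls, a short comparison run might look like the sketch below. Treat the exact invocation as illustrative and check `th neural_style.lua -help` for the flags your version actually accepts.

```sh
# Illustrative sweep over tv weights; a short run per value is enough to compare.
for w in 0.000085 0.0001 0.0002; do
  th neural_style.lua -content_image content.jpg -style_image style.jpg \
     -tv_weight $w -num_iterations 250 -output_image out_tv_$w.png
done
```

Then pick the tv weight whose 250-iteration output looks best and rerun with the full number of iterations.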
This is the first time I have used Torch and Lua. I read the CVPR paper "Image Style Transfer Using Convolutional Neural Networks" and the code neural_style.lua in this repository, but I cannot understand the TVLoss module in the code. What is it used for? I cannot find any description or discussion of TVLoss in the CVPR paper; only the content loss and style loss are proposed there. Could anyone give me some help?
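Since the question is specifically about the TVLoss module in neural_style.lua, here is a rough sketch of how a TV regularizer is commonly written as a Torch nn.Module. This is an illustrative reconstruction based on the explanation above, not a copy of the repository's code, so the name (`nn.TVSketch`) and the details are hypothetical. The idea is that the layer is an identity on the forward pass and simply injects the scaled TV gradient on the backward pass, so it adds a smoothness penalty without changing any activations.

```lua
require 'torch'
require 'nn'

-- Illustrative TV regularizer layer (hypothetical name; not necessarily
-- identical to the TVLoss module in this repository).
local TVSketch, parent = torch.class('nn.TVSketch', 'nn.Module')

function TVSketch:__init(strength)
  parent.__init(self)
  self.strength = strength  -- plays the role of the tv weight discussed above
end

-- Forward pass: identity, so the layer can sit in front of the network
-- without altering what the later layers see.
function TVSketch:updateOutput(input)
  self.output = input
  return self.output
end

-- Backward pass: add the gradient of the beta = 2 TV term
--   sum_{i,j} (x(i,j) - x(i,j+1))^2 + (x(i,j) - x(i+1,j))^2
-- (up to a constant factor absorbed into `strength`). Assumes a 3D CxHxW image.
function TVSketch:updateGradInput(input, gradOutput)
  local H, W = input:size(2), input:size(3)
  self.gradInput:resizeAs(input):zero()
  local x_diff = input[{{}, {1, H-1}, {1, W-1}}] - input[{{}, {1, H-1}, {2, W}}]
  local y_diff = input[{{}, {1, H-1}, {1, W-1}}] - input[{{}, {2, H}, {1, W-1}}]
  self.gradInput[{{}, {1, H-1}, {1, W-1}}]:add(x_diff):add(y_diff)
  self.gradInput[{{}, {1, H-1}, {2, W}}]:add(-1, x_diff)
  self.gradInput[{{}, {2, H}, {1, W-1}}]:add(-1, y_diff)
  self.gradInput:mul(self.strength)
  self.gradInput:add(gradOutput)
  return self.gradInput
end
```

If I read the repository correctly, its TVLoss layer is inserted at the front of the network when the tv weight is positive, which matches this pattern, but please verify against the actual code.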