
Why is my LPIPS distance larger than what your paper says? #83

Open
Imposingapple opened this issue May 1, 2020 · 3 comments
@Imposingapple

Thank you for the amazing work!
I've run the LPIPS distance experiment and found that my LPIPS distance is larger than the one in your paper. To be specific:

  1. For random pairs of ground-truth real images in domain B, my average score is approximately 0.609 (in your paper it's the upper bound, 0.262).
  2. For 100 test images, with 19 pairs of generated outputs for each single input image A (I've read the earlier issues and ran the experiments as you described there), my average result is 0.261 (in your paper it's 0.110).

In my experiment, I just use the pretrained weights of your maps (map photo -> aerial photo) model, where the input size is [1, 3, 256, 256]. I use the command
'python compute_dists_dirs.py -d0 imgs/ex_dir0 -d1 imgs/ex_dir1 -o imgs/example_dists.txt --use_gpu'
to match the corresponding images in the two directories, following the instructions in the repository 'https://github.com/richzhang/PerceptualSimilarity'. The model options I use are 'net-lin', 'alex'. A sketch of what I believe this computes is below.
I find my results are approximately in the same proportion as yours (0.609/0.262 is close to 0.261/0.110), but I don't know why mine are larger than yours. I'm confused about this and look forward to your reply. Thank you very much!
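For reference, here is a minimal sketch of the per-pair computation I assume compute_dists_dirs.py performs, written against the current 'lpips' PyPI package (the directory names are the example placeholders from the command above; CPU only for simplicity, whereas --use_gpu would move everything to CUDA):

import os
import lpips
import torch

# Hypothetical placeholder directories of paired, identically named images.
dir0, dir1 = 'imgs/ex_dir0', 'imgs/ex_dir1'

# 'net-lin','alex' in the old API corresponds to net='alex' here.
loss_fn = lpips.LPIPS(net='alex')

dists = []
for fname in sorted(os.listdir(dir0)):
    # load_image returns an HxWx3 uint8 array; im2tensor scales it to [-1, 1].
    img0 = lpips.im2tensor(lpips.load_image(os.path.join(dir0, fname)))
    img1 = lpips.im2tensor(lpips.load_image(os.path.join(dir1, fname)))
    with torch.no_grad():
        dists.append(loss_fn(img0, img1).item())

print('average LPIPS: %.4f' % (sum(dists) / len(dists)))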

@richzhang
Collaborator

Thanks for the question. The discrepancy is caused by the version of LPIPS. BicycleGAN used v0.0, which turned out to have a preprocessing bug. It is kept behind a flag for historical purposes, but the recommended version of LPIPS is v0.1 (which runs by default and is what gave you the 0.609/0.262 numbers).

I updated the LPIPS repository so you can feed in the version number, but again, it's best to use v0.1:

python compute_dists_pair.py -d DIRECTORY -o pair_dists.txt -v 0.0
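If you're calling it from Python instead of the script, I believe the same version switch is exposed on the constructor; a minimal sketch (v0.1 is the default):

import lpips

# v0.1 is the default and the recommended version.
loss_v01 = lpips.LPIPS(net='alex', version='0.1')
# v0.0 reproduces the old BicycleGAN numbers, preprocessing bug included.
loss_v00 = lpips.LPIPS(net='alex', version='0.0')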

Thanks!

@Imposingapple
Author

Oh, I see. Thank you very much! My results are much closer to yours now (0.1236, 0.293), but there is still some offset (mine are about 10% larger). Did you use any other tricks when calculating the LPIPS score? Are you using the pretrained BicycleGAN net to obtain your results? In my experiment, the results I mentioned use the pretrained net and 512×512 image input.
Thank you again for answering my questions!

@richzhang
Collaborator

Great. I was testing on the ground-truth images when responding to this. Thanks!
