Thank you for the amazing job!
I've run the LPIPS distance experiment and found that my LPIPS distances are larger than the ones in your paper. Specifically:
for random pairs of ground-truth real images in domain B, my average score is approximately 0.609 (your paper reports 0.262 as the upper bound);
for 100 test images, with 19 pairs of generated outputs for each single input image A (I read the earlier issues and followed the procedure you described there), my average result is 0.261 (your paper reports 0.110).
In my experiment, I just use the pretrained weights of your maps (map photo -> aerial photo) model, where the input size is [1, 3, 256, 256]. I use the command
'python compute_dists_dirs.py -d0 imgs/ex_dir0 -d1 imgs/ex_dir1 -o imgs/example_dists.txt --use_gpu'
to match the corresponding images in the two directories, following the instructions in the repository 'https://github.com/richzhang/PerceptualSimilarity'. The model options I use are 'net-lin' and 'alex'.
I find that my results are approximately in the same ratio as yours (0.609/0.262 ≈ 0.261/0.110), but I don't know why mine are larger. I'm confused about this and looking forward to your reply, thank you very much!
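As an aside, the averaging protocol described above (19 consecutive pairs drawn from 20 generated outputs per input) can be sketched as below. This is a minimal illustration, not code from either repository: `average_pairwise_distance` and its `dist` argument are hypothetical names, and `dist` stands in for a call to the LPIPS model.

```python
def average_pairwise_distance(samples, dist):
    """Average a distance function over consecutive pairs of samples.

    With 20 generated outputs per input image this yields 19 pairs,
    matching the evaluation protocol described above. `dist` is any
    callable returning a float, e.g. a wrapper around an LPIPS model.
    """
    pairs = list(zip(samples, samples[1:]))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)


# Toy usage with absolute difference as a stand-in distance:
score = average_pairwise_distance([1, 2, 4], lambda a, b: abs(a - b))
print(score)  # pairs (1,2) and (2,4) -> distances 1 and 2 -> mean 1.5
```

The final reported number would then be this per-input score averaged again over the 100 test images.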
Thanks for the question. The discrepancy is caused by the version of LPIPS. BicycleGAN used v0.0, which turned out to have a preprocessing bug. That version is kept behind a flag for historical purposes, but the recommended version of LPIPS is v0.1 (which runs by default and is what gave you the 0.609/0.262 numbers).
I updated the LPIPS repository so you can pass in the version number, but again, it's best to use v0.1.
Oh, I see. Thank you very much! My results are much closer to yours now (0.1236, 0.293), but there is still some offset (about 10% larger). Did you use any other tricks when calculating the LPIPS score? Did you use the pretrained BicycleGAN network to obtain your results? In my experiment, the results mentioned above were obtained with the pretrained network and 512×512 image inputs.
Thank you again for answering my questions!