There is a bug in the evaluation code #15

Open
Negai-98 opened this issue Nov 28, 2023 · 1 comment
Comments

@Negai-98

In uncond_metrics.py at line 41, the pairwise distance matrix used by methods such as LION, PointFlow (PF), and PVD is centered on the test samples, whereas in your code it is centered on the generated samples. To stay comparable with those methods, you should change

M_rs_cd, M_rs_emd = _pairwise_EMD_CD_(sample_pcs, ref_pcs, batch_size, EMD_flag, verbose=True)

to

M_rs_cd, M_rs_emd = _pairwise_EMD_CD_(ref_pcs, sample_pcs, batch_size, EMD_flag, verbose=True)

This may explain the notable performance difference between the supplementary material of your paper and the numbers reported by other papers.
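To make the asymmetry concrete, here is a minimal NumPy sketch (not from the repo; it assumes the downstream MMD/COV computation follows the PointFlow convention, where matrix rows index generated samples and columns index references; `M` and `mmd_cov` are illustrative names). Transposing the matrix, which is what swapping the two arguments amounts to, changes both numbers:

```python
import numpy as np

# Toy pairwise-distance matrix: rows index generated samples, columns index
# reference (test) samples, i.e. M[i, j] = d(gen_i, ref_j).
rng = np.random.default_rng(0)
M = rng.random((4, 6))  # 4 generated clouds, 6 reference clouds

def mmd_cov(dist):
    """MMD/COV in the PointFlow convention: dist has shape (n_gen, n_ref)."""
    mmd = dist.min(axis=0).mean()             # per-reference distance to its nearest generated cloud
    matched = np.unique(dist.argmin(axis=1))  # references claimed by some generated cloud
    cov = len(matched) / dist.shape[1]
    return mmd, cov

print(mmd_cov(M))    # matrix oriented as (generated, reference)
print(mmd_cov(M.T))  # transposed orientation yields different MMD/COV
```

Running this prints two different (MMD, COV) pairs, so the argument order is not a cosmetic choice: it decides which set the metrics are centered on.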
Additionally, could you provide generated samples for the unconditional generation task or share the corrected evaluation results for quantitative comparison with your method?

@colorful-liyu
Owner

Thank you for your careful review. I just checked _pairwise_EMD_CD_() in the official PVD code at https://github.com/alexzhou907/PVD/blob/main/metrics/evaluation_metrics.py. I am actually confused: the variable names they use in the definition and in the invocation of _pairwise_EMD_CD_ are different. I will re-measure the metrics in a few days.
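One quick way to untangle the naming confusion: since Chamfer distance and EMD are symmetric for each pair of clouds, swapping the two point-cloud arguments should only transpose the returned matrices, so any metric difference has to come from which axis the downstream MMD/COV code reduces over. A hedged sanity-check sketch (the import path, toy shapes, and EMD_flag value are assumptions; the call mirrors the one quoted above, and a CUDA device is assumed since EMD kernels usually require it):

```python
import torch
from uncond_metrics import _pairwise_EMD_CD_  # import path is an assumption

sample_pcs = torch.rand(4, 2048, 3, device="cuda")  # toy generated clouds
ref_pcs = torch.rand(6, 2048, 3, device="cuda")     # toy reference clouds
batch_size, EMD_flag = 4, True                      # assumed values

M_sr_cd, M_sr_emd = _pairwise_EMD_CD_(sample_pcs, ref_pcs, batch_size, EMD_flag, verbose=False)
M_rs_cd, M_rs_emd = _pairwise_EMD_CD_(ref_pcs, sample_pcs, batch_size, EMD_flag, verbose=False)

# If both distances are symmetric per pair, swapping the arguments should
# only transpose the matrices; if these checks pass, the bug is purely in
# the orientation expected by the MMD/COV reduction.
print(torch.allclose(M_sr_cd, M_rs_cd.t(), atol=1e-4))
print(torch.allclose(M_sr_emd, M_rs_emd.t(), atol=1e-4))
```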
