evaluation metric questions #2

Open
JoonHo-Jang opened this issue Oct 15, 2021 · 1 comment

@JoonHo-Jang

I want to ask whether the calculation of 'known_acc' on line 169 of eval.py is correct.

In my understanding, 'per_class_acc' in the 'test' function has length (n_share + 1). And since 'open_class' is defined as "open_class = int(out_t.size(1))" on line 108, it equals "num_class = n_share + n_source_private".

However, the known accuracy is calculated as "known_acc = per_class_acc[:open_class - 1].mean()" on line 169.

Thus, when n_source_private > 1, I think the slice covers all entries of per_class_acc, including the last ('unknown') one, so the unknown class is wrongly averaged into the known accuracy.

In my thinking, line 169 should be "known_acc = per_class_acc[:-1].mean()".
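
A minimal sketch of my concern (the values below are hypothetical toy numbers; only the shapes matter):

```python
import numpy as np

# Hypothetical setup: 10 shared classes plus one 'unknown' bucket,
# and 5 source-private classes, standing in for the real config.
n_share, n_source_private = 10, 5
per_class_acc = np.random.rand(n_share + 1)  # last entry = 'unknown' class
open_class = n_share + n_source_private      # int(out_t.size(1)) on line 108

# Line 169 as written: the slice end (14) exceeds len(per_class_acc) (11),
# so NumPy clips it and the 'unknown' entry leaks into the known-class mean.
known_acc_current = per_class_acc[:open_class - 1].mean()

# Suggested fix: drop only the trailing 'unknown' entry.
known_acc_fixed = per_class_acc[:-1].mean()
```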

Could you please clarify?
Thank you.

@ksaito-ut
Collaborator

Sorry for the confusion, and thanks for pointing out the issue.
I have changed the line.
See #1 for the discussion.

The changed line, "known_acc = per_class_acc[:len(class_list)-1].mean()", should be fine, but I think your version is better.
I will keep the current version for now.
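
For reference, assuming class_list holds the shared classes plus the 'unknown' label (so that len(class_list) == len(per_class_acc)), the committed line and the suggested one select the same entries; a quick check:

```python
import numpy as np

# Hypothetical stand-ins: 10 shared classes plus 'unknown'.
per_class_acc = np.random.rand(11)
class_list = [str(c) for c in range(11)]  # assumed to include 'unknown' last

committed = per_class_acc[:len(class_list) - 1].mean()
suggested = per_class_acc[:-1].mean()
assert np.isclose(committed, suggested)   # both exclude only 'unknown'
```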
