Bug report on running evaluation #10
Comments
Hi, thanks for your comment. Let me verify this; it's possible I introduced a bug while cleaning the code. Ideally, for testing, we would still want to load the mean expression of the training dataset, because we won't have access to the mean expression of the test set. Since our input here is the training directory, it should directly load the train JSON file and behave the same way as during training. EDIT: I quickly checked, and you are correct that it's loading the expression of the first frame as the mean frame; instead, it should load the same mean expression that we used for training, so the mapping stays consistent. Thanks for this catch :) I will edit it. The test evaluations should now be better than before.
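For readers following along, a minimal sketch of the intended behaviour described above; the helper name and the idea of passing in the mean-computation routine are illustrative assumptions, not the repository's actual API (see the `mean_expression` sketch after the bug report below):

```python
def get_mean_expression_for_split(train_json_path, compute_mean_fn):
    """Return the expression mean to use for any split (train or test).

    The mean is always taken over the *training* JSON so that train.py and
    test.py map expressions consistently; the test set's own statistics are
    never used. `compute_mean_fn` is a placeholder for whatever routine
    averages the per-frame expression parameters.
    """
    return compute_mean_fn(train_json_path)
```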
Hi, note for all users: if your evaluation was completed while running the previous version of test.py, it was affected by this issue. After this fix, running test.py should give the same numbers as train.py. @zjumsj, thanks a lot for the catch!
Thank you for your reply.
Thanks to the author for the great work! I found a suspected bug around line 113 in `flare/dataset/dataset_real.py` and hope this information is helpful to others.

Negative impact: `len(json_dict)` always returns 1, causing the expected average expression to instead return the expression parameters of the first frame. This bug does not affect training, but generates incorrect evaluation results when running `python test.py ...`.

Bugfix: `range(len(json_dict))` -> `range(len(json_dict["frames"]))`

See also my attached patch: dataset_real.patch
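To make the fix concrete, here is a minimal, self-contained sketch of the mean-expression computation; the JSON layout (a top-level dict with a "frames" list whose entries hold an "expression" vector) and the key names are assumptions for illustration, not a verbatim excerpt of `dataset_real.py`:

```python
import json
import numpy as np

def mean_expression(json_path):
    """Average the per-frame expression parameters listed in the dataset JSON.

    Assumes the JSON is a dict with a "frames" list and that each frame stores
    its expression coefficients under an "expression" key (illustrative names).
    """
    with open(json_path) as f:
        json_dict = json.load(f)

    # Buggy version: json_dict is a dict with only a few top-level keys, so
    # len(json_dict) is ~1 and only the first frame's expression is averaged.
    # expressions = [json_dict["frames"][i]["expression"] for i in range(len(json_dict))]

    # Fixed version: iterate over the actual list of frames.
    expressions = [json_dict["frames"][i]["expression"]
                   for i in range(len(json_dict["frames"]))]

    return np.mean(np.asarray(expressions, dtype=np.float32), axis=0)
```

With the patched range, the average is taken over all frames in the training JSON, matching the statistics used during training.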