Bug report on running evaluation #10

Closed · zjumsj opened this issue Nov 28, 2024 · 3 comments

zjumsj commented Nov 28, 2024

Thanks to the author for the great work! I found a suspected bug around line 113 in flare/dataset/dataset_real.py and hope this information is helpful to others.

for i in range(len(json_dict)):    
    flame_expression = torch.tensor(json_dict["frames"][i]["expression"], dtype=torch.float32)
    all_expression.append(flame_expression[None, ...])

Negative impact: len(json_dict) counts the top-level keys of the dict and always returns 1, so the loop runs only once and the intended average expression ends up being just the expression parameters of the first frame. This bug does not affect training, but it produces incorrect evaluation results when running python test.py ....

Bugfix: range(len(json_dict)) -> range(len(json_dict["frames"]))

See also my attached patch
dataset_real.patch
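
For reference, a minimal sketch of the corrected loop, assuming the intended mean expression is the per-frame average of the FLAME expression codes (the json path and the averaging step are my assumptions, not the repository's exact code):

import json
import torch

# parse the training json (path is illustrative)
with open("path/to/train.json", "r") as f:
    json_dict = json.load(f)

all_expression = []
# iterate over the frame entries, not over the top-level keys of the dict
for i in range(len(json_dict["frames"])):
    flame_expression = torch.tensor(json_dict["frames"][i]["expression"], dtype=torch.float32)
    all_expression.append(flame_expression[None, ...])

# assumed: the mean expression is the per-dimension average over all frames
mean_expression = torch.cat(all_expression, dim=0).mean(dim=0, keepdim=True)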

sbharadwajj (Owner) commented Nov 28, 2024

Hi,

Thanks for your comment. Let me verify this. It's possible I introduced a bug while cleaning the code.

Ideally, for testing, we would still want to load the mean expression of the training dataset, because we won't have access to the mean expression of the test set. Since our input here is the training directory, it should directly load the train json file and behave the same way as during training.

EDIT: I quickly checked, and you are correct that it is loading the expression of the first frame as the mean expression. Instead, it should load the same mean expression that we used for training, so that the mapping stays consistent. Thanks for this catch :) I will edit it. With the fix, the test evaluations should be better than before.
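
To illustrate the consistency point, a tiny, purely hypothetical sketch: it only assumes that whatever the mapping consumes is derived from an expression code together with a single reference mean, and that this mean must come from the training set in both phases (tensor shapes and the subtraction are illustrative, not the repository's actual code):

import torch

# mean expression computed once from the TRAINING json (shape is illustrative)
mean_expression = torch.zeros(1, 50)

expr_train = torch.randn(1, 50)   # expression code of a training frame
expr_test = torch.randn(1, 50)    # expression code of a test frame

# train and test both use the SAME training mean, so the mapping
# sees consistently normalized inputs in both phases
train_input = expr_train - mean_expression
test_input = expr_test - mean_expression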

sbharadwajj (Owner) commented Dec 4, 2024

Hi,
I finished verifying the numbers and will update the code.

Note for all users: if your evaluation ran as part of train.py, your results/numbers are not affected. This change only affects those who ran test.py after training completed to obtain numbers.

After this fix, running test.py should give the same numbers as train.py.

@zjumsj thanks a lot for the catch!

zjumsj (Author) commented Dec 4, 2024

Thank you for your reply.
