About the re-trained results on HO3D dataset #5
Hello, have you used multiple GPUs for training? I can't get a result as good as yours.
option.txt:
train.txt:
The above are the training results after 60 epochs. I used the predicted JSON file from the 60th epoch to evaluate on CodaLab, with the following results:
It seems that my results are much worse than yours!
Your model does not seem to have converged, since the mano_joints3d_loss is still very high. Have you tried training on another machine? I tried on multiple machines, and 9.38 is the best result I got :(
May I ask what type of GPU you are using? Have you ever seen poor convergence like mine?
Hello! About the file ho3d.py in the 'dataset' folder:
Do you use the file 'ho3d_train_data.json' provided by the author? Its content is as follows: I don't think it contains enough information, so I used the preprocessed JSON file provided by the baseline "https://github.com/stevenlsw/Semi-Hand-Object" instead. Can you tell me how you handled this? Thank you.
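One quick way to decide between the two annotation files is to inspect which keys each record exposes before wiring it into ho3d.py. The snippet below is a minimal sketch; the key names (`img_path`, `joints_3d`) and the sample record are illustrative stand-ins, not the actual schema of either file.

```python
import json

def summarize_annotations(path):
    """Return the sorted key names of the first record in a JSON annotation file."""
    with open(path) as f:
        data = json.load(f)
    # Annotation files may be a list of records or a single dict.
    sample = data[0] if isinstance(data, list) else data
    return sorted(sample.keys())

# Stand-in record for demonstration (field names are hypothetical):
with open("sample_train_data.json", "w") as f:
    json.dump([{"img_path": "train/ABF10/rgb/0000.png",
                "joints_3d": [[0.0, 0.0, 0.0]] * 21}], f)

print(summarize_annotations("sample_train_data.json"))
# → ['img_path', 'joints_3d']
```

Running this on both 'ho3d_train_data.json' and the Semi-Hand-Object file shows at a glance which annotations (e.g. 3D joints, MANO parameters) each one actually carries.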
Hello, after making changes I trained twice (the first time with the original code's configuration, the second time with fine-tuned settings). Here is my second result (better than the first): options.txt
train.txt:
codalab.stout_epoch_70.txt:
It seems there is still a gap between my results and yours (9.38). How did you obtain that result? Have you ever gotten results similar to mine? I am looking forward to your reply~
Hello, I would like to ask how you successfully reproduced the code. I encountered an error during the replication process: the script was unable to find /data1/zhifeng/hoddv2/evaluation.txt. May I ask if you ran into this while running the code? Looking forward to your reply.
Hello, during the replication process CUDA has been continuously unavailable: torch.cuda.is_available() returns False. Have you encountered this issue? How did you solve it?
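The two usual causes of `torch.cuda.is_available()` returning False are a missing or broken NVIDIA driver, or a CPU-only torch wheel. A minimal diagnostic sketch (the helper name is my own, not part of the repo):

```python
import shutil
import subprocess

def driver_visible():
    """True if nvidia-smi (and hence the NVIDIA driver) is on PATH."""
    return shutil.which("nvidia-smi") is not None

if driver_visible():
    # Driver is present; show visible GPUs and driver/CUDA versions.
    print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)
else:
    print("nvidia-smi not found: install or repair the NVIDIA driver first.")

# If the driver is fine, check the torch build itself:
#   python -c "import torch; print(torch.version.cuda)"
# A CPU-only wheel prints None; in that case reinstall a +cuXXX build
# that matches your driver's supported CUDA version.
```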
Modify line 3 of the traineval.py file.
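Presumably that line hardcodes the author's machine-specific dataset path (the /data1/zhifeng/... path from the error above). A hedged sketch of the kind of change being suggested — pointing the script at a local dataset root instead; the `HO3D_ROOT` variable and directory layout here are assumptions, not the repo's actual code:

```python
import os

def resolve_eval_split(root):
    """Return the path to HO3D's evaluation.txt under a local dataset root."""
    return os.path.join(root, "evaluation.txt")

# Replace the hardcoded absolute path with one read from the environment
# (default directory name is hypothetical):
HO3D_ROOT = os.environ.get("HO3D_ROOT", "./data/ho3dv2")
eval_list = resolve_eval_split(HO3D_ROOT)
if not os.path.isfile(eval_list):
    print(f"Missing {eval_list}: set HO3D_ROOT to your HO3D-v2 directory.")
```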
Hello, @lzfff12. Thanks for your excellent work.
I tried to retrain the model on the HO3Dv2 dataset using your default settings three times. However, the best result I obtained is only 9.38 on the public leaderboard.
Therefore, I wonder if you have some other tricks applied during training to boost the performance?
The following are the parameters output in option.txt and the training losses, for reference. I am looking forward to your reply, and thanks in advance for your help!