You said that you use the command "python pretraining.py", but I am not sure whether you are reproducing zero-shot classification or the image-depth pre-training.
If you mean zero-shot classification, you can use the following command:
python zeroshot.py --ckpt [pre-trained_ckpt_path]
Otherwise, the results you report (36.71% and 32.70%) may come from the validation set of our pre-training, which is slightly different from our zero-shot setting. The validation accuracy can reach 42.83% during our pre-training. We also find that the batch size can significantly affect the pre-training.
I hope this helps.
We found a bug in the pre-training code and have already fixed it in the latest commit: we wrongly rotated the CAD models in ShapeNet when rendering the depth maps. Rotation is needed only in downstream tasks.
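To illustrate the fix, here is a minimal sketch of the intended behavior: the ShapeNet CAD models stay in their canonical pose when depth maps are rendered for pre-training, and rotation augmentation is applied only in downstream tasks. The function name `rotate_y` and the y-axis choice are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def rotate_y(points, angle_deg):
    """Rotate an (N, 3) point cloud around the y-axis by angle_deg degrees.

    Illustrative helper: in the fixed pipeline, a rotation like this is
    applied only as downstream augmentation, never before rendering the
    pre-training depth maps.
    """
    theta = np.deg2rad(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return points @ rot.T

cad_points = np.array([[1.0, 0.0, 0.0]])

# Pre-training: render depth maps from the canonical (unrotated) model.
pretrain_points = cad_points

# Downstream tasks: rotation augmentation is applied here only.
downstream_points = rotate_y(cad_points, 90.0)
```

The buggy behavior was the reverse: rotating `cad_points` before rendering, which shifted the depth-map distribution seen during pre-training.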
Sorry for the confusion. Please let us know if we can assist with anything else.
How can I reproduce the zero-shot results on ScanObjectNN and achieve an accuracy of 35.46%, when I can only reach 13%? What changes do I need to make to the code?
First of all, thanks for sharing your outstanding work!
I tried to train for zero-shot classification (python pretraining.py) on ModelNet40Align and ModelNet40Ply,
but the results only reach 36.71% and 32.70%,
while the paper reports 49.38%.
Could you retrain and confirm the 49.38% result? What should I pay attention to during training?
Thanks!