When training on the lmo dataset, all results are 0 and the translation and rotation errors are large #27

Open
Fusica opened this issue Oct 23, 2024 · 2 comments


Fusica commented Oct 23, 2024

I encountered several issues when trying to train on the lmo dataset and reproduce the author's results:

  1. First, I found that the dataset split performed in lmo2poet.py is quite different from the lmo data I downloaded from BOP: the BOP download does not contain the train_pbr and train_synt subfolders. I would like to ask which version of lmo the author used.

  2. Then, when I loaded lmo_maskrcnn_checkpoint.pth.tar during training, the ADD scores were all 0. I would like to know whether I made a mistake in the dataset generation step or whether my training hyperparameters are wrong. I have followed the hyperparameter settings given in the author's link, but the results have not improved at all. (A sketch of how I understand the ADD metric follows at the end of this comment.)

I sincerely hope the author can point out where my problem lies, as I am very eager to follow your work.
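
For reference, this is my understanding of how ADD is computed (a minimal sketch assuming the standard Hinterstoisser-style definition; the function names are illustrative and this is not necessarily the repo's evaluation code):

```python
import numpy as np

def add_error(pts, R_gt, t_gt, R_pred, t_pred):
    """Mean distance between model points under the ground-truth
    pose and the same points under the predicted pose.

    pts: (N, 3) model points; R_*: (3, 3) rotations; t_*: (3,) translations.
    """
    gt = pts @ R_gt.T + t_gt
    pred = pts @ R_pred.T + t_pred
    return np.linalg.norm(gt - pred, axis=1).mean()

def add_accuracy(errors, diameter, thresh=0.1):
    """Fraction of predictions whose ADD error is below 10% of the
    object diameter (the usual LM/LM-O acceptance threshold)."""
    errors = np.asarray(errors)
    return float((errors < thresh * diameter).mean())
```

An ADD accuracy of exactly 0 would mean that no prediction ever fell below the 10%-of-diameter threshold, which is why I suspect either my dataset conversion or my hyperparameters.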

Fusica (Author) commented Oct 23, 2024

I tried testing the provided checkpoint with `python main.py --epochs 50 --batch_size 24 --output_dir outputs/202410231444 --resume weights/poet_lmo_maskrcnn.pth --eval` and got the results below. Does this match your results, and does it confirm that there is no problem with my dataset?

[screenshots of the evaluation results]

The rotation error seems large, at around 40°.
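
For completeness, this is how I am measuring the angle error (a minimal sketch, assuming the error is the geodesic distance between the predicted and ground-truth rotation matrices; PoET's own evaluation code may compute it differently):

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Angle in degrees of the relative rotation R_pred^T @ R_gt."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    cos = np.clip(cos, -1.0, 1.0)  # guard against numerical round-off
    return np.degrees(np.arccos(cos))
```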

tgjantos (Member) commented

Hi @Fusica,

I am really sorry that you faced these issues with the LM-O dataset. Regarding the download links: I used the following, though I realize that BOP has since switched to Hugging Face for hosting their data:

- train_pbr: https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main/lm/lm_train_pbr.zip
- train_synt (from lmo): https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main/lmo/lmo_train.zip
- train_synt (from lm): https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main/lm/lm_train.zip

I am not sure whether the data has changed over time.
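
For reference, a minimal sketch of how the three archives could be fetched and unpacked (the target folder names below are an assumption on my side; adjust them to whatever layout lmo2poet.py expects):

```python
import urllib.request
import zipfile
from pathlib import Path

# Archive labels mirror the list above; the lmo/ layout is assumed.
ARCHIVES = {
    "lm_train_pbr": "https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main/lm/lm_train_pbr.zip",
    "lmo_train": "https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main/lmo/lmo_train.zip",
    "lm_train": "https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main/lm/lm_train.zip",
}

root = Path("lmo")
root.mkdir(exist_ok=True)
for name, url in ARCHIVES.items():
    archive = root / f"{name}.zip"
    if not archive.exists():
        urllib.request.urlretrieve(url, str(archive))  # download once
    with zipfile.ZipFile(archive) as zf:
        # Extracted paths may need renaming to train_pbr / train_synt.
        zf.extractall(root)
```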

Regarding the performance: I am aware of an issue where the LM-O results are not reproducible. We are working on a new network, but it is not performing better yet, and we still need some time before we can release it. I hope this does not hinder your work! I am really sorry.

Best,
Thomas
