
Question for training setting on the Waymo dataset #19

Open
lejk8104 opened this issue Nov 10, 2023 · 1 comment
lejk8104 commented Nov 10, 2023

Hi, I have some questions about the process for the Waymo dataset when reproducing the HSSDA results.

Regarding the EMA function:

  1. I've noticed several changes in the EMA function compared to 3DIoUMatch. Could you elaborate on the reasons for these modifications? Are they aimed at enhancing performance?
  2. I consider "change_global_step" an important hyperparameter. Was this parameter also set to 2000 for the Waymo Dataset?

Screenshot from 2023-11-10 13-49-41

Additional questions related to the training settings on the Waymo dataset:
3. The number of epochs for the Waymo dataset appears significantly lower than the 80 epochs used for the KITTI dataset. Is there a specific reason for this configuration?

Screenshot from 2023-11-10 16-27-18

  4. I am interested in any other training settings for the Waymo dataset that might not have been explicitly mentioned in your paper. For example, I presume INTERVAL refers to the cycle for regenerating the dual dynamic thresholds and the GT sampling database; is my understanding correct? Additionally, how did you determine the INTERVAL setting for the Waymo dataset?

Screenshot from 2023-11-10 17-03-30

@azhuantou (Owner) commented:

Hi, @lejk8104

  1. We follow the code in Unbiased Teacher with the aim of enhancing performance, but I find this setting has minimal impact on our HSSDA. For example, if you set ema_keep_rate directly to the fixed value of 0.999, you may find that it doesn't have much effect on the final performance.
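To make the discussion concrete, here is a minimal sketch of an EMA teacher update in the Unbiased Teacher style, where the keep rate ramps up before settling at a fixed ceiling. The function names and the linear ramp are illustrative assumptions, not the actual HSSDA code; `change_global_step` is borrowed from the discussion above as the step at which the keep rate reaches its final value.

```python
def ema_keep_rate(step, change_global_step=2000, final_rate=0.999):
    """Ramp the keep rate linearly from 0 up to final_rate, then hold it.

    Setting final_rate from step 0 (i.e., skipping the ramp) corresponds
    to the fixed-0.999 variant mentioned above.
    """
    if step >= change_global_step:
        return final_rate
    return final_rate * step / change_global_step

def ema_update(teacher_params, student_params, step):
    """Blend student weights into the teacher: t_new = r * t + (1 - r) * s."""
    r = ema_keep_rate(step)
    return [r * t + (1.0 - r) * s for t, s in zip(teacher_params, student_params)]
```

With a small keep rate early on, the teacher tracks the student closely during warm-up; after `change_global_step`, the teacher becomes a slowly moving average.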

  2. The "change_global_step" is the same for the Waymo dataset.

  3. This is because the Waymo dataset is much larger than KITTI and therefore takes longer to train. Meanwhile, I found that training for a smaller number of epochs still achieved good performance, so I did not try other values.

  4. Your understanding is correct. We did not spend time searching for the optimal INTERVAL on the Waymo dataset and set it directly to the same value as for KITTI.
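The role of INTERVAL described above can be sketched as a periodic step inside the training loop. Everything below is hypothetical: the regeneration functions are placeholders standing in for threshold mining and GT-database rebuilding, and the INTERVAL value is illustrative, not confirmed for Waymo.

```python
def regenerate_thresholds(epoch):
    # placeholder for mining the dual dynamic thresholds from teacher scores
    return {"epoch": epoch}

def regenerate_gt_database(epoch):
    # placeholder for rebuilding the GT sampling database from pseudo-labels
    return {"epoch": epoch}

INTERVAL = 5      # assumed value for illustration only
NUM_EPOCHS = 20
regen_epochs = []

for epoch in range(NUM_EPOCHS):
    if epoch % INTERVAL == 0:
        thresholds = regenerate_thresholds(epoch)
        gt_db = regenerate_gt_database(epoch)
        regen_epochs.append(epoch)
    # ... one epoch of semi-supervised training would run here ...

# regen_epochs → [0, 5, 10, 15]
```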

For the Waymo dataset, we did not use additional hyperparameters; the only difference from the KITTI setup is the smaller number of epochs. If you try different parameters on the Waymo dataset, I think you can get higher performance.
