The training iterations #1
@WeilunWang
Thanks for your excellent work!
I am trying to train SDM, but I don't know when to stop training.
Could you give me some guidance on the number of training iterations?

Comments
I have the same question; I cannot find the number of training steps in the code.
Same question. Does it stop by itself?
I have the same confusion.
Hi all. Did anyone figure this out? How many iterations does it need to train for? How long until it produces good results?
I have the same question. It would be very nice if you could provide the exact hyperparameters (including the number of training steps) for all datasets you tested your approach on.
+1
Hello folks, Ctrl+C is your first option for interrupting the training. Although this code breaks most design patterns, it works. Otherwise, you could add a condition to cap the training at a given iteration or epoch count, as in the sketch below.
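For anyone who wants an explicit bound instead of Ctrl+C, here is a minimal sketch of capping the loop at a fixed iteration count. All names here (`train_step`, `save_checkpoint`, `MAX_STEPS`) are hypothetical placeholders rather than this repo's actual API, and the 200k budget is an arbitrary example, not a value from the paper.

```python
# Hypothetical sketch: bound the training loop by iteration count instead of
# relying on Ctrl+C. train_step / save_checkpoint / MAX_STEPS are placeholder
# names, not this repo's API; adapt them to the actual training loop.

MAX_STEPS = 200_000      # example budget; the paper does not state one
SAVE_INTERVAL = 10_000   # checkpoint often so an interrupted run is still usable

def train(data_loader, train_step, save_checkpoint):
    step = 0
    while step < MAX_STEPS:          # replaces an unbounded `while True:`
        for batch in data_loader:
            train_step(batch)
            step += 1
            if step % SAVE_INTERVAL == 0:
                save_checkpoint(step)
            if step >= MAX_STEPS:    # stop mid-epoch once the budget is hit
                break
    save_checkpoint(step)            # final checkpoint at the budget
```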
I have the same question; I cannot find the number of training steps in either the code or the paper.
The training iteration count is set in the default config; you can change the default behavior.
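For context, a hedged sketch: if the defaults follow OpenAI's guided-diffusion, which this code appears to build on, the relevant entry is an `lr_anneal_steps`-style value in the script defaults, where 0 means run until interrupted. That key name is an assumption carried over from guided-diffusion, not verified against this repo.

```python
# Hedged sketch in the style of guided-diffusion's script defaults; the key
# name lr_anneal_steps is assumed from that codebase, not verified here.
defaults = dict(
    lr=1e-4,
    batch_size=4,
    lr_anneal_steps=0,     # 0 = unbounded: the loop only stops on Ctrl+C
    save_interval=10_000,
)

# Overriding the default gives the run a fixed iteration budget:
defaults["lr_anneal_steps"] = 200_000  # example value, not from the paper
```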
Training iterations, not diffusion steps. Do you know what "training iterations" means here?
It is quite frustrating not to know when to stop, since you need that to compare your own method fairly with theirs; I don't see any mention in the paper other than the number of diffusion steps. For LDMs the training budget is usually quite low by comparison, but since this model is pixel-based it needs more. Did anyone determine it experimentally?