
The training iterations #1

Open
yChenN1 opened this issue Nov 26, 2022 · 11 comments


@yChenN1 commented Nov 26, 2022

@WeilunWang
Thanks for your excellent work!
I'm trying to train SDM, but I don't know when to stop training.
Could you give me some guidance on the number of training iterations?

@xieenze commented Nov 30, 2022

I have the same question; I can't find the number of training steps in the code.

@DingLei14

Same question. Does it stop by itself?

@Noora555

I have the same confusion.

@miquel-espinosa

Hi all. Did anyone figure this out? How many iterations does it need to train for? How long until it produces good results?

@RabJon commented May 22, 2023

I have the same question. It would be very nice if you could provide the exact hyperparameters (including the number of training steps) for all datasets you tested your approach on.

@Hcshenziyang

+1

@HuangChiEn

Hello folks, Ctrl+C is your first option for interrupting the training; that's also why they built the training loop in an iteration-based manner.

Although this code breaks most design patterns, it works ~

Otherwise, you could add a condition to stop at a custom number of training iterations or epochs, as in the sketch below.
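
For illustration, here is a minimal sketch of such a stop condition, assuming a guided-diffusion-style iteration loop; the `max_steps` argument and the `run_step`/`save` helpers are hypothetical names, not this repo's actual code:

```python
class TrainLoop:
    """Minimal sketch of an iteration-style training loop with an optional cap.

    `max_steps` is a hypothetical argument: 0 (the default) keeps the original
    behaviour of running until interrupted with Ctrl+C.
    """

    def __init__(self, model, data, optimizer, max_steps=0, save_interval=10_000):
        self.model = model
        self.data = data              # an endless iterator over training batches
        self.optimizer = optimizer
        self.max_steps = max_steps
        self.save_interval = save_interval
        self.step = 0

    def run_loop(self):
        # Original style: effectively `while True`, stopped only by Ctrl+C.
        # Added condition: also stop once `max_steps` iterations are reached.
        while self.max_steps == 0 or self.step < self.max_steps:
            batch, cond = next(self.data)
            self.run_step(batch, cond)
            self.step += 1
            if self.step % self.save_interval == 0:
                self.save()

    def run_step(self, batch, cond):
        ...  # forward/backward pass and optimizer update (omitted in this sketch)

    def save(self):
        ...  # write a checkpoint (omitted in this sketch)
```

With `max_steps=0` the loop behaves exactly like the original (runs until Ctrl+C); any positive value makes it terminate on its own.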

@ucasligang commented Aug 12, 2023

I have the same question; I can't find the number of training steps in the code or the paper.

@HuangChiEn

> I have the same question; I can't find the number of training steps in the code or the paper.

[screenshot of the default training config]

The training iteration count is recorded in the default config; you can change the default behavior ~
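
As a rough illustration of what "change the default behavior" could look like, here is a hypothetical guided-diffusion-style argument parser in which the iteration cap lives among the defaults; the flag names are assumptions, not necessarily this repo's actual flags:

```python
import argparse


def create_argparser():
    # Hypothetical defaults in the style of guided-diffusion training scripts;
    # the flag names are illustrative only.
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=1e-4)
    parser.add_argument("--batch_size", type=int, default=4)
    # Training-iteration cap: 0 means "run until interrupted with Ctrl+C";
    # a positive value (e.g. 200000) makes training stop on its own.
    parser.add_argument("--lr_anneal_steps", type=int, default=0)
    return parser


if __name__ == "__main__":
    args = create_argparser().parse_args()
    if args.lr_anneal_steps > 0:
        print(f"Training will stop after {args.lr_anneal_steps} iterations.")
    else:
        print("No cap set; training runs until interrupted (Ctrl+C).")
```

With a config like this, overriding the default on the command line (i.e. passing a positive iteration cap) is what makes the loop stop by itself instead of running until Ctrl+C.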

@ucasligang

> > I have the same question; I can't find the number of training steps in the code or the paper.
>
> [screenshot of the default training config]
>
> The training iteration count is recorded in the default config; you can change the default behavior ~

I mean training iterations, not diffusion steps. Do you know the meaning of training iterations?

@bhosalems commented Oct 6, 2024

It is quite frustrating not to know when to stop, so that you can compare your method fairly with theirs; I don't see any mention in the paper other than the diffusion steps. In the case of LDMs it is usually very low compared to this, but since this model is pixel-based it needs more. Did anyone find the number experimentally?
