How much time did it take to train on FFHQ? #82
Comments
It takes 5 days to train FFHQ-1024x1024 (500k iterations) on a 3090.
@caopulan Thanks for sharing! Do you mean 50,000 iterations?
In my experiment, the learning rate could safely be raised to 0.001.
I'm sorry, I meant 500,000 iterations; I have corrected it.
I also found that distributed training is not very effective: training on 8 cards gives only a 1.5-2x speedup.
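For anyone budgeting a run from these numbers, here is a quick back-of-envelope helper (plain Python, no assumptions about the e4e codebase itself): 500k iterations in 5 days works out to roughly 1.16 it/s on the 3090, and a 1.5-2x speedup on 8 cards corresponds to only about 19-25% scaling efficiency.

```python
# Back-of-envelope helper for the numbers in this thread: throughput and
# multi-GPU scaling efficiency. The sample values are taken from the
# comments above.

def iters_per_sec(total_iters: int, days: float) -> float:
    """Average training throughput in iterations per second."""
    return total_iters / (days * 86_400)

def scaling_efficiency(speedup: float, num_gpus: int) -> float:
    """Fraction of ideal linear scaling achieved."""
    return speedup / num_gpus

# 500k iterations in 5 days on a single 3090 (from the comment above):
single_gpu = iters_per_sec(500_000, 5)          # ~1.16 it/s

# 8 cards reportedly give only a 1.5-2x speedup:
for speedup in (1.5, 2.0):
    eff = scaling_efficiency(speedup, 8)
    print(f"{speedup}x on 8 GPUs -> {eff:.0%} scaling efficiency, "
          f"~{single_gpu * speedup:.2f} it/s")
```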
Thanks for this great work!
I'm trying to train e4e from scratch on the face domain (mostly the same as FFHQ, but at 512x512 resolution). It has now been trained for 100k steps, and the reconstruction results look fine so far.
The problem is that training proceeds very slowly: it is estimated to take more than a week to reach 300k steps on a single Tesla T4 GPU. I keep the validation set size at 1000, so the time spent on evaluation is trivial.
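In case it helps anyone sanity-check a similar run, here is a minimal sketch for extrapolating the remaining time from the average step time in your own training log (the 3.5 s/step below is only a placeholder, not a measured T4 number):

```python
# Hedged sketch: estimate remaining wall-clock time from the average
# step time observed so far. `seconds_per_step` must be measured from
# your own logs; the value below is a placeholder.

def eta_days(current_step: int, target_step: int, seconds_per_step: float) -> float:
    """Days of training left at the observed per-step speed."""
    remaining = target_step - current_step
    return remaining * seconds_per_step / 86_400

# Placeholder numbers matching the post: 100k of 300k steps done.
# Replace 3.5 s/step with the timing from your own training log.
print(f"{eta_days(100_000, 300_000, 3.5):.1f} days remaining")
```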
My questions are as follows:
I know I should experiment with these myself, but since each trial takes a long time, any suggestions would help.
I appreciate your kind reply.