Kohya started using more VRAM for SDXL, more than it should #1131
Comments
Currently it uses 15.7 GB minimum on Kaggle, so it works with the P100 GPU, but that means people can't use the much faster T4, and Kaggle gives dual T4s. Anyone with a 16 GB GPU can't use it properly either.
With these options, Text Encoder 2 is trained with the learning rate=1e-5, because …
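To avoid depending on that implicit default, the per-encoder learning rates can be passed explicitly. The following is a minimal sketch assuming the `--learning_rate_te1` / `--learning_rate_te2` options of `sdxl_train.py` in kohya-ss/sd-scripts; the exact flag names and values should be checked against the installed version:

```shell
# Hedged sketch: set every learning rate explicitly so the trainer
# never falls back to an implicit default for Text Encoder 2.
# --learning_rate           U-Net learning rate
# --learning_rate_te1/_te2  Text Encoder 1 / 2 learning rates (assumed flags)
accelerate launch sdxl_train.py \
  --train_text_encoder \
  --learning_rate 1e-5 \
  --learning_rate_te1 1e-5 \
  --learning_rate_te2 1e-5
```

Setting all three values explicitly also makes the generated config reproducible across versions, independent of any change to the default.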
Wow, this is a bug in that case, because this is what bmaltais' GUI generates; I will report it to him. So when we don't provide a TE2 learning rate, what does the trainer use? This is a big problem for me.
Yep, I verified this bug exists and it breaks my config :/ Thank you so much, Kohya.
As I mentioned in #1141, the multiple-GPU issue seems to have another cause.
I have a config which was running fine on Kaggle in previous versions.
Right now it fails on a 15 GB GPU.
This should not happen.
The same settings in OneTrainer use less than 13.5 GB of VRAM, while here it fails with 15 GB.
It wasn't failing before.
All images are 1024x1024, and all latents are cached.
Here is the full training prompt used.
I did trainings on Kaggle in the past and this exact prompt was working; I even have a video of it here:
https://youtu.be/16-b1AjvyBE