CUDA out of memory, despite having enough memory to run #296
Comments
I got the same issue, can anyone fix it?
What settings are you running at?
I'm running the example command provided in the readme.

> On Sat, Sep 17, 2022, 11:53 PM tonsOfStu wrote:
> What settings are you running at? I also have a 12GB card and it cannot fit a batch size of 4 or anything above 640x640.
I found it useful to disable hardware acceleration in web browsers, and to keep as many other programs as possible closed. You can use nvitop to monitor which processes are consuming your GPU's memory.
Try lowering the resolution to see if it works.
I run on a 1080 Ti and can create 1024x1024 images (actually even higher, but it takes a while to generate 1344x1344, for example).
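As a rough rule of thumb (an illustrative calculation, not a measurement of stable-diffusion itself), activation memory grows with the pixel count, so the relative cost of a higher resolution is roughly the ratio of pixel counts:

```python
def pixel_ratio(w1: int, h1: int, w2: int, h2: int) -> float:
    """Ratio of pixel counts between two resolutions, as a rough
    proxy for how much more activation memory the larger one needs."""
    return (w1 * h1) / (w2 * h2)

# 1344x1344 has about 1.72x the pixels of 1024x1024
print(pixel_ratio(1344, 1344, 1024, 1024))
```

This is why a card that comfortably handles 1024x1024 can still take noticeably longer (or run out of memory) at 1344x1344.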
I got the same issue. I checked the GPU (823MiB / 12288MiB used) before trying to figure out the CUDA out-of-memory problem.
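For a quick check of memory usage like the numbers above, `nvidia-smi` can report per-GPU memory in CSV form. Below is a hedged sketch that parses that output; the sample line is hardcoded for illustration (matching the figures quoted above) rather than captured from a live GPU:

```python
# Illustrative: parse one line of output from
#   nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits
# The sample line below is hardcoded for demonstration purposes.
sample = "823, 12288"

def parse_gpu_memory(line: str) -> tuple[int, int]:
    """Return (used_mib, total_mib) from one CSV line of nvidia-smi output."""
    used, total = (int(field.strip()) for field in line.split(","))
    return used, total

used, total = parse_gpu_memory(sample)
print(f"{used} MiB / {total} MiB ({used / total:.1%} used)")
```

If most of the card is already in use before you even launch the script, something else (a browser, another notebook) is holding VRAM.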
This is what AUTOMATIC1111's version does by default. I couldn't see any difference in the images between half and single floats using the same seed (except that half precision used less VRAM). Another note: the default batch size (the option is called --n_samples) is 3, which in practice is just over the limit on a 12GB card, because it tries to generate 3 images at once. If you want to get it working without using half precision, you can reduce it to 2 or fewer.
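The VRAM saving from half precision is easy to see in isolation. A minimal sketch, using NumPy dtypes to stand in for model tensors (not the actual stable-diffusion code):

```python
import numpy as np

# Same-shaped tensor in single vs half precision:
# float16 needs exactly half the bytes of float32.
shape = (3, 512, 512, 3)  # e.g. a batch of 3 images at 512x512 (illustrative)
single = np.zeros(shape, dtype=np.float32)
half = np.zeros(shape, dtype=np.float16)
print(single.nbytes // half.nbytes)  # prints 2
```

In PyTorch the equivalent switch is calling `model.half()` and casting inputs to `torch.float16`, which halves the memory footprint of weights and activations in the same way.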
Thank you so much!
I can confirm that applying the diff from @smoran's branch fixed this issue for me. Thanks!
Pull request #177 solves the problem.
When trying to run prompts, I get the error. My card has 12GB of VRAM, which should be enough to run stable-diffusion.