oom #57
Hi. Your device has 15GB of VRAM, which should be capable of running our system. Could you please restart the machine and rerun it? At what point does this OOM occur? Thanks.
Already rebooted, but it still does not work.
python gradio_run.py
Total VRAM 15102 MB, total RAM 30700 MB
BrushNet inference: do_classifier_free_guidance is True
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
I see. You could manually set the LOW_VRAM mode in MagicQuill/comfy/model_management.py.
How do I set LOW_VRAM mode in MagicQuill/comfy/model_management.py?
Try to change lines 23-24 from
to
Let me know if it works. Thanks.
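(The exact before/after snippets from this reply are not preserved above. As a sketch of what the maintainer is likely pointing at, assuming MagicQuill vendors a ComfyUI-style model_management.py where the default VRAM state is set near the top of the file, the edit would look roughly like this; verify the names against your copy of the file.)
```python
# MagicQuill/comfy/model_management.py -- sketch; line numbers may differ in your copy.
# Default (roughly):
#   vram_state = VRAMState.NORMAL_VRAM
#   set_vram_to = VRAMState.NORMAL_VRAM
# Change both assignments so model weights are streamed to the GPU in chunks:
vram_state = VRAMState.LOW_VRAM
set_vram_to = VRAMState.LOW_VRAM
```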
Unfortunately, it does not work; still OOM:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 78.00 MiB. GPU 0 has a total capacty of 14.75 GiB of which 31.06 MiB is free. Including non-PyTorch memory, this process has 14.71 GiB memory in use. Of the allocated memory 14.36 GiB is allocated by PyTorch, and 203.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
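(Independently of the code changes above: the error text itself suggests tuning the allocator via PYTORCH_CUDA_ALLOC_CONF. This mainly mitigates fragmentation, so it is unlikely to be a full fix when nearly all 14+ GiB is genuinely allocated, but it is cheap to try, for example:)
```python
# Sketch: set the allocator option suggested by the error message.
# Must take effect before torch initializes CUDA, e.g. at the very top of
# gradio_run.py or as an environment variable in the shell before launching.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # 128 MB is an illustrative value
```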
Total VRAM 15102 MB, total RAM 30700 MB
Setting it had no effect. How can I disable loading the LLaVA module and DrawNGuess?
I see, @timchenxiaoyu @fallbernana123456. Just change the 22nd line in gradio_run.py from
to
Then, you can disable DrawNGuess by clicking the wand icon above. You can still manually enter the prompt.
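(The before/after lines for gradio_run.py are not preserved above. Judging from the later reply confirming that setting llavaModel = None works, the edit is presumably along these lines; the original constructor call is an assumption on my part.)
```python
# gradio_run.py, around line 22 -- sketch.
# Original (hypothetical): llavaModel = LLaVAModel()
llavaModel = None  # skip loading LLaVA; DrawNGuess is disabled, type prompts manually instead
```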
Alternatively, @timchenxiaoyu @fallbernana123456, you may change line 456 of
This should force the model to be loaded in low-VRAM mode, but with much lower inference speed.
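(Neither the file nor the before/after code for line 456 is preserved in this thread. Purely as a hypothetical sketch of the idea, in a ComfyUI-style model_management.py the low-VRAM loading path can be forced by giving the loader a small weight budget instead of moving the whole model onto the GPU at once:)
```python
# Hypothetical sketch only -- the actual line-456 edit is not shown in this thread.
# Inside the model-loading routine, a positive low-VRAM budget makes weights
# stream to the GPU in chunks instead of residing there all at once:
lowvram_model_memory = 256 * 1024 * 1024  # ~256 MB budget; illustrative value
```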
Thanks, that solved the problem @zliucz
After setting llavaModel = None, it runs.
Perhaps the README should be updated to prepare laptop users with 8GB VRAM? Hitting the out-of-memory error on first launch is really frustrating, haha (especially after spending an hour downloading a 26GB larrrrrrrge file). Or pin this issue?
nvidia-smi
Sun Nov 24 16:54:27 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01 Driver Version: 535.183.01 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 On | 00000000:00:07.0 Off | 0 |
| N/A 44C P8 10W / 70W | 3MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 14.75 GiB of which 13.06 MiB is free. Including non-PyTorch memory, this process has 14.73 GiB memory in use. Of the allocated memory 14.51 GiB is allocated by PyTorch, and 69.53 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF