(magicquill) D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI>python main.py --novram
Total VRAM 4096 MB, total RAM 32666 MB
pytorch version: 2.5.1+cu118
Set vram state to: NO_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1650 : native
Using pytorch attention
[Prompt Server] web root: D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\web
[comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
['D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\comfy', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI', 'C:\Users\diffu\anaconda3\python312.zip', 'C:\Users\diffu\anaconda3\DLLs', 'C:\Users\diffu\anaconda3\Lib', 'C:\Users\diffu\anaconda3', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\magicquill', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\magicquill\Lib\site-packages', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\ComfyUI_MagicQuill', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes', 'D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\comfy_extras']
D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\magicquill\Lib\site-packages\huggingface_hub\file_download.py:795: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
warnings.warn(
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory in to a higher value to use more memory (at your own risk).
Traceback (most recent call last):
File "D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\nodes.py", line 2089, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\ComfyUI_MagicQuill\__init__.py", line 20, in <module>
from .magic_quill import MagicQuill
File "D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 104, in <module>
class MagicQuill(object):
File "D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 106, in MagicQuill
llavaModel = LLaVAModel()
^^^^^^^^^^^^
File "D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\ComfyUI_MagicQuill\llava_new.py", line 29, in __init__
self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\ComfyUI_MagicQuill\LLaVA\llava\model\builder.py", line 122, in load_pretrained_model
model = LlavaLlamaForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\magicquill\Lib\site-packages\transformers\modeling_utils.py", line 3790, in from_pretrained
raise ValueError(
ValueError:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.
Cannot import D:\ComfyYUAIS\ComfyUIMagicQuill\ComfyUI\custom_nodes\ComfyUI_MagicQuill module for custom nodes:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.
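For reference, the error message above points at the standard CPU-offload escape hatch for 8-bit models. A minimal sketch of what that could look like, using the exact kwarg named in the message (newer transformers versions move it into `BitsAndBytesConfig` as `llm_int8_enable_fp32_cpu_offload`); the module names in the device map below are hypothetical placeholders, not MagicQuill's actual layer layout:

```python
# Sketch only: which modules to pin to GPU vs. offload to CPU RAM depends on
# the model; these keys are placeholders, not MagicQuill's real module names.
device_map = {
    "model.embed_tokens": 0,   # keep on GPU 0
    "model.layers": "cpu",     # offload transformer blocks to CPU RAM
    "lm_head": "cpu",
}

# The actual load would then be (not executed here; it needs the weights):
# model = LlavaLlamaForCausalLM.from_pretrained(
#     model_path,                           # placeholder
#     load_in_8bit=True,
#     load_in_8bit_fp32_cpu_offload=True,   # kwarg named in the error message
#     device_map=device_map,
# )
```

Whether this makes LLaVA usable on a 4GB card is a separate question; offloaded layers run on the CPU and will be slow.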
KillyTheNetTerminal changed the title from "GTX 1650 4GB Vram and 32GB RAM. This is just a limitation?" to "GTX 1650 4GB Vram and 32GB RAM. Is this just a hardware limitation?" on Dec 28, 2024.
The only obstacle to running MagicQuill on 4GB VRAM is the prompt-guessing feature. If you don't mind putting in the prompts manually, you'll be happy to know it runs fantastically on 4GB. Here's how I got it running on an RTX 3050:
1. Close the ComfyUI interface and stop the server.
2. Go to /ComfyUI/custom_nodes/ComfyUI_MagicQuill and start by making a backup of magic_quill.py in case you want to revert.
3. Open magic_quill.py (not the backup) in a text editor.
4. Mute the following lines with a # sign as I did below. (Do NOT mute any lines containing @classmethod except the one directly above the code snippet shown below.)
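Since the original snippet did not survive in this copy, here is a hypothetical sketch of what the muted section might look like, reconstructed from the traceback above (the `llavaModel` attribute and line numbers come from the traceback; the surrounding code will differ between versions):

```python
# Hypothetical sketch, not the actual file contents of magic_quill.py.
class MagicQuill(object):
    # llavaModel = LLaVAModel()  # muted: this eager LLaVA load is what OOMs on 4 GB
    llavaModel = None            # prompt guessing disabled; type prompts manually
```

The idea is simply that the class no longer instantiates the LLaVA model at import time, so the node loads without touching the 8-bit quantized checkpoint.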
When you load the MagicQuill node, you must click the guess button to turn it off before doing anything else, or it will probably not work (it won't crash Comfy, though). I haven't yet looked into how to switch it off by default.
Important to note: you will need to give ComfyUI a hard restart (as in step 1 above) if you run any other process before trying to run MagicQuill, or it will OOM. A reboot or memory dump from within Comfy won't solve it, because Comfy's efficiency works against you in this case. So if you generate some pictures that you want to edit in MagicQuill, do a hard restart between generating the pictures and running MagicQuill.