
cquantize_blockwise_fp16_nf4 #22

Open · IRL-JPEG opened this issue Dec 21, 2024 · 0 comments

Still struggling to get this installed in my Colab ComfyUI setup. I've resolved a lot of dependency issues and put the models in the correct locations, but still no joy.

Thanks in advance.

```
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
Loading checkpoint shards:   0% 0/3 [00:54<?, ?it/s]
Traceback (most recent call last):
  File "/content/drive/MyDrive/comfyuiaug24/ComfyUI/nodes.py", line 1998, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/content/drive/MyDrive/comfyuiaug24/ComfyUI/custom_nodes/ComfyUI_MagicQuill/__init__.py", line 20, in <module>
    from .magic_quill import MagicQuill
  File "/content/drive/MyDrive/comfyuiaug24/ComfyUI/custom_nodes/ComfyUI_MagicQuill/magic_quill.py", line 104, in <module>
    class MagicQuill(object):
  File "/content/drive/MyDrive/comfyuiaug24/ComfyUI/custom_nodes/ComfyUI_MagicQuill/magic_quill.py", line 106, in MagicQuill
    llavaModel = LLaVAModel()
  File "/content/drive/MyDrive/comfyuiaug24/ComfyUI/custom_nodes/ComfyUI_MagicQuill/llava_new.py", line 29, in __init__
    self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
  File "/content/drive/MyDrive/comfyuiaug24/ComfyUI/custom_nodes/ComfyUI_MagicQuill/LLaVA/llava/model/builder.py", line 122, in load_pretrained_model
    model = LlavaLlamaForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3850, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 4284, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 839, in _load_state_dict_into_meta_model
    set_module_quantized_tensor_to_device(model, param_name, param_device, value=param)
  File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/bitsandbytes.py", line 121, in set_module_quantized_tensor_to_device
    new_value = bnb.nn.Params4bit(new_value, requires_grad=False, **kwargs).to(device)
  File "/usr/local/lib/python3.10/dist-packages/bitsandbytes/nn/modules.py", line 331, in to
    return self._quantize(device)
  File "/usr/local/lib/python3.10/dist-packages/bitsandbytes/nn/modules.py", line 296, in _quantize
    w_4bit, quant_state = bnb.functional.quantize_4bit(
  File "/usr/local/lib/python3.10/dist-packages/bitsandbytes/functional.py", line 1236, in quantize_4bit
    lib.cquantize_blockwise_fp16_nf4(*args)
AttributeError: 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'

Cannot import /content/drive/MyDrive/comfyuiaug24/ComfyUI/custom_nodes/ComfyUI_MagicQuill module for custom nodes: 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'
```
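
For anyone debugging the same thing: the `'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'` error means that `lib` in `bitsandbytes/functional.py` is `None`, i.e. bitsandbytes never managed to load its compiled CUDA library (on Colab this is commonly a mismatch between the installed bitsandbytes build and the CUDA runtime). A minimal diagnostic sketch, assuming a CUDA-enabled PyTorch and a bitsandbytes version where `functional.lib` is the ctypes handle the traceback dereferences:

```python
# Check whether bitsandbytes actually loaded its native CUDA library.
import torch
import bitsandbytes as bnb

print("torch:", torch.__version__, "| CUDA runtime:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

# This is the same `lib` the traceback dereferences. If the compiled
# libbitsandbytes_cuda*.so failed to load, it stays None, and every 4-bit
# quantization call (including cquantize_blockwise_fp16_nf4) raises
# AttributeError exactly as above.
print("bnb native lib:", bnb.functional.lib)
```

If the last line prints `None`, bitsandbytes' built-in self-check (`python -m bitsandbytes`) should show which CUDA setup it detected or why loading failed; reinstalling a build that matches the Colab CUDA runtime (`pip install -U bitsandbytes`) is the usual first thing to try.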