XLabs Sampler Error on MacOS: Torch not compiled with CUDA enabled #51

Open · 109km opened this issue Aug 19, 2024 · 14 comments

@109km commented Aug 19, 2024

OS: MacOS
ComfyUI Version: Newest
x-flux-comfyui Version: Newest

We are patching diffusion model, be patient please
Requested to load FluxClipModel_
Loading 1 new model
clip missing: ['text_projection.weight']
Requested to load Flux
Loading 1 new model
!!! Exception during processing !!! Torch not compiled with CUDA enabled
Traceback (most recent call last):
  File "/Users/xinhengs/AI/ComfyUI/execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xinhengs/AI/ComfyUI/execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xinhengs/AI/ComfyUI/execution.py", line 168, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/xinhengs/AI/ComfyUI/execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xinhengs/AI/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 289, in sampling
    if torch.cuda.is_bf16_supported():
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xinhengs/miniconda3/lib/python3.11/site-packages/torch/cuda/__init__.py", line 128, in is_bf16_supported
    device = torch.cuda.current_device()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xinhengs/miniconda3/lib/python3.11/site-packages/torch/cuda/__init__.py", line 778, in current_device
    _lazy_init()
  File "/Users/xinhengs/miniconda3/lib/python3.11/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
@fridayowl

How to fix this?

@ThiagoSousa

Yes, there is a bug in nodes.py at line 303, in the sampling method. The main issue is that it tries to query bf16 support from CUDA, which can't be done on MPS.

I tried the following steps to fix it:

  • I commented out lines 302-306 and just set dtype_model = torch.float16.
  • Doing that yields another error: "Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead." in math.py line 17, within the rope method, where float64 is hardcoded for the torch.arange call. I hardcoded torch.float32 instead.
  • Running again, I then hit another error in a cast function:
Traceback (most recent call last):
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/execution.py", line 168, in _map_node_over_list
    process_inputs(input_dict, i)
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 377, in sampling
    x = denoise_controlnet(
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 289, in denoise_controlnet
    pred = model_forward(
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 28, in model_forward
    img = model.img_in(img)
  File ".../miniconda3/envs/AIModelExploration/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File ".../miniconda3/envs/AIModelExploration/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/comfy/ops.py", line 65, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/comfy/ops.py", line 60, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/comfy/ops.py", line 41, in cast_bias_weight
    bias = cast_to(s.bias, dtype, device, non_blocking=non_blocking)
  File ".../Documents/PersonalProjects/AIModelExploration/ComfyUI/comfy/ops.py", line 25, in cast_to
    r.copy_(weight, non_blocking=non_blocking)
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

Apparently Float8_e4m3fn is not supported on MPS, but I couldn't change it to float32 here. This issue seems to be linked to comfyanonymous/ComfyUI#4165 in ComfyUI. I tried the solutions there, but they didn't work.
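
For reference, a minimal sketch of device/dtype selection that avoids the crash above, assuming only stock PyTorch (pick_device_and_dtype is a hypothetical helper, not part of x-flux-comfyui):

import torch

def pick_device_and_dtype():
    # Sketch only: check MPS before ever touching the CUDA API.
    # On the torch builds in this thread, torch.cuda.is_bf16_supported()
    # raises AssertionError("Torch not compiled with CUDA enabled")
    # when CUDA is unavailable, so it must be guarded.
    if torch.backends.mps.is_available():
        # MPS supports neither float64 nor mixed f16/bf16 matmuls.
        return torch.device("mps"), torch.float16
    if torch.cuda.is_available():
        dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
        return torch.device("cuda"), dtype
    return torch.device("cpu"), torch.float32

device, dtype_model = pick_device_and_dtype()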

@edwios commented Aug 23, 2024

You could change ComfyUI/custom_nodes/x-flux-comfyui/nodes.py to the following (mainly lines 302-303 below):

300         if torch.backends.mps.is_available():
301             device = torch.device("mps")
302             dtype_model = torch.float16
303         elif torch.cuda.is_bf16_supported():
304             dtype_model = torch.bfloat16
305         else:
306             dtype_model = torch.float16

This is applicable to commit 202d197 only.

But then other errors occurred, as @ThiagoSousa mentioned above, so it was not working at all :_(
Force it to torch.float32 in ComfyUI/custom_nodes/x-flux-comfyui/xflux/src/flux/math.py like:

 17     scale = torch.arange(0, dim, 2, dtype=torch.float32, device=pos.device) / dim

I have also changed to the Unet Loader (GGUF) to load the Flux1 Dev model Flux1.dev-Q8_0.gguf, or my machine (M1 Max, 64GB) will run out of memory.

With these changes it works: I can run the flow without problems.

(Edited to report successful run)
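
For context, a sketch of what the patched rope() in xflux/src/flux/math.py might look like, based on the upstream Flux implementation; only the dtype in the torch.arange call changes (float64 to float32), since MPS has no float64:

import torch
from einops import rearrange
from torch import Tensor

def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
    assert dim % 2 == 0
    # float32 instead of the upstream float64, which MPS does not support
    scale = torch.arange(0, dim, 2, dtype=torch.float32, device=pos.device) / dim
    omega = 1.0 / (theta ** scale)
    out = torch.einsum("...n,d->...nd", pos, omega)
    out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1)
    out = rearrange(out, "b n d (i j) -> b n d i j", i=2, j=2)
    return out.float()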

@darkcrocodile

> (quoting @edwios's fix above)

This made all the XLabs sampler nodes unusable...

@Th3Disasterpiece

Is there a way to get this working on macOS?

@ThiagoSousa

> (quoting @edwios's fix above)

Any chance the Flux1.dev-Q8_0.gguf would work on a Mac M1 with 32GB? I haven't downloaded this model yet.

@zhoudeheng

Error occurred when executing XlabsSampler: Torch not compiled with CUDA enabled

  File "/Users/dehengzhou/Desktop/ComfyUI_SDXL/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/dehengzhou/Desktop/ComfyUI_SDXL/execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/dehengzhou/Desktop/ComfyUI_SDXL/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/dehengzhou/Desktop/ComfyUI_SDXL/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/dehengzhou/Desktop/ComfyUI_SDXL/custom_nodes/x-flux-comfyui/nodes.py", line 304, in sampling
    if torch.cuda.is_bf16_supported():
  File "/Users/dehengzhou/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 128, in is_bf16_supported
    device = torch.cuda.current_device()
  File "/Users/dehengzhou/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 789, in current_device
    _lazy_init()
  File "/Users/dehengzhou/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")

@paulVu commented Aug 27, 2024

Error occurred when executing KSampler:

Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

  File "/Users/po/Documents/ComfyUI/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/po/Documents/ComfyUI/execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/po/Documents/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/po/Documents/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/po/Documents/ComfyUI/nodes.py", line 1429, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/Users/po/Documents/ComfyUI/nodes.py", line 1396, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/Users/po/Documents/ComfyUI/comfy/sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/Users/po/miniconda3/envs/pytorch_env_311/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/Users/po/Documents/ComfyUI/comfy/k_diffusion/sampling.py", line 144, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "/Users/po/Documents/ComfyUI/comfy/samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "/Users/po/Documents/ComfyUI/comfy/model_base.py", line 142, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/Users/po/miniconda3/envs/pytorch_env_311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/po/miniconda3/envs/pytorch_env_311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/po/Documents/ComfyUI/comfy/ldm/flux/model.py", line 159, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
  File "/Users/po/Documents/ComfyUI/comfy/ldm/flux/model.py", line 104, in forward_orig
    img = self.img_in(img)
  File "/Users/po/miniconda3/envs/pytorch_env_311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/po/miniconda3/envs/pytorch_env_311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/po/Documents/ComfyUI/comfy/ops.py", line 76, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "/Users/po/Documents/ComfyUI/comfy/ops.py", line 71, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
  File "/Users/po/Documents/ComfyUI/comfy/ops.py", line 50, in cast_bias_weight
    bias = cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)
  File "/Users/po/Documents/ComfyUI/comfy/ops.py", line 28, in cast_to
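A minimal sketch of the underlying MPS limitation, assuming PyTorch 2.1 or newer on an Apple Silicon Mac (illustrative only, not ComfyUI code): fp8-quantized weights must be upcast before they can be moved to the MPS device.

import torch

# Float8_e4m3fn tensors can exist on the CPU, but MPS cannot hold them.
w8 = torch.zeros(4, 4, dtype=torch.float8_e4m3fn)
# w8.to("mps")  # raises: Trying to convert Float8_e4m3fn to the MPS backend ...
w16 = w8.to(torch.float16).to("mps")  # upcast first, then move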

@cl000100 commented Sep 2, 2024

> (quoting @edwios's fix above)

After making these changes it still doesn't run and reports the same error. How can this be fixed?

@kensiu7 commented Sep 4, 2024

> (quoting the original report and traceback from @109km)

Same here

@digimanshq

same error

@BoydRotgans

same

@cadje commented Sep 26, 2024

same here

@RambleRainbow

I've tried all the steps mentioned above, but I'm still getting an error (listed below). I also tested another Flux ControlNet solution, InstantX, from the official ComfyUI examples, and that one works fine.

/AppleInternal/Library/BuildRoots/c7c74b64-74b4-11ef-aeda-9635a580fe0d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:43:0: error: 'mps.matmul' op detected operation with both F16 and BF16 operands which is not supported
(mpsFileLoc): /AppleInternal/Library/BuildRoots/c7c74b64-74b4-11ef-aeda-9635a580fe0d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:43:0: note: see current operation: %5 = "mps.matmul"(%arg0, %4) <{transpose_lhs = false, transpose_rhs = false}> : (tensor<1x4096x64xf16>, tensor<64x3072xbf16>) -> tensor<1x4096x3072xf16>
(mpsFileLoc): /AppleInternal/Library/BuildRoots/c7c74b64-74b4-11ef-aeda-9635a580fe0d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:43:0: error: 'mps.matmul' op detected operation with both F16 and BF16 operands which is not supported
(mpsFileLoc): /AppleInternal/Library/BuildRoots/c7c74b64-74b4-11ef-aeda-9635a580fe0d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:43:0: note: see current operation: %5 = "mps.matmul"(%arg0, %4) <{transpose_lhs = false, transpose_rhs = false}> : (tensor<1x4096x64xf16>, tensor<64x3072xbf16>) -> tensor<1x4096x3072xf16>
/AppleInternal/Library/BuildRoots/c7c74b64-74b4-11ef-aeda-9635a580fe0d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:988: failed assertion `original module failed verification'
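
For context, a minimal sketch of what this MPSGraph error is complaining about, using the tensor shapes from the log (illustrative only; the shapes come from the error message, not from the x-flux-comfyui code):

import torch

mps = torch.device("mps")
a = torch.randn(1, 4096, 64, dtype=torch.float16, device=mps)   # f16 activations
w = torch.randn(64, 3072, dtype=torch.bfloat16, device=mps)     # bf16 weights
# a @ w  # fails on MPS: 'mps.matmul' op detected operation with both F16 and BF16 operands
out = a @ w.to(torch.float16)  # casting both operands to a single dtype avoids it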
