[Bug]: (WSL) xFormers wasn't build with CUDA support #6871
Comments
What GPU do you have?
Originally I had dual 3090s for this, but I pulled one out when I was reinstalling. xFormers should just work when --xformers is placed in webui-user.sh.
Are you trying with Python 3.10.6? Is this an SD 2.0 model?
It's on Linux. I could not get 3.10.6; it would only install the latest 3.10.9, and running ./webui.sh would use 3.9.12. The model is also just an SD 1.5 model.
3.10.x is supposed to work. Since it's Linux, you built the xformers wheel manually before, and it worked, but doesn't anymore? Check your .whl filename, maybe it was a different Python version?
Alright, sorry if I'm making little sense, it's late and I've been at this for a few hours.
Also, it could just be a case of WSL Ubuntu being stupid. Nothing is running at the moment.
Do you absolutely need to run with WSL?
Alright, looks like I got 3.10.9 working, but for some reason it would not find my GPU (the output started with Traceback (most recent call last):). Really this just ended up being a whole rabbit hole of reinstalling and uninstalling old versions, the same versions, etc., just to somehow get it working again.
Yes, use 11.8.
I'm not sure what to run to downgrade to 11.8; I tried wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.0-1_all.deb, but nvidia-smi still reports that it's on CUDA version 12.
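For reference, the usual follow-up to that keyring download looks roughly like the sketch below (based on NVIDIA's WSL-Ubuntu repo instructions; the exact package name cuda-toolkit-11-8 is an assumption about what the repo offers). Also note that nvidia-smi reports the highest CUDA version the driver supports, not the toolkit that is installed, so it will keep saying 12 even after an 11.8 toolkit goes in.

```bash
# Sketch: installing the CUDA 11.8 toolkit on WSL-Ubuntu after downloading the keyring .deb
# (package name assumed; check what the repo actually provides)
sudo dpkg -i cuda-keyring_1.0-1_all.deb     # register NVIDIA's apt repo and signing key
sudo apt-get update
sudo apt-get install -y cuda-toolkit-11-8   # toolkit only; the Windows host driver keeps managing the GPU
/usr/local/cuda-11.8/bin/nvcc --version     # should report 11.8; nvidia-smi will still show the driver's CUDA 12
```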
Did you set this up with xformers working before in WSL? Why not use the old .whl and old commit, or did you forget which one it was?
I did have xformers working before. It was working on an older commit, but I don't remember which version. I had an issue where it was truncating my prompt past 75 tokens; that is when I used git pull, but it was not properly updating the files, and I was then told by people on the project's AI server to just reinstall the entire thing.
So you deleted the folder? :(
Sadly yes, I was told so ;-;
If you remember the date, maybe we could figure out the commit version, or you can keep trying.
I could try to see if I can find an old screenshot of the version. I do know that it was before the change where height and width could only be set in steps of 64.
Yeah, I'm not able to find much on the older commit it was on. Now it's really just a matter of testing more tomorrow and trying to see if I can downgrade CUDA from 12 to 11.8 or lower.
OK, I don't know why, but for some reason WSL Ubuntu would not let me downgrade CUDA from 12 to 11.8, even with a full CUDA reinstall and driver reinstall.
I have the same issue with Linux Mint 21. I'm running Python 3.8 in an Anaconda env with cudatoolkit 11.3.1.
Same problem. Running Debian GNU/Linux 11.
Same problem; the things I've tried are below. It started today and I'm sure it's an easy fix, but it might not be, so:
Could possibly try:
In one of these steps, maybe an error will show up that would otherwise be swallowed.
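Separate from the checklist above, here is a minimal diagnostic sketch that tends to surface the swallowed error: run it inside the webui's venv to see whether torch can see the GPU and whether the installed xformers wheel actually shipped with CUDA kernels (the xformers.info module may not exist on very old xformers versions).

```bash
# Run from the stable-diffusion-webui folder, inside its venv
source venv/bin/activate
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -m xformers.info   # lists the memory_efficient_attention operators and why any of them are unavailable
```

If torch.cuda.is_available() prints False, the problem is the torch install rather than xformers.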
Renaming the venv folder to x-venv fixed the issue for me.
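For anyone else trying this: the rename simply forces webui.sh to build a fresh venv (and pull torch and xformers again) on the next launch. A rough sketch, assuming the default folder layout:

```bash
cd stable-diffusion-webui
mv venv x-venv          # or delete it outright: rm -rf venv
./webui.sh --xformers   # recreates the venv and reinstalls the requirements, including xformers
```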
Unfortunately those steps you outlined, @atensity, didn't work for me; I'm getting this error:
I'm getting this error on Win10, running just via CLI, so it doesn't look completely isolated to WSL.
I guess I should note I'm using a bare-metal university server on 22.04, not WSL, if that helps identify the scope of the issue.
I had the same issue as OP, also running on Linux. I could make it work by:
output:
What commit were you using?
For web-ui? I just did a fresh install today.
Thanks, I'll give that a shot and report back.
The only solution I have found to work so far is just to remove
I actually got xformers working following @chrisburrc's steps on the latest version. I now have "Can't initialize NVML" warnings, but they don't seem consequential.
I had the same problem on an RTX 3090 / Ubuntu 20.04.
I solved it this way:
This also solved it for me.
Such a simple solution! Worked right away. It just re-installed all requirements and ran perfectly afterwards.
I try
Same thing for me. I have a 6700 XT on Linux, and it did work after renaming venv to x-venv; I guess deleting it will be okay too.
Just in case, it's better to remove Python 3.10 and xFormers with all the additional files and install everything again. The same thing happened to me: compatibility problems started, I had run into this situation before, and a full reinstall of the supporting components helped me.
https://pytorch.org/get-started/previous-versions/ TIP: keep the Python folder separate from the system Python, for example install or copy it into the A1111 folder, and carry out the component installation inside it. First, you need to specify the path to that Python folder.
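To make that concrete, the previous-versions page lists pip commands along these lines; the version pair below is only an illustration of a CUDA-enabled torch plus a matching prebuilt xformers wheel, not something prescribed in this thread, so pick whatever pair matches your setup.

```bash
# Inside the webui venv: example of pinning a CUDA build of torch and a matching prebuilt xformers wheel
# (versions are illustrative; take the exact command from https://pytorch.org/get-started/previous-versions/)
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
pip install xformers==0.0.16
```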
Is there an existing issue for this?
What happened?
Had to reinstall the webui because git pull was not updating everything properly.
Used --xformers, it installed normally, and on generating an image I get a long string of errors saying that "xFormers wasn't build with CUDA support". Manually trying to reinstall xFormers gives the same issue.
Steps to reproduce the problem
--xformers in webui-user.sh (see the snippet below)
Generate an image
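For reference, the usual place for the flag is the COMMANDLINE_ARGS line of webui-user.sh:

```bash
# webui-user.sh
export COMMANDLINE_ARGS="--xformers"
```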
What should have happened?
xFormers should be properly built with CUDA support.
Commit where the problem happens
dac59b9
What platforms do you use to access UI ?
Linux
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
Additional information, context and logs
Error completing request
Arguments: ('task(csqxfz4flxydfi1)', -', 'None', 'None', 30, 15, False, False, 1, 1, 11, -1.0, -1.0, 0, 0, 0, False, 512, 696, False, 0.7, 2, 'Latent', 0, 0, 0, 0, False, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
File "/home/shieri/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/home/shieri/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/home/shieri/stable-diffusion-webui/modules/txt2img.py", line 52, in txt2img
processed = process_images(p)
File "/home/shieri/stable-diffusion-webui/modules/processing.py", line 480, in process_images
res = process_images_inner(p)
File "/home/shieri/stable-diffusion-webui/modules/processing.py", line 609, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/home/shieri/stable-diffusion-webui/modules/processing.py", line 801, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/home/shieri/stable-diffusion-webui/modules/sd_samplers.py", line 544, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/home/shieri/stable-diffusion-webui/modules/sd_samplers.py", line 447, in launch_sampling
return func()
File "/home/shieri/stable-diffusion-webui/modules/sd_samplers.py", line 544, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/shieri/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shieri/stable-diffusion-webui/modules/sd_samplers.py", line 350, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [tensor[a:b]], "c_concat": [image_cond_in[a:b]]})
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shieri/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/home/shieri/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/home/shieri/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shieri/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shieri/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shieri/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
x = layer(x, context)
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shieri/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 324, in forward
x = block(x, context=context[i])
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shieri/stable-diffusion-webui/modules/sd_hijack_checkpoint.py", line 4, in BasicTransformerBlock_forward
return checkpoint(self._forward, x, context)
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 235, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 96, in forward
outputs = run_function(*args)
File "/home/shieri/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 262, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "/home/shieri/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shieri/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 293, in xformers_attention_forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
File "/home/shieri/stable-diffusion-webui/repositories/xformers/xformers/ops/fmha/init.py", line 197, in memory_efficient_attention
return _memory_efficient_attention(
File "/home/shieri/stable-diffusion-webui/repositories/xformers/xformers/ops/fmha/init.py", line 293, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/home/shieri/stable-diffusion-webui/repositories/xformers/xformers/ops/fmha/init.py", line 309, in _memory_efficient_attention_forward
op = _dispatch_fw(inp)
File "/home/shieri/stable-diffusion-webui/repositories/xformers/xformers/ops/fmha/dispatch.py", line 95, in _dispatch_fw
return _run_priority_list(
File "/home/shieri/stable-diffusion-webui/repositories/xformers/xformers/ops/fmha/dispatch.py", line 70, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 5568, 8, 40) (torch.float16)
     key         : shape=(1, 5568, 8, 40) (torch.float16)
     value       : shape=(1, 5568, 8, 40) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
`flshattF` is not supported because:
    xFormers wasn't build with CUDA support
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    requires A100 GPU
`smallkF` is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 40