apparent CUDA error at compile time. "Torch not compiled with CUDA enabled" #40

Open
bondo01 opened this issue Apr 6, 2024 · 0 comments

bondo01 commented Apr 6, 2024

On startup of the latest version of Fooocus-ControlNet-SDXL (F-C-SDXL), the console log (see below) shows the message "Torch not compiled with CUDA enabled" near the end.
Shouldn't a CUDA-enabled build of torch be used with CUDA hardware?
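
For reference, a quick way to check which torch build the fooocusControl environment actually has (a minimal sketch to run inside the activated conda env; the cu121 index URL in the comment is only an example and should match your installed CUDA/driver version):

import torch

print(torch.__version__)          # e.g. "2.x.x+cpu" would indicate a CPU-only build
print(torch.version.cuda)         # None on a CPU-only build, e.g. "12.1" on a CUDA build
print(torch.cuda.is_available())  # False here, consistent with the AssertionError in the log below

# If this shows a CPU-only build, reinstalling torch from a CUDA wheel index should help, for example:
#   pip uninstall torch torchvision torchaudio
#   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121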

Full Console Log

(fooocusControl) PS D:\Fooocus_win64\Fooocus-ControlNet-SDXL> python entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.14 | packaged by Anaconda, Inc. | (main, Mar 21 2024, 16:20:14) [MSC v.1916 64 bit (AMD64)]
Fooocus version: 2.1.701
Downloading: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\loras\sd_xl_offset_example-lora_1.0.safetensors

100%|█████████████████████████████████████████████████████████████████████████████| 47.3M/47.3M [00:00<00:00, 67.1MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\vae_approx\xlvaeapp.pth

100%|███████████████████████████████████████████████████████████████████████████████| 209k/209k [00:00<00:00, 4.70MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\vae_approx\vaeapp_sd15.pth

100%|███████████████████████████████████████████████████████████████████████████████| 209k/209k [00:00<00:00, 4.96MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\vae_approx\xl-to-v1_interposer-v3.1.safetensors

100%|█████████████████████████████████████████████████████████████████████████████| 6.25M/6.25M [00:00<00:00, 29.0MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\prompt_expansion\fooocus_expansion\pytorch_model.bin

100%|███████████████████████████████████████████████████████████████████████████████| 335M/335M [00:04<00:00, 85.0MB/s]
C:\Users\micro\.conda\envs\fooocusControl\lib\site-packages\gradio_client\documentation.py:103: UserWarning: Could not get documentation group for <class 'gradio.mix.Parallel'>: No known documentation group for module 'gradio.mix'
warnings.warn(f"Could not get documentation group for {cls}: {exc}")
C:\Users\micro\.conda\envs\fooocusControl\lib\site-packages\gradio_client\documentation.py:103: UserWarning: Could not get documentation group for <class 'gradio.mix.Series'>: No known documentation group for module 'gradio.mix'
warnings.warn(f"Could not get documentation group for {cls}: {exc}")
Exception in thread Thread-3 (worker):
Traceback (most recent call last):
File "C:\Users\micro.conda\envs\fooocusControl\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\micro.conda\envs\fooocusControl\lib\threading.py", line 953, in run
self._target(*self._args, **self.kwargs)
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\modules\async_worker.py", line 21, in worker
import modules.default_pipeline as pipeline
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\modules\default_pipeline.py", line 1, in
import modules.core as core
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\modules\core.py", line 3, in
from modules.patch import patch_all
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\modules\patch.py", line 4, in
import fcbh.model_base
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\model_base.py", line 2, in
from fcbh.ldm.modules.diffusionmodules.openaimodel import UNetModel
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\ldm\modules\diffusionmodules\openaimodel.py", line 16, in
from ..attention import SpatialTransformer
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\ldm\modules\attention.py", line 10, in
from .sub_quadratic_attention import efficient_dot_product_attention
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\ldm\modules\sub_quadratic_attention.py", line 27, in
from fcbh import model_management
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\model_management.py", line 114, in
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\model_management.py", line 83, in get_torch_device
return torch.device(torch.cuda.current_device())
File "C:\Users\micro.conda\envs\fooocusControl\lib\site-packages\torch\cuda_init
.py", line 787, in current_device
lazy_init()
File "C:\Users\micro.conda\envs\fooocusControl\lib\site-packages\torch\cuda_init
.py", line 293, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
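
Note that the failure happens in the background worker thread (Thread-3) while importing fcbh.model_management: get_torch_device() calls torch.cuda.current_device(), which lazily initializes CUDA and raises AssertionError on a CPU-only torch build, which is why the Gradio UI still reports "Running on local URL" afterwards. A minimal sketch of that behaviour (guard_device is a hypothetical helper for illustration, not part of the Fooocus code):

import torch

# On a CPU-only torch build, torch.cuda.current_device() triggers CUDA lazy init
# and raises AssertionError("Torch not compiled with CUDA enabled"), mirroring
# the failing call in fcbh/model_management.py's get_torch_device().
def guard_device():
    # torch.cuda.is_available() returns False (instead of raising) on a CPU-only
    # build, so it can be used to fall back to the CPU gracefully.
    if torch.cuda.is_available():
        return torch.device(torch.cuda.current_device())
    return torch.device("cpu")

print(guard_device())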
