
ver 1.5.1 Error with sd3_medium_incl_clips_t5xxlfp16.safetensors #148

Open
ptits opened this issue Jun 15, 2024 · 3 comments

Comments


ptits commented Jun 15, 2024

It runs well with sd3_medium_incl_clips.safetensors,

but with sd3_medium_incl_clips_t5xxlfp16.safetensors it gives these errors:
python entry_with_update.py --share
You have the latest version
/data/ptits/focus/RuinedDS3/RuinedFooocus/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
RuinedFooocus version: 1.51.0
Comfy Backend update check complete.
playsound is relying on another python subprocess. Please use `pip install pygobject` if you want playsound to run more efficiently.
Total VRAM 81051 MB, total RAM 1290100 MB
pytorch version: 2.3.1+cu121
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.1+cu121 with CUDA 1201 (you have 2.3.1+cu121)
    Python  3.10.13 (you have 3.10.12)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
xformers version: 0.0.23
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA A100-SXM4-80GB : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Running on local URL:  http://127.0.0.1:7860
Loading base model: sd_xl_base_1.0_0.9vae.safetensors
IMPORTANT: You are using gradio version 3.50.2, however version 4.29.0 is available, please upgrade.
--------
Running on public URL: https://9184c63067f4becfda.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
Base model loaded: sd_xl_base_1.0_0.9vae.safetensors
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860 or https://9184c63067f4becfda.gradio.live
Loading base model: sd3_medium_incl_clips_t5xxlfp16.safetensors
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE
Base model loaded: sd3_medium_incl_clips_t5xxlfp16.safetensors
LoRAs loaded: []
Requested to load SD3ClipModel_
Loading 1 new model
Time taken: 1.97 seconds Pipeline process
Exception in thread Thread-5 (worker):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/modules/async_worker.py", line 341, in worker
    handler(task)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/modules/async_worker.py", line 57, in handler
    process(gen_data)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/modules/async_worker.py", line 240, in process
    imgs = pipeline.process(
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/modules/sdxl_pipeline.py", line 280, in process
    if self.textencode("+", positive_prompt, clip_skip):
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/modules/sdxl_pipeline.py", line 202, in textencode
    self.conditions[id]["cache"] = CLIPTextEncode().encode(
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/repositories/ComfyUI-from-StabilityAI-Official/nodes.py", line 58, in encode
    cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/sd.py", line 142, in encode_from_tokens
    cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/sd3_clip.py", line 124, in encode_token_weights
    t5_out, t5_pooled = self.t5xxl.encode_token_weights(token_weight_pars_t5)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/sd1_clip.py", line 40, in encode_token_weights
    out, pooled = self.encode(to_encode)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/sd1_clip.py", line 201, in encode
    return self(tokens)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ptits/focus/RuinedDS3/RuinedFooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/sd1_clip.py", line 186, in forward
    z = outputs[1].float()
AttributeError: 'NoneType' object has no attribute 'float'
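The failing line in the traceback is `z = outputs[1].float()` in sd1_clip.py: the T5-XXL text encoder evidently returns `None` in the tuple slot that this code expects to hold a pooled tensor, and calling `.float()` on `None` raises the AttributeError. A minimal sketch of that failure pattern, using hypothetical stand-ins rather than the real ComfyUI classes:

```python
class FakeTensor:
    """Stand-in for a torch tensor; only .float() matters for this sketch."""
    def float(self):
        return self

def encode(tokens):
    # Stand-in for the T5-XXL forward pass: it produces hidden states but
    # no pooled output, so the second tuple slot is None.
    return FakeTensor(), None

def forward_buggy(tokens):
    outputs = encode(tokens)
    # Mirrors sd1_clip.py line 186: blows up when outputs[1] is None.
    return outputs[1].float()

def forward_guarded(tokens):
    outputs = encode(tokens)
    # A defensive variant (an assumption, not the upstream fix): fall back
    # to the hidden states when no pooled output exists.
    z = outputs[1] if outputs[1] is not None else outputs[0]
    return z.float()
```

Calling `forward_buggy` reproduces the `AttributeError: 'NoneType' object has no attribute 'float'` seen above, while `forward_guarded` survives; whether falling back to the hidden states is semantically correct for SD3's conditioning is a separate question for the maintainers.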


ptits commented Jun 16, 2024

I have just reproduced it on Windows 10 as well, not only on Ubuntu as above.

Version 1.5.1, launched with the updater:

========================

Exception in thread Thread-8 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\modules\async_worker.py", line 341, in worker
    handler(task)
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\modules\async_worker.py", line 57, in handler
    process(gen_data)
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\modules\async_worker.py", line 240, in process
    imgs = pipeline.process(
  File "f:\RuinedFooocus_win64_1-25-2\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\modules\sdxl_pipeline.py", line 280, in process
    if self.textencode("+", positive_prompt, clip_skip):
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\modules\sdxl_pipeline.py", line 202, in textencode
    self.conditions[id]["cache"] = CLIPTextEncode().encode(
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\repositories\ComfyUI-from-StabilityAI-Official\nodes.py", line 58, in encode
    cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 142, in encode_from_tokens
    cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd3_clip.py", line 124, in encode_token_weights
    t5_out, t5_pooled = self.t5xxl.encode_token_weights(token_weight_pars_t5)
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd1_clip.py", line 40, in encode_token_weights
    out, pooled = self.encode(to_encode)
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd1_clip.py", line 201, in encode
    return self(tokens)
  File "f:\RuinedFooocus_win64_1-25-2\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\RuinedFooocus_win64_1-25-2\RuinedFooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd1_clip.py", line 186, in forward
    z = outputs[1].float()
AttributeError: 'NoneType' object has no attribute 'float'


ptits commented Jun 17, 2024

Any news?

runew0lf (Owner) commented

yeah.. doesnt work :D
