
this works with AMD? #2

Open · KillyTheNetTerminal opened this issue Jun 6, 2024 · 13 comments
@KillyTheNetTerminal

No description provided.

@ArkhamInsanity

Currently using it on my Ubuntu PC with an AMD 6800 XT graphics card; about 45 seconds for a 45-second song with default parameters (100 steps).

@KillyTheNetTerminal (author)

Oh, cool, but using ROCm? I use DirectML on Windows (RX 580). How did you install this? I get errors and my ComfyUI installation got corrupted.

@ArkhamInsanity

> Oh, cool, but using ROCm? I use DirectML on Windows (RX 580). How did you install this? I get errors and my ComfyUI installation got corrupted.

Yes, with ROCm. I've never tried DirectML on Windows myself, sorry.

@ghost

ghost commented Jun 17, 2024

Hmm, it requires flash_attn. How did you get it working with ROCm?

```
TypeError: expected string or bytes-like object
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for flash_attn
  Running setup.py clean for flash_attn
Failed to build flash_attn
```

@ArkhamInsanity

> Hmm, it requires flash_attn, which requires Nvidia, no? How did you get it working with ROCm?
>
> ```
> TypeError: expected string or bytes-like object
>       [end of output]
>
>   note: This error originates from a subprocess, and is likely not a problem with pip.
>   ERROR: Failed building wheel for flash_attn
>   Running setup.py clean for flash_attn
> Failed to build flash_attn
> ```

I don't know how to help you; I used ComfyUI Manager to install it, like everything I install in ComfyUI, and it worked right away.

@ghost

ghost commented Jun 17, 2024

I think it is, since StableAudioSampler was updated today/yesterday? I installed it fine from ComfyUI Manager last week, but today, no go. Anyway, I found there is a ROCm flash_attn from our friends at AMD:

https://rocm.docs.amd.com/en/latest/how-to/llm-fine-tuning-optimization/model-acceleration-libraries.html

This installs Flash Attention 2 for ROCm, and the node becomes available. In the process of testing whether it actually works, but I'm confident. :-)

NO!! It installed, but it complained that I need an MI200 or MI300 GPU when I ran a workflow with StableAudioSampler. GaAAhHhH!!

@KillyTheNetTerminal (author)

I'm a little confused about all this, haha... Flash attention is something that's not available in DirectML? And ROCm, what is that about? I installed a ComfyUI ZLUDA version and it also disables that option.

@ArkhamInsanity

> I think it is, since StableAudioSampler was updated today/yesterday? I installed it fine from ComfyUI Manager last week, but today, no go. Anyway, I found there is a ROCm flash_attn from our friends at AMD:
>
> https://rocm.docs.amd.com/en/latest/how-to/llm-fine-tuning-optimization/model-acceleration-libraries.html
>
> This installs Flash Attention 2 for ROCm, and the node becomes available. In the process of testing whether it actually works, but I'm confident. :-)

I have to update ^^ Tomorrow I may have more time to try it.

> I'm a little confused about all this, haha... Flash attention is something that's not available in DirectML? And ROCm, what is that about? I installed a ComfyUI ZLUDA version and it also disables that option.

I am not really good with the tech, but ROCm works on Linux, and soon on Windows.

@ghost

ghost commented Jun 17, 2024

Well, I tried:
option one installed, but it told me it only works on MI200/MI300 GPUs (it tells me that when I try to run a workflow);
the other option, Triton, won't build: it fails while getting the PyTorch version.

No issues with ROCm itself; PyTorch and onnxruntime work fine using ROCm 6.0.2 (stable).

Since flash_attn errors out of building the wheel at the PyTorch version check, I am assuming it wants an older ROCm, like 5.7 maybe? This changes the version number of PyTorch (it adds rocm6.0 to the version string).

Arch Linux
Pythons 3.10 through 3.12 tested
PyTorch ROCm 6.0.2 from the official site (https://pytorch.org/get-started/locally/)
Radeon RX 6900 XT (gfx1030)
Clean venv: just install PyTorch, then try to install flash_attn
I tried just installing it from ComfyUI Manager, and there is no difference in the errors. It wants flash_attn and I can't get it to work.
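The guess about the PyTorch version check can be sketched in shell. ROCm wheels append a local build tag to `torch.__version__`, and a build script that expects a plain `X.Y.Z` string can trip over it; the version strings below are illustrative, not tied to any actual release, and pinning this as the cause of flash_attn's error is an assumption:

```shell
# ROCm PyTorch wheels report a local build tag, e.g. "2.3.0+rocm6.0"
# (CUDA wheels look like "2.3.0+cu121"). A version parser that expects a
# plain X.Y.Z string can choke on the tag, which is one plausible source
# of the "expected string or bytes-like object" error above.
# NOTE: the version strings here are illustrative.
ver="2.3.0+rocm6.0"
echo "${ver%%+*}"   # strip everything from the first '+' -> 2.3.0
```

If flash_attn's version check really is the culprit, stripping the local tag before comparing (or matching the ROCm release it expects) is the shape of the fix; that remains an assumption here.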

@ghost

ghost commented Jun 17, 2024

> I'm a little confused about all this, haha... Flash attention is something that's not available in DirectML? And ROCm, what is that about? I installed a ComfyUI ZLUDA version and it also disables that option.

It seems that on Linux, one of the dependencies in StableAudioSampler's requirements is flash_attn. I had never come across it before, but there it is. I don't know what changed: I installed StableAudioSampler only a week ago without issues, and somewhere in between flash_attn appeared as a dependency.

Windows uses DirectML/ZLUDA. On Linux, for AMD, we use ROCm/OpenCL.

@ghost

ghost commented Jun 17, 2024

> I'm a little confused about all this, haha... Flash attention is something that's not available in DirectML? And ROCm, what is that about? I installed a ComfyUI ZLUDA version and it also disables that option.

Oops, sorry, then I've hijacked your thread; I just assumed Linux. Err, yes: as far as I am aware this works for AMD, just not today on Linux, LOL.

@ghost

ghost commented Jun 17, 2024

Err, hmm, OK, well: remove the flash_attn line from your requirements.txt, then run
`pip install -r requirements.txt`. Everything goes fine and dandy without flash_attn; the ComfyUI-StableAudioSampler workflow works fine and outputs music. Solved, pheeewww. Tested on Python 3.10.
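That workaround can be sketched as a couple of shell commands. The requirements file below is a stand-in created on the spot; in a real install you would edit the copy inside the node's own folder (somewhere under `ComfyUI/custom_nodes/`, a path assumed here):

```shell
# Stand-in requirements file; the package names other than flash_attn are
# illustrative. In practice, edit the node's own requirements.txt instead.
printf 'stable-audio-tools\nflash_attn\nsoundfile\n' > requirements.txt

# Drop the flash_attn line so pip never attempts to build its wheel
sed -i '/flash_attn/d' requirements.txt

cat requirements.txt
# then, inside ComfyUI's venv:
#   pip install -r requirements.txt
```

Deleting the pin only helps if nothing actually calls flash_attn at runtime, which matches the report above that generation works fine without it.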

@ArkhamInsanity

You should definitely open your own issue if you want the developer to take care of it.
