ROCm isn't supported #67
Is it possible to add ROCm support for AMD GPUs?
I think PyTorch supports ROCm, so it should basically be supported out of the box, but I haven't tested it so I can't say for sure.
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
Doesn't seem like it; it complains about CUDA even though I've installed the ROCm build. The device type seems to be hardcoded, and there isn't a ROCm option for it.
PyTorch does support ROCm, but this project just complains about CUDA libraries:
RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version
AFAIK, PyTorch uses the cuda device name even for ROCm devices, so that by itself is not the problem.
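A minimal sanity check (assuming the ROCm build of PyTorch is installed): on ROCm the GPU is still exposed through the torch.cuda API, so this should print True and the card name.

```bash
# On a working ROCm setup this prints "True" and the GPU name.
python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
```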
It is indeed supported by ROCm; like I said, it's not PyTorch that's complaining about it.
I've tried a conda env, which didn't work, and tried installing globally, which also didn't work. My GPU chip is supported, because I can use it with SD.Next (Stable Diffusion webui) just fine.
Could you try adding ROCm to the device list? I can test it out for you.
Weird! Because it should be working with the same cuda device name.
I don't think it's an issue with PyTorch itself; it seems to me that cuda could be hardcoded somewhere, as the same PyTorch version on ROCm works completely fine for me in other projects. The full error I was getting is mainly what I've sent you, but I'll go ahead and get it for you. Perhaps take a look at the code and see if cuda is hardcoded.
Yes
I believe I've found the issue: "export HSA_OVERRIDE_GFX_VERSION=10.3.0" seems to have fixed it for me, so for future cases I'd redirect people to this fix and see if it works for them. Unfortunately, AMD just makes it hard to identify their own hardware. Sorry for the trouble <3
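For reference, a sketch of the workaround (10.3.0 corresponds to gfx1030, i.e. RDNA2 cards like the RX 6000 series; other GPUs may need a different value):

```bash
# Set the override before launching; 10.3.0 maps to gfx1030 (RDNA2).
export HSA_OVERRIDE_GFX_VERSION=10.3.0
subsai-webui   # or whichever command you normally run
```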
I would, however, like a feature that embeds the subtitles into the video and lets us download it afterwards (hopefully to get around the 200 MB limit), or even just a flag to disable it. Unless there is one?
Great to hear that you found the source of the issue, and thanks for posting the solution; it will certainly help other AMD users facing similar issues. I will add a reference to this issue in the README file as well.
You mean merging the subtitles into the video directly, without exporting just the srt file?
Yeah, the 200 MB limit is imposed by Streamlit's video component. I tried to look for alternatives that support subtitles, but unfortunately I couldn't find any! I will see if there is any other solution.
Sounds great! Also, is it possible to queue multiple files?
Yes, using the CLI you can provide a text file containing the absolute paths of the files, and it will run them one by one.
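Something like this (a sketch; the flag names and model are illustrative, so double-check against subsai --help):

```bash
# media.txt holds one absolute path per line.
cat > media.txt <<'EOF'
/absolute/path/to/episode1.mp4
/absolute/path/to/episode2.mp4
EOF
subsai media.txt --model openai/whisper --format srt
```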
A little update on this: it seems only "whisper" by OpenAI and "whisper timestamped" detect ROCm (cuda:0); the rest do not.
I've re-checked the device attribute of all models and fixed WhisperX; hopefully it should be working now. Please give it a try!
For WhisperX, I'm getting this error:
However, it does say cuda:0 (which usually indicates that the GPU is detected), so something is wrong here.
Oh yeah, WhisperX uses faster-whisper as its backend, so I doubt it will work in your case!
Ah, that's very unfortunate. Thank you for trying though!
And yes, I do mean this.
Ok, I've added this feature.
Very sorry, I was very busy. I'm now getting this error:
Could we also perhaps look into this? https://huggingface.co/facebook/nllb-200-3.3B
Very sorry, something came up again. I seem to have gotten it to work by using an mp4 this time. As for the 200 MB limit, is it just for displaying the video itself? If so, can we have an option so that if the video is over 200 MB it isn't embedded and we only get the merge-video-with-subtitles/download buttons? Thanks!
@MidnightKittenCat, I think that is already the case: if the video exceeds 200 MB you just can't view it, but you can still transcribe and merge.
Finally got around to trying this for myself, and it doesn't seem to be the case. I get "File must be 200.0MB or smaller." when inputting a file over 200 MB. I was hoping it could be transcribed/merged without showing the video if it's over 200 MB.
Thank you! I had to use
I am getting
@insberr, maybe you should downgrade ROCm to a lower version, as I described in the comment above.
I also have an RX6800; the following setup worked for me:
And my conda channels are:
There is now a fork of CTranslate2 that works with WhisperX: OpenNMT/CTranslate2#1072 (comment). Can this be included in subsai? @abdeladim-s
@Snuupy, good to know that there is finally a fork that works for ROCm.