
ROCm isn't supported #67

Open
MidnightKittenCat opened this issue Aug 25, 2023 · 38 comments

Comments

@MidnightKittenCat

Is it possible to add ROCm support for amd gpus?

@absadiki
Owner

I think PyTorch supports ROCm, so it should basically be supported out of the box, but I didn't test it so I can't tell.
You can give it a try and let me know how it goes:

  1. Create a new virtual environment.
  2. Install the ROCm-compatible PyTorch build:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
  3. Then install subsai (as described in the README).
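A quick sanity check after step 2 (a sketch, assuming the ROCm build installed correctly): torch.version.hip should be set, and the GPU should show up under the usual cuda device name.

```python
# Sanity-check a ROCm build of PyTorch inside the new virtual environment.
import torch

print(torch.__version__)          # e.g. a "+rocm"-tagged build
print(torch.version.hip)          # a version string on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())  # True if the AMD GPU is visible
if torch.cuda.is_available():
    # ROCm devices are exposed through the regular "cuda" device type.
    print(torch.cuda.get_device_name(0))
```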

@MidnightKittenCat
Author

Doesn't seem like it; it complains about CUDA even though I've used the ROCm one.

It seems to be hardcoded, and there isn't a ROCm option for the device type.

@MidnightKittenCat
Author

PyTorch does support ROCm, but this just complains about CUDA libraries.

@MidnightKittenCat
Author

RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version

@absadiki
Owner

Doesn't seem like it; it complains about CUDA even though I've used the ROCm one.

It seems to be hardcoded, and there isn't a ROCm option for the device type.

AFAIK, PyTorch uses the cuda device name even for ROCm devices, so this is not a problem.

@absadiki
Owner

PyTorch does support ROCm, but this just complains about CUDA libraries.

If it complains about CUDA libraries, then maybe your GPU chip is not supported by PyTorch.
Please first make sure that your GPU is supported in the ROCm docs, then run some tests to see whether it is working properly.

@MidnightKittenCat
Author

It is indeed supported by ROCm; like I said, it's not PyTorch that's complaining about it.

@MidnightKittenCat
Author

I've tried a conda env, which didn't work; I also tried doing it globally, which didn't work either.

My GPU chip is supported, because I can use it with SD.Next (Stable Diffusion web UI) just fine.

@MidnightKittenCat
Author

Could you try adding ROCm to the device list, and I can test it out for you?

@absadiki
Owner

Weird, because it should work with the same cuda device name as well!
See this comment here as well.
So you think this is a bug in the package? Do you have any idea where it is coming from? What model are you trying to use? What is the full error you are getting?

@MidnightKittenCat
Author

MidnightKittenCat commented Aug 27, 2023

I don’t think it’s an issue with PyTorch itself; it seems to me that the package could be hardcoding cuda or something similar, as the same PyTorch version on ROCm works completely fine for me in other projects.

The full error I was getting is mostly what I’ve already sent you, but I’ll go ahead and get it for you.

Perhaps take a look at the code and see if cuda is hardcoded.

@absadiki
Owner

Yes, cuda is hardcoded when torch.cuda.is_available() is True.
What value should I change it to so that ROCm devices are supported?
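For reference, a minimal sketch (not subsai's actual code; pick_device and override are made-up names): on ROCm builds of PyTorch the HIP backend is exposed through the regular "cuda" device type, so no separate "rocm" value is needed on top of the torch.cuda.is_available() check.

```python
# Sketch only: ROCm builds report the AMD GPU through the "cuda" device type,
# so the existing availability check already covers them.
import torch

def pick_device(override=None):
    """Return a torch device string; `override` lets the user force e.g. "cpu" or "cuda:0"."""
    if override is not None:
        return override
    return "cuda" if torch.cuda.is_available() else "cpu"

print(pick_device())
```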

@MidnightKittenCat
Author

I believe I've found the issue: "export HSA_OVERRIDE_GFX_VERSION=10.3.0" seems to have fixed it for me, so for future cases I'd redirect people to this fix and see if it works for them.

Unfortunately it's just AMD making it hard to identify their own hardware.

Sorry for the troubles <3
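For future readers, a minimal sketch of applying the override; 10.3.0 is the value commonly used for gfx1030-class (RDNA2) cards such as the RX 6800 series, so adjust it for other GPUs. The variable has to be set before the HIP runtime initializes, i.e. in the shell that launches subsai or before torch is imported.

```python
# Sketch: set the override before importing torch.
# Shell equivalent: export HSA_OVERRIDE_GFX_VERSION=10.3.0
import os
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"  # gfx1030-class (RDNA2) override

import torch
print(torch.cuda.is_available())  # should now be True on the affected cards
```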

@MidnightKittenCat
Author

I would, however, like a feature that embeds the subtitles into the video, allowing us to download it afterwards (hopefully to get around the 200 MB limit).

Or even just a flag to disable it. Unless there is one?

@absadiki
Owner

I believe I've found the issue: "export HSA_OVERRIDE_GFX_VERSION=10.3.0" seems to have fixed it for me, so for future cases I'd redirect people to this fix and see if it works for them.

Unfortunately it's just AMD making it hard to identify their own hardware.

Sorry for the troubles <3

Great to hear that you found the source of the issue, and thanks for posting the solution. It will certainly help other AMD users facing similar issues. I will add a reference to this issue in the README file as well.

@absadiki
Owner

I would, however, like a feature that embeds the subtitles into the video, allowing us to download it afterwards (hopefully to get around the 200 MB limit).

Or even just a flag to disable it. Unless there is one?

You mean to merge the video with the subtitles directly, without exporting just the SRT file?

@absadiki
Owner

Yeah, the 200 MB limit is imposed by the video component of Streamlit. I tried to look for alternatives that support subtitles, but unfortunately I couldn't find any! I will see if there is any other solution.
You can always use the CLI; it is easy and there is no limit!

@MidnightKittenCat
Author

Sounds great! Also is it possible to have multiple files in a queue?

@absadiki
Owner

Yes, using the CLI you can provide a text file containing the absolute paths of the files, and it will run them one by one.
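As an illustration, such a file is just plain text with one absolute path per line (the paths below are made up):

```
/home/user/videos/episode1.mp4
/home/user/videos/episode2.mp4
```

and it is passed to the subsai CLI in place of a single media file; see the README or subsai --help for the model and format options.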

@MidnightKittenCat
Author

MidnightKittenCat commented Aug 28, 2023

A little update on this: it seems only "whisper" by OpenAI and "whisper timestamped" detect ROCm (cuda:0); the rest do not.

absadiki added a commit that referenced this issue Aug 28, 2023
@absadiki
Owner

I've re-checked the device attribute of all the models. I have fixed WhisperX and hopefully it should be working now, please give it a try!
For faster-whisper, I am not sure if it supports ROCm, because the implementation is in C++ and they are not using PyTorch AFAIK.

@MidnightKittenCat
Author

MidnightKittenCat commented Aug 29, 2023

For WhisperX, I'm getting this error:

ValueError: unsupported device cuda:0

Traceback:
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/webui.py", line 523, in <module>
    run()
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/webui.py", line 516, in run
    webui()
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/webui.py", line 316, in webui
    subs = _transcribe(file_path, stt_model_name, model_config)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 194, in wrapper
    return cached_func(*args, **kwargs)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 223, in __call__
    return self._get_or_create_cached_value(args, kwargs)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 248, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 302, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/webui.py", line 189, in _transcribe
    model = subs_ai.create_model(model_name, model_config=model_config)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/main.py", line 95, in create_model
    return AVAILABLE_MODELS[model_name]['class'](model_config)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/models/whisperX_model.py", line 123, in __init__
    self.model = whisperx.load_model(self.model_type,
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/whisperx/asr.py", line 50, in load_model
    model = WhisperModel(whisper_arch,
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/faster_whisper/transcribe.py", line 120, in __init__
    self.model = ctranslate2.models.Whisper(

However, it does say cuda:0 (which usually indicates that the GPU is detected), so something is wrong here.

@absadiki
Owner

Oh yeah, WhisperX uses faster-whisper as its backend, so I doubt it will work in your case!

@MidnightKittenCat
Author

Ah that’s very unfortunate, thank you for trying though!

I would, however, like a feature that embeds the subtitles into the video, allowing us to download it afterwards (hopefully to get around the 200 MB limit).
Or even just a flag to disable it. Unless there is one?

You mean to merge the video with the subtitles directly, without exporting just the SRT file?

And yes, I do mean this.

absadiki added a commit that referenced this issue Aug 29, 2023
@absadiki
Owner

Ah that’s very unfortunate, thank you for trying though!

I would, however, like a feature that embeds the subtitles into the video, allowing us to download it afterwards (hopefully to get around the 200 MB limit).
Or even just a flag to disable it. Unless there is one?

You mean to merge the video with the subtitles directly, without exporting just the SRT file?

And yes, I do mean this.

OK, I've added this feature.
If you are using the web UI, you can find it in the export section.
Please give it a try and let me know if you find any issues.
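For context, a merge like this typically boils down to remuxing the subtitles into the container with ffmpeg; a rough ffmpeg-python sketch (filenames are placeholders, and this is not necessarily subsai's exact implementation):

```python
# Copy the audio/video streams as-is and mux the SRT in as a soft-subtitle track.
import ffmpeg

video = ffmpeg.input("input.mp4")
subs = ffmpeg.input("subtitles.srt")
out = ffmpeg.output(
    video, subs, "output_with_subs.mp4",
    c="copy",               # -c copy: don't re-encode audio/video
    **{"c:s": "mov_text"},  # -c:s mov_text: subtitle codec for MP4 containers
)
ffmpeg.run(out)
```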

@MidnightKittenCat
Author

Ah that’s very unfortunate, thank you for trying though!

I would, however, like a feature that embeds the subtitles into the video, allowing us to download it afterwards (hopefully to get around the 200 MB limit).
Or even just a flag to disable it. Unless there is one?

You mean to merge the video with the subtitles directly, without exporting just the SRT file?

And yes, I do mean this.

OK, I've added this feature. If you are using the web UI, you can find it in the export section. Please give it a try and let me know if you find any issues.

Very sorry, I was very busy.

I'm now getting this error:

Error: ffmpeg error (see stderr output for detail)

Traceback:
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/webui.py", line 535, in <module>
    run()
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/webui.py", line 528, in run
    webui()
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/webui.py", line 518, in webui
    exported_file_path = tools.merge_subs_with_video({subs_lang: subs}, str(media_file.resolve()), exported_video_filename)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/subsai/main.py", line 299, in merge_subs_with_video
    ffmpeg.run(output_ffmpeg)
File "/home/midnight/miniconda3/envs/caption/lib/python3.10/site-packages/ffmpeg/_run.py", line 325, in run
    raise Error('ffmpeg', out, err)

@MidnightKittenCat
Author

(screenshot attached)
Hopefully this helps.

@MidnightKittenCat
Author

Could we also perhaps look into this? https://huggingface.co/facebook/nllb-200-3.3B

@absadiki
Owner

absadiki commented Sep 8, 2023

(screenshot attached) Hopefully this helps.

Seems like FFmpeg cannot infer the codec format of your video file. Are you perhaps applying the function to an audio file? If not, could you please share a sample so I can test on my end?
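If it happens again, inspecting the file first usually shows whether FFmpeg can read its streams at all, e.g. (filename is a placeholder):

```
ffprobe -hide_banner problem_video.mkv
```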

@MidnightKittenCat
Author

Very sorry, something came up again. I seem to have gotten it to work by using an MP4 this time.

When it comes to the 200 MB limit, is this just for displaying the video itself? If so, can we have an option so that if the video is over 200 MB it doesn’t embed it and only gives you the merge-video-with-subtitles/download-video button? Thanks!

@absadiki
Owner

@MidnightKittenCat, I think that's already the case: if the video exceeds 200 MB you just can't view it, but you can still transcribe and merge.

@MidnightKittenCat
Author

Finally got around to trying this for myself; this doesn't seem to be the case.

@MidnightKittenCat, I think that's already the case: if the video exceeds 200 MB you just can't view it, but you can still transcribe and merge.

I get "File must be 200.0MB or smaller." when inputting a file over 200 MB. I was hoping it could be transcribed/merged without showing the video if it's over 200 MB.

@matusnovak

I think PyTorch supports ROCm, so it should basically be supported out of the box, but I didn't test it so I can't tell. You can give it a try and let me know how it goes:

1. Create a new virtual environment.

2. Install the ROCm-compatible PyTorch build:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
3. Then install `subsai` (as described in the README)

Thank you!

I had to use rocm5.7 instead of rocm5.4.2 and that did the trick for me on Manjaro Linux 6.2.16 with AMD Radeon 6800XT

@insberr

insberr commented May 24, 2024

I am getting RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version. I installed PyTorch with rocm6.0 and I have an RX 6800. I am also using the m-bain/whisperX model. Any ideas on how to fix this?

@absadiki
Owner

I am getting RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version. I installed PyTorch with rocm6.0 and I have an RX 6800. I am also using the m-bain/whisperX model. Any ideas on how to fix this?

@insberr, maybe you should downgrade ROCm to a lower version, as described in the comment above; rocm5.7 worked for @matusnovak, so maybe try that one.

@aidan-gibson

aidan-gibson commented Aug 10, 2024

I also have an RX6800; the following setup worked for me:

micromamba create -n whispergpu python=3.11.9
micromamba activate whispergpu
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1
pip install git+https://github.com/abdeladim-s/subsai

And my conda channels are:

  • conda-forge
  • defaults
  • pypi

@Snuupy

Snuupy commented Aug 12, 2024

There is now a fork of CTranslate2 that works with whisperX: OpenNMT/CTranslate2#1072 (comment)

Can this be included in subsai? @abdeladim-s

@absadiki
Owner

@Snuupy, good to know that there is finally a fork that works for ROCm.
From the installation instructions it doesn't seem straightforward to include it as a package in subsai's dependencies, but you could give it a try: just install the fork in the same virtual environment where subsai is installed and it should work.
