Feature request: AMD GPU support with oneDNN AMD support #1072

Hi, CTranslate2 uses oneDNN, and the latest oneDNN versions have support for AMD GPUs; this requires Intel oneAPI DPC++. The same approach can potentially enable NVIDIA GPU support too. It would help run MT models on AMD GPUs, and with ROCm this would be a fully open-source way to run MT models on GPUs. Thanks
Hello, currently we only use oneDNN for specific operators such as matrix multiplications and convolutions, but a full MT model contains many other operators (softmax, layer norm, gather, concat, etc.). Even though some of them are available in oneDNN, it would require quite some work to specialize all operations for AMD GPUs. At this time I don't plan to work on this feature, but it would indeed be a nice one to have!
I wanted to try faster-whisper on an Intel A770 dGPU (16 GB). A complete use of oneDNN could also enable support for that hardware.
Migrating a transcription component to faster-whisper and using an AMD GPU, I'd appreciate faster-whisper with ROCm support even more.
@towel did you manage to get…
Any way to run with an AMD GPU?
Any update on this?
I still don't plan to work on this at this time, and as far as I know no one else is working on it. I expect it would be quite some work to achieve complete ROCm support.
I had a go at converting the existing CUDA code to ROCm a few months ago but could never get it to build, which is not surprising as I have zero C++ or CMakeLists skills. curand, cublas, cudnn, cuda, and cub appear to map to HIP with minor adjustments, but I could never get the CMakeLists to include Thrust (the version supplied by ROCm), and it always halted compiling due to producing too many errors.
I started trying to port CTranslate2 to ROCm last weekend and decided to share my (non-working) results here. The code is available in my fork. Basically, hipify was able to convert most of the code automatically. I added a new CMake config option to enable compiling with ROCm, and so far calling the HIP compiler works; however, it breaks the other options and requires a CMake version new enough to have HIP language support. Current issues are some CUDA library dependencies I have not looked at yet, and the use of the bfloat16 data type. While the latest ROCm has a drop-in replacement for the CUDA bf16 type (according to ROCm/ROCm#2534), it currently has some missing operators. Therefore, I'm trying to completely disable bf16 for now, but without luck so far. This work currently has the sole goal of making it work, not of integrating HIP/ROCm into the (CMake) infrastructure. In case someone wants to have a look at the code and help porting, feel free to look at my fork. Unfortunately, I don't expect to have much time in the near future for this project.
This is awesome, dude. Wish I had programming experience to help with this, but alas I don't. I've been looking for ways to enable GPU acceleration for AMD GPUs using CTranslate2... Let me know if I can help in any way, whether it be by testing or what have you.
Have you gotten it to work at all yet?
"I started trying to port CTranslate2 to ROCm last weekend and decided to share my (non-working) results here." I believe that should answer your question.
Thanks for sharing; I'm much interested in the ripple effects this port may have for other projects. There's now ROCm 6.0 available, which I believe addresses specifically what you're referencing. FYI: https://repo.radeon.com/amdgpu/6.0/ubuntu/dists/jammy/ I've tried all kinds of dumb, uninformed stuff trying to get LibreTranslate to work with ROCm, to no avail; it depends on too recent a CUDA version to be tricked by ROCm. The latest PyTorch + ROCm 5.7 also did not work out well. https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html
So, would it make sense to create an experimental version of CTranslate2 using a more recent oneDNN, which does have AMD GPU support? From https://github.com/oneapi-src/oneDNN see https://github.com/oneapi-src/oneDNN?tab=readme-ov-file#system-requirements and https://github.com/oneapi-src/oneDNN/blob/main/src/gpu/amd/README.md
As said in the Feb 2023 comment, "Even though some of them are available in oneDNN, it would require quite some work to specialize all operations for AMD GPUs." Since no one is making those changes, it won't move forward.
I am not a developer, but I work at AMD and handle developer relationships. We would like to assist with the effort to enable CTranslate2 for AMD dGPUs and iGPUs. We will have engineers investigate, but we may also be able to provide hardware to the lead contributors of this effort. Please contact me via michael dot katz at amd dot com if this would help.
Is there any update on this?
I suspect Lisa and Jensen have a deal that AMD only gets the crumbs from the AI pie.
Does that mean Intel Arc GPUs can also be supported?
For whisper.cpp at least, it now supports Vulkan as a GPU backend (ggerganov/whisper.cpp#2302). With Home Assistant this is working well for me through https://github.com/ser/wyoming-whisper-api-client. Personally, on my hardware, even with GPU acceleration, whisper.cpp is way slower than faster-whisper using the same model and CPU, and the transcription time is also very unpredictable.
Try whisperX if you are able to use faster-whisper; it does diarisation and has a better VAD...
I don't believe there is a way to hook it up to the Wyoming protocol, which is my sole use case for it.
Alright, with ROCm 6.2 supporting my GPU now, I was curious to do a quick test. Using the medium model and this test file, here's what I'm seeing:

This is with an i5-10400F and an RX 5700, using code adapted from the README:

```python
from faster_whisper import WhisperModel

model_size = "medium"
model = WhisperModel(model_size, device="cpu", compute_type="int8", cpu_threads=12)
segments, info = model.transcribe("tests/data/physicsworks.wav", beam_size=5, language="en")
```

Edit: Lowered…
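Side note for anyone reproducing this: transcribe() returns lazily, so the transcription work only happens once the segments generator is consumed. A minimal sketch of consuming it, following the faster-whisper README:

```python
# Iterating the generator is what actually runs the transcription,
# so include this in any timing run.
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```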
Could a first step be to test this against ZLUDA? I look forward to being able to run CTranslate2 with GPU acceleration without having to buy an NVIDIA card.
I ported CTranslate2 over to ROCm. My fork is here: https://github.com/arlo-phoenix/CTranslate2-rocm

Status tracker: instead of using oneDNN, I just hipified the repo and extracted the HIP-to-CUDA function mapping to create a preprocessor solution similar to projects like llama.cpp. Besides the listed items, it is feature complete and works very well. I included some benchmark scripts with the file from #1072 (comment) (@genehand, it would be nice if you could try this and add the numbers to a table!). On my RX 6800 I'm getting…

Btw, should we split the issues up? This is two combined into one. I personally believe porting all operators to oneDNN is far too much effort and might not even lead to good performance. This repo hipified quite well; I was able to use simple defines from HIP to CUDA functions for the majority of the project. I only had to rewrite the conv1d operator from scratch since hipDNN isn't maintained anymore.
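To illustrate the preprocessor approach described above, here is a minimal hand-written sketch of the general technique; the guard macro and the (much longer) macro list in the actual fork will differ:

```cpp
// Sketch of a CUDA-to-HIP mapping header: when building for ROCm, each CUDA
// symbol is #defined to its HIP equivalent so the existing CUDA code paths
// compile unchanged against the HIP runtime.
#pragma once
#ifdef CT2_USE_HIP  // hypothetical build flag
#include <hip/hip_runtime.h>
#define cudaError_t            hipError_t
#define cudaSuccess            hipSuccess
#define cudaGetLastError       hipGetLastError
#define cudaMalloc             hipMalloc
#define cudaFree               hipFree
#define cudaMemcpyAsync        hipMemcpyAsync
#define cudaMemcpyDeviceToHost hipMemcpyDeviceToHost
#define cudaStream_t           hipStream_t
#define cudaStreamSynchronize  hipStreamSynchronize
#else
#include <cuda_runtime.h>
#endif
```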
@arlo-phoenix Can you enable the Issues tab on your GitHub repo so we can communicate that way? I'm possibly interested in incorporating this into my projects.
Thanks @arlo-phoenix, I've successfully installed it. Using your benchmark I get 8-9 seconds with faster_whisper on an RX 7800 XT. I tried testing with whisperX but cannot get it to work; I get OSError: libtorch_cuda.so no such file.
Did you install PyTorch with CUDA? E.g.:

```
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118
```

https://pytorch.org/get-started/locally/
Sorry, just read that you're using an RX 7800; you will need pytorch-rocm, which is only available on Linux:

```
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/rocm6.0
```
Nice! I'm traveling now but will definitely try that out after the weekend. 👏
Any idea if this will work on WSL on Windows?
I updated my fork to work with ROCm 6.2; install instructions are still here: https://github.com/arlo-phoenix/CTranslate2-rocm/blob/rocm/README_ROCM.md. (There's some regression, or a not-well-documented change, for MIOpen GetWorkspaceSize which makes it return 0, causing the following functions to trigger fallbacks, which caused a significant slowdown. I don't think it is the correct solution, but just reusing the last workspace size worked in my short time testing.) I also enabled the use of the AsyncAllocator in CT2 (it had a CUDA_VERSION guard that I missed), which improved faster-whisper consistency and speed (whisperX is almost 6% faster now because of this).

@yeetmanpat as @chboishabba said, that's a typical torch error. I added install instructions for whisperX that worked for me: https://github.com/arlo-phoenix/CTranslate2-rocm/blob/rocm/README_ROCM.md#whisperX

@genehand Nice. But just as a warning, I realized it might not work after all, since this relies on MIOpen, which depends on composable_kernel, which afaik isn't made/tuned for RDNA 1. It's still worth a try though.

Re WSL: depends on your GPU. ROCm on WSL is officially only supported on RX7800W+ (source). This doesn't have as many dependencies as pytorch, but it's still up there. If you can't run pytorch, you likely can't run this.
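As a rough sketch of the workspace workaround described above (the function and variable names here are illustrative, not the fork's actual code):

```cpp
#include <miopen/miopen.h>

// Cache the last nonzero workspace size and fall back to it when the
// query unexpectedly returns 0 (the ROCm 6.2 regression described above).
static size_t last_workspace_size = 0;

size_t get_conv_workspace_size(miopenHandle_t handle,
                               miopenTensorDescriptor_t w_desc,
                               miopenTensorDescriptor_t x_desc,
                               miopenConvolutionDescriptor_t conv_desc,
                               miopenTensorDescriptor_t y_desc) {
  size_t size = 0;
  miopenConvolutionForwardGetWorkSpaceSize(handle, w_desc, x_desc,
                                           conv_desc, y_desc, &size);
  if (size == 0)
    size = last_workspace_size;  // reuse the previous size instead of a slow fallback
  else
    last_workspace_size = size;
  return size;
}
```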
Amazing to have ROCm support for CT2! If anyone is able to assist with getting older cards supported on newer ROCm versions, this would massively increase the number of AMD cards able to run ML apps. I can get whisper to run on GPU using the Docker image below, but ran into issues with CT2 when I tested a few months ago... I'm running on an RX 580, which was highly sought after during the GPU shortages of COVID... I believe the latest supported ROCm for gfx803 and gfx900 is 5.4.2.
https://github.com/jrcichra/rocm-pytorch-gfx803
https://wiki.archlinux.org/title/AMD_Radeon_Instinct_MI25#ROCm
Updated the table with faster-whisper results (includes loading the model) 😄 So far I'm not able to run whisperX; after messing with batch_size and HSA_OVERRIDE_GFX_VERSION I'm still running into what sounds like what you mentioned:

```
RuntimeError: HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
```
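For reference, HSA_OVERRIDE_GFX_VERSION is set as an environment variable when launching; the value to spoof depends on the card, so the value and script name below are only an example:

```
# Report the GPU as gfx1030 (RDNA2); pick the value appropriate to your card.
HSA_OVERRIDE_GFX_VERSION=10.3.0 python benchmark.py
```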
This is a ROCm version error; check your drivers and that you have rocm-pytorch, e.g. from my link above. I get an HSA override error because the RX 580 isn't supported by ROCm 6, so I don't think it shows up as a device. You can test whether it's a torch error with torch.cuda.is_available(), which will show True even with AMD cards if ROCm is working.
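A minimal sanity check along those lines, assuming a ROCm build of PyTorch is installed:

```python
import torch

# On ROCm builds the CUDA API is backed by HIP, so is_available()
# returns True on a working AMD GPU setup.
print(torch.cuda.is_available())
print(torch.version.hip)  # HIP version string on ROCm builds, None on CUDA builds
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```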
Probably not possible, but if MIOpen were to be removed, would it compile on Windows? Or do all CTranslate2 models require it?
For those interested in this thread, I made use of @arlo-phoenix's fork to build a Wyoming Faster Whisper for ROCm container. Check it out here if you are interested: https://github.com/Donkey545/wyoming-faster-whisper-rocm. I don't have much hardware to test with, so all I have tested is my APU.
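As with any ROCm container, the GPU devices have to be passed through; a typical invocation looks something like the following (the image name and port are assumptions, so check the repo's README for the real command):

```
# /dev/kfd and /dev/dri expose the ROCm compute and render devices;
# 10300 is the conventional Wyoming protocol port.
docker run -d --device=/dev/kfd --device=/dev/dri --group-add video \
  -p 10300:10300 <wyoming-faster-whisper-rocm-image>
```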
Very cool, I will be testing this on an RX 580. Cards of that age strike a good balance between power/compute/cost for many entry-level users. Not sure if I already mentioned it, but there is a patch by (I think) xuhuisheng on GitHub for ROCm on gfx803, and a few Docker images floating around with the patches...
Just tried this a bit ago and ran into two issues. The README points to one of them, which is about building the wheel; however, before getting to that point I got an error about the Intel runtime file libiomp5 not being found even after installing the runtime. Adding -DOPENMP_RUNTIME=NONE to the CMake args fixed it. I am on Linux 24.04 with a 7900 XTX, Python 3.10, and ROCm 6.2. Full command…
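For reference, a sketch of a build along those lines (only -DOPENMP_RUNTIME=NONE comes from the comment above; the remaining steps are assumptions, so defer to the fork's README_ROCM.md):

```
git clone https://github.com/arlo-phoenix/CTranslate2-rocm
cd CTranslate2-rocm
# OPENMP_RUNTIME=NONE avoids the unresolved Intel libiomp5 dependency
cmake -S . -B build -DOPENMP_RUNTIME=NONE
cmake --build build -j
```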
Thought some of you might find this project interesting: it's a ROCm builder with GPU-specific patches and extended GPU support. https://github.com/lamikr/rocm_sdk_builder It really speeds up inference and I haven't experienced any system hangs. It's currently on ROCm 6.1.2.
Thanks for sharing. Having a Radeon VII, I'm at a loss as to how its support was put on EOL while it never really had much use as an OpenCL accelerator; meanwhile, it was basically the spearhead of AMD's GPU accelerator cards.
@moyutegong Are you able to provide a link to the wheel you downloaded, or the commands you used, by any chance?
I first executed the commands according to https://github.com/arlo-phoenix/CTranslate2-rocm/blob/rocm/README_ROCM.md. When I got to the…
I successfully built CTranslate2 for Python 3.10 using arlo-phoenix's repo. Here's the Dockerfile I used to build it (final image size around 58.8 GB). You may have to change the… I had to change the include and library paths in the… The wheel is in…
Thank you for your method; it worked for me. My graphics card is the 7900 XTX, and my system is Ubuntu 20.04 on WSL2. If anyone wants to use CTranslate2-ROCm, this method can be used.
Successfully built CTranslate2 for Python 3.12.7 using arlo-phoenix's repo with ROCm 6.3.0 (7900 XTX). Thanks to arlo-phoenix!