Please include OpenVINO iGPU support for hardware acceleration #25
Upstream doesn't support anything but CUDA, unfortunately.
Total newbie in this domain, dunno if it would be helpful: https://github.com/ggerganov/whisper.cpp/blob/master/models/convert-whisper-to-openvino.py
That's whisper.cpp, a different project.
This is more applicable to this project: https://github.com/rhasspy/wyoming-whisper-cpp
I'm too much of a noob to understand the difference between Whisper and whisper.cpp, except that it's a C/C++ port.
For those interested in this thread, I made use of a fork of CTranslate2 to build a Wyoming Faster Whisper for ROCm container. Check it out here if you are interested. I don't have much hardware to test with, so all I have tested is my APU. This performs about 15x faster than CPU faster-whisper for me, and about 5x faster than the whisper.cpp implementation that can be found here.
As seen with Frigate, OpenVINO support greatly accelerates AI computing (comparable to a Google Coral) even on old/small/cheap architectures (Intel gen 6 and newer).
Combined with faster-whisper or whisper.cpp, it could make the Assistant experience much more fluid.
The iGPU can easily be passed to a Docker container, as is done with Frigate.
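For reference, passing an Intel iGPU into a container generally means exposing `/dev/dri` to it, the same way Frigate's documentation describes. A minimal compose sketch, assuming this addon ships as a standalone image (the image name, render-group GID, and command flags below are illustrative placeholders, not the project's actual values):

```yaml
# Hypothetical compose service for illustration only.
# The /dev/dri passthrough is the part that matters for iGPU access;
# adjust the image name, GID, and flags to your actual setup.
services:
  wyoming-whisper:
    image: rhasspy/wyoming-whisper        # placeholder image name
    devices:
      - /dev/dri:/dev/dri                 # expose the Intel iGPU to the container
    group_add:
      - "109"                             # host 'render' group GID (varies per distro)
    ports:
      - "10300:10300"                     # Wyoming protocol port
```

Whether the iGPU is then actually used still depends on the inference backend (e.g. an OpenVINO-enabled build), which is exactly what this issue is requesting.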