How can I make it run faster? It takes too long for a 1-minute video.
I am using this CLI command:
whisper_timestamped --accurate video.mp4 --model large-v1 --output_format srt --vad False --device "cuda:0" --output_dir .
My PC:
OS: Windows 11 Pro
GPU: GTX 1080 Ti
CPU: i9-9900K
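For what it's worth, `--accurate` is a shortcut for openai-whisper's slower decoding defaults (beam search with beam_size=5 and best_of=5 plus temperature fallback), which adds a lot of runtime on top of a large model. Below is a minimal sketch of a lighter run through the Python API shown in the whisper_timestamped README, assuming a smaller model and greedy decoding are acceptable for your use case, and that these decoding options are forwarded to openai-whisper as documented:

```python
import json
import whisper_timestamped as whisper

# load_audio goes through ffmpeg, so a video container like .mp4 works directly
audio = whisper.load_audio("video.mp4")

# A smaller model fits easily in the 1080 Ti's 11 GB and decodes much faster than large-v1
model = whisper.load_model("medium", device="cuda")

# Greedy decoding (no beam search, single candidate) is the main speed win over --accurate;
# fp16 should be picked automatically on CUDA
result = whisper.transcribe(
    model,
    audio,
    beam_size=None,
    best_of=None,
    temperature=0.0,
    vad=False,
)

print(json.dumps(result, indent=2, ensure_ascii=False))
```

The CLI equivalent is essentially dropping `--accurate` and picking a smaller `--model`.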
Something doesn't seem right. When I run with --model large-v3, VRAM usage shoots up to 10 GB and everything is very slow. Yet running the same large-v3 model in whisper.cpp, or through the vanilla transformers pipeline (as in this notebook: https://huggingface.co/spaces/hf-audio/whisper-large-v3/blob/main/whisper_notebook.ipynb), uses only about 4 GB of VRAM and is noticeably faster.
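For comparison, the transformers pipeline used in that notebook boils down to roughly the following (a sketch based on the whisper-large-v3 model card; the chunk_length_s and batch_size values here are my own choices, not taken from the notebook):

```python
import torch
from transformers import pipeline

# Loading the model in float16 is what keeps VRAM around 4 GB for large-v3
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device="cuda:0",
)

# The pipeline decodes the file through ffmpeg; chunk_length_s enables
# chunked long-form transcription and return_timestamps adds segment timestamps
result = pipe(
    "video.mp4",
    chunk_length_s=30,
    batch_size=8,
    return_timestamps=True,
)
print(result["text"])
```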
Unfortunately, the high VRAM consumption happens in the openai-whisper package itself 😬 (see openai/whisper#1670). It was reported there for version 20230918, but I tried earlier versions (back to 20230124) and saw the same behaviour: openai-whisper always hits ~10 GB of VRAM with the large models.
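If anyone wants to confirm where the memory goes, here is a quick sketch using PyTorch's allocator statistics (this only counts tensors allocated through torch, so nvidia-smi may report somewhat more):

```python
import torch
import whisper_timestamped as whisper

audio = whisper.load_audio("video.mp4")
model = whisper.load_model("large-v3", device="cuda")

# Reset the peak counter, run one transcription, then read the high-water mark
torch.cuda.reset_peak_memory_stats()
result = whisper.transcribe(model, audio)

peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak VRAM allocated by torch: {peak_gib:.1f} GiB")
```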