Replies: 3 comments 1 reply
-
Could you have a look? I think it should be very easy to support. (We have already supported it in sherpa-onnx; all we need to do is convert Whisper turbo to ONNX.)
-
@janekpi You could follow https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/whisper/convert_checkpoint.py#L38. Nothing needs to change except the model name. We will also update sherpa with this soon.
-
Were you able to remove the 30-second limit?
-
Hi guys!
First of all, thanks for such great tools for working with Whisper. I just wanted to ask: are you going to integrate Whisper turbo into TensorRT and make it compatible with Triton server?