
HF whisper TF Model to quantized TFLite (not working) #4

Open
nyadla-sys opened this issue Nov 4, 2022 · 0 comments
nyadla-sys commented Nov 4, 2022

@bhadreshpsavani
I was able to convert the Hugging Face Whisper ONNX model to a TFLite (int8) model; however, I am not sure how to run inference on this model.
Could you please review the notebooks below and let me know if there is anything I am missing in the ONNX-to-TFLite conversion?
## ONNX to int8 model
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/whisper_to_onnx_tflite_int8.ipynb
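
For context, this is roughly the post-training int8 quantization step I am aiming for. It is only a sketch and assumes the ONNX graph has already been exported to a TF SavedModel first; the directory name, the input shape (`[1, 80, 3000]` log-mel features), and the op-set fallbacks are placeholders, not necessarily what the notebook does:

```python
import numpy as np
import tensorflow as tf

# Assumption: the ONNX graph was already exported to a TF SavedModel
# (e.g. via onnx-tf). Directory name and input shape are placeholders.
SAVED_MODEL_DIR = "whisper_saved_model"

def representative_dataset():
    # A handful of calibration samples for int8 range estimation.
    # Random data stands in for real 80-bin log-mel spectrograms here;
    # real audio features would give better quantization ranges.
    for _ in range(10):
        yield [np.random.randn(1, 80, 3000).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # enable quantization
converter.representative_dataset = representative_dataset  # calibrate int8 ranges
# Whisper's decoder loop uses ops that may not all have int8 kernels,
# so allow fallback to float / TF ops instead of forcing pure int8.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

with open("whisper_int8.tflite", "wb") as f:
    f.write(tflite_model)
```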

## TF to hybrid TFLite model
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/tflite_from_huggingface_whisper.ipynb
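
This is the kind of smoke test I was hoping to run on the converted model with the TFLite interpreter. The file name and the use of random input data are assumptions; a real run would feed an 80-bin log-mel spectrogram computed from 30 s of audio:

```python
import numpy as np
import tensorflow as tf

# Assumption: output file name from the conversion step above.
TFLITE_MODEL_PATH = "whisper_int8.tflite"

interpreter = tf.lite.Interpreter(model_path=TFLITE_MODEL_PATH)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed random data with the shape/dtype the converted model expects,
# just to confirm the graph executes end to end.
dummy_input = np.random.randn(*input_details[0]["shape"]).astype(
    input_details[0]["dtype"]
)

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", output.shape)
```

If this runs, the remaining question is mainly how to map real audio to the expected input tensor and decode the output token IDs back to text.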
