
hello, how to export the projection onnx? #8

Closed
lucasjinreal opened this issue Sep 27, 2022 · 10 comments

Comments

@lucasjinreal

From the export script, I couldn't find how to export the projection ONNX models. How can I do that?

@EmreOzkose
Collaborator

The PR is here, but it hasn't been merged yet since I haven't had time to make some changes (to the testing and inference scripts). However, you can use the export script for now.

@lucasjinreal
Author

@EmreOzkose Hi, I only found that these ONNX models were exported; I didn't find encoder_proj.onnx or decoder_proj.onnx.

[screenshot: list of exported ONNX files]

which are needed by your ONNX C++ inference code.

[screenshot: the C++ inference code loading the projection models]

I have 2 questions:

  1. the export isn't aligned with your inference code; how do I resolve that?
  2. you have an all_in_one model. In my opinion this is useless, since we use the encoder and decoder for inference. How did you use it?

@lucasjinreal
Author

@csukuangfj Hello, can you help me out with this issue? I cannot get your project to run.

@csukuangfj
Collaborator

What have you done and what are the error messages?

@EmreOzkose
Collaborator

Hi @jinfagang, I am updating the PR now. You can use the export script.

  1. the export isn't aligned with your inference code; how do I resolve that?

The export script is updated.

  2. you have an all_in_one model. In my opinion this is useless, since we use the encoder and decoder for inference. How did you use it?

Initially, the aim was to combine all parts into one .onnx file, like model.pt. There is a branch that uses only all-in-one.onnx to decode; it is written in Python. This repo uses ONNX Runtime without a dependency on LibTorch, and I couldn't extract each model (encoder, decoder, etc.) from it internally. Hence the models are given separately for now.
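For context, decoding with separately exported models means creating one ONNX Runtime session per file. A minimal sketch, assuming the five file names discussed in this thread (they are not guaranteed to match the PR exactly):

```python
def load_onnx_models(model_dir):
    """Create one ONNX Runtime session per exported model file.

    The five file names below are assumptions based on this thread's
    export discussion, not guaranteed to match the PR exactly.
    """
    # Imported lazily; requires `pip install onnxruntime`.
    import onnxruntime as ort

    names = ["encoder", "decoder", "joiner", "encoder_proj", "decoder_proj"]
    return {
        name: ort.InferenceSession(f"{model_dir}/{name}.onnx")
        for name in names
    }
```

Each session is then driven independently during decoding, which is what an all-in-one graph cannot offer without a way to slice out the submodels.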

@lucasjinreal
Author

@EmreOzkose Thanks, now I can get the proj models as well.

For the all-in-one inference, from what I can see here: https://github.com/EmreOzkose/sherpa/blob/887ddd0924cf5c4216a8671c39b04e8e8371356d/sherpa/bin/pruned_transducer_statelessX/offline_asr.py#L298

this still uses the model's decoder during greedy search. That is not convenient with ONNX Runtime.
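To illustrate why the decoder must stay callable at inference time: in transducer greedy search, the decoder is re-invoked every time a non-blank symbol is emitted. A toy numpy sketch, where `run_decoder`/`run_joiner` are hypothetical stand-ins for what would be onnxruntime `session.run()` calls in practice:

```python
import numpy as np

VOCAB = 6   # toy vocabulary size
BLANK = 0   # blank token id

def run_decoder(token):
    # Toy stand-in: maps the last emitted token to a fake "decoder output".
    rng = np.random.default_rng(token)
    return rng.standard_normal(4)

def run_joiner(enc_frame, dec_out):
    # Toy stand-in: combines encoder frame and decoder output into logits.
    return np.concatenate([enc_frame, dec_out]).sum() * np.arange(VOCAB)

def greedy_search(encoder_out):
    """Frame-by-frame transducer greedy search (at most one symbol
    per frame, for simplicity). Note the decoder is called again
    whenever a non-blank token is emitted."""
    hyp = [BLANK]
    dec_out = run_decoder(hyp[-1])
    for frame in encoder_out:
        logits = run_joiner(frame, dec_out)
        token = int(np.argmax(logits))
        if token != BLANK:
            hyp.append(token)
            dec_out = run_decoder(token)  # decoder re-run here
    return hyp[1:]
```

This is why shipping only an all-in-one graph is awkward: the search loop needs to call the decoder (and joiner) repeatedly with data-dependent inputs, not run one fixed forward pass.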

@lucasjinreal
Author

Oh, I get it. Will this also be ported to C++?

@EmreOzkose
Collaborator

This repo contains the first working version, but I have to do some refactoring (adding OfflineASR, OfflineRecognizer, etc.). I am planning to do it in a few days.

@csukuangfj
Collaborator
