
[ML] [PyTorch] Create command processor for inference app #1701

Closed
davidkyle opened this issue Jan 28, 2021 · 1 comment
@davidkyle (Member)

The current implementation (#1638) and the test script perform single-shot inference and then exit; a production version must be able to load a model and complete multiple inference calls before exiting.

The first iteration will parse the input defined in #1700 and call evaluate on the model. There will not be any explicit control commands in the first instance, but structuring the code this way will make it easier to add commands in the future. The app will exit when the input pipe is closed, the same as autodetect.

I see the command processor code living in bin/pytorch_inference rather than being added to an existing library or split out into a new one.
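
A minimal sketch of the loop I have in mind is below. The names (`CCommandProcessor`, the handler callback) are hypothetical, not the actual implementation; the point is that the model is loaded once, requests are read from the input pipe until it is closed, and control commands can later be dispatched from the same loop:

```cpp
#include <functional>
#include <iostream>
#include <istream>
#include <string>

class CCommandProcessor {
public:
    // handler is invoked once per parsed inference request
    explicit CCommandProcessor(std::function<void(const std::string&)> handler)
        : m_Handler(std::move(handler)) {}

    // Read requests from the stream until it is closed (EOF),
    // mirroring how autodetect exits when its input pipe closes.
    bool process(std::istream& input) {
        std::string request;
        while (std::getline(input, request)) {
            if (request.empty()) {
                continue; // skip blank lines between requests
            }
            // In the first iteration every input is an inference
            // request; explicit control commands could be dispatched
            // here later without changing the loop structure.
            m_Handler(request);
        }
        return input.eof(); // true on a clean pipe close, false on error
    }

private:
    std::function<void(const std::string&)> m_Handler;
};

int main() {
    // Hypothetical wiring: load the model once, then serve many requests.
    CCommandProcessor processor{[](const std::string& request) {
        // A real implementation would parse the input format from #1700
        // and call evaluate on the loaded model here.
        std::cout << "evaluating: " << request << '\n';
    }};
    return processor.process(std::cin) ? 0 : 1;
}
```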

@davidkyle (Member, Author)

Closed by #1770
