The current implementation (#1638) and the test script perform a single-shot inference and then exit. A production version must be capable of loading a model and completing multiple inference calls before exiting.
The first iteration will parse the input defined in #1700 and call evaluate on the model. There will not be any explicit control commands in the first instance, but structuring the code this way will make it easier to add commands in the future. The app will exit when the input pipe is closed, the same as autodetect.
I see the command processor code living in bin/pytorch_inference rather than being added to an existing library or a new one.