It would be good to have the ability to get all outputs when TensorFlow Serving is used in gRPC mode. Currently only REST can return all outputs. gRPC: https://github.com/SeldonIO/seldon-core/blob/master/integrations/tfserving/TfServingProxy.py#L76. Maybe make model_output optional and return the whole dict of outputs. When used via REST, all outputs are already returned. A sketch of the requested behaviour is shown below.
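A minimal sketch of what the requested change could look like (this is not the actual Seldon implementation; the function name `extract_outputs` is hypothetical, and `model_output` refers to the existing proxy parameter). It converts a TensorFlow Serving gRPC `PredictResponse` into either a single array, as the proxy does today, or a dict of all named outputs, mirroring what the REST path already returns:

```python
# Hypothetical sketch, assuming the proxy has access to the gRPC
# PredictResponse (tensorflow_serving.apis.predict_pb2.PredictResponse).
import tensorflow as tf


def extract_outputs(response, model_output=None):
    """Return gRPC prediction outputs as numpy arrays.

    response     -- a TensorFlow Serving PredictResponse
    model_output -- optional name of a single output tensor; if None,
                    all outputs are returned as a dict keyed by name.
    """
    if model_output is not None:
        # Current behaviour: pick one named output tensor.
        return tf.make_ndarray(response.outputs[model_output])
    # Requested behaviour: return every output, like the REST proxy does.
    return {name: tf.make_ndarray(proto)
            for name, proto in response.outputs.items()}
```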
See responses in #1964
I'll close this if using the TensorFlow protocol directly is the best option. Please reopen if that is an issue.