Are there any plans to enable prediction execution in Seldon using the TensorFlow Serving Docker image?
I didn't find it mentioned in the examples, the documentation, or the current GitHub issues.
Is it on the roadmap, or do you consider it feasible to use the TF Serving API in the Seldon Model component?
Yes, we have work in progress to allow proxy models that call out to TensorFlow Serving and NVIDIA TensorRT Inference Server. This will allow users to construct inference graphs, including multi-armed bandits and other complex components, where the models may either be wrapped as Seldon containers in any language or call out to other model-serving technologies.
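In the meantime, a proxy model in this sense is just an ordinary Seldon component whose predict method delegates to the external server. As a rough sketch (not the in-progress implementation), a hand-rolled proxy against TensorFlow Serving's REST predict endpoint could look like the code below; the `TfServingProxy` class name, host, port, and model name are illustrative assumptions for the example.

```python
import requests


class TfServingProxy:
    """Sketch of a Seldon Python model that forwards requests to
    TensorFlow Serving over its REST API. Host/port/model name are
    assumptions for illustration, not a released Seldon API."""

    def __init__(self, host="tfserving", port=8501, model_name="mymodel"):
        # TF Serving exposes its REST predict endpoint at
        # /v1/models/<model_name>:predict (default REST port 8501).
        self.url = f"http://{host}:{port}/v1/models/{model_name}:predict"

    def predict(self, X, features_names=None):
        # Seldon's Python wrapper invokes predict(X, features_names),
        # where X is a numpy array; forward the batch to TF Serving
        # and return its predictions as the component's output.
        payload = {"instances": X.tolist()}
        response = requests.post(self.url, json=payload)
        response.raise_for_status()
        return response.json()["predictions"]
```

Deployed this way, the proxy sits in the inference graph like any other Seldon model, so it can be combined freely with routers, combiners, or bandit components while the actual model execution happens inside the TF Serving container.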