How to deploy our custom models on seldon-core #104
Wrapping your TensorFlow Python model is very easy; there are detailed instructions here. To answer your last question, if I understand it correctly: you can load your model parameters and weights from any file system that can be attached to Kubernetes. Again, see this deployment file, which uses a volume containing the saved model weights that the runtime loads on startup.
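To illustrate, here is a minimal sketch of such a wrapper (the class name, tensor names, and the /mnt/model mount path are assumptions for this sketch, not taken from the thread; it assumes the TensorFlow 1.x API and the Seldon Python wrapper's predict signature):

```python
# MyModel.py -- illustrative Seldon Python wrapper that restores saved
# TensorFlow weights from a volume mounted into the container at startup.
import tensorflow as tf


class MyModel:
    def __init__(self):
        # /mnt/model is assumed to be a Kubernetes volume holding the
        # checkpoint files produced by tf.train.Saver.
        self.session = tf.Session()
        saver = tf.train.import_meta_graph("/mnt/model/model.meta")
        saver.restore(self.session, tf.train.latest_checkpoint("/mnt/model"))
        # Tensor names are placeholders; use the names from your own graph.
        self.inputs = self.session.graph.get_tensor_by_name("inputs:0")
        self.outputs = self.session.graph.get_tensor_by_name("outputs:0")

    def predict(self, X, features_names):
        # Seldon calls predict with a numpy array X and the feature names.
        return self.session.run(self.outputs, feed_dict={self.inputs: X})
```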
After wrapping the Python model I got an ImportError pointing to microservice.py. Traceback (most recent call last):
This looks like an issue in the wrapping process. In the example in the docs, the wrapper file refers to a class of the same name. Note that the class name and the file name must match; is this the case in your test? A sketch of the convention is below.
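For reference, a minimal sketch of that naming convention (the names here are illustrative; the key point is that the module file and the class it defines share the same name, which is also the model name given to the wrapper when the image is built):

```python
# MyTfModel.py -- the file name and the class name must match so the
# Seldon microservice can import and instantiate the model.
class MyTfModel:
    def __init__(self):
        pass  # load model artifacts here

    def predict(self, X, features_names):
        return X  # return predictions for the numpy array X
```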
I have deployed a custom model that takes a simple request (e.g. [2000]). It runs fine locally, but I am not able to get a response when I run it on Seldon. Is there a specific format for sending requests?
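For reference, Seldon's prediction API expects a JSON payload wrapping the input data; a minimal client sketch follows (the gateway address and API path are assumptions and vary with the Seldon version, the ingress setup, and whether the gateway requires an OAuth token):

```python
# Illustrative client call against the Seldon API gateway.
import requests

payload = {"data": {"ndarray": [[2000]]}}  # input wrapped in Seldon's format
resp = requests.post(
    "http://localhost:8080/api/v0.1/predictions",  # assumed gateway address
    json=payload,
)
print(resp.status_code, resp.json())
```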
Please reopen if this is still an issue.
Hi,
I am able to get Seldon Core up and running on a Kubernetes cluster, and I successfully deployed your example models. Now my aim is to deploy our own custom model on seldon-core. What is the procedure, what are the requirements for deploying our model, and are there any conventions we have to follow? FYI, I have a simple model that uses TensorFlow and the Flask framework, so what do I need to consider before wrapping the model?
One more thing I want to know: does the model we deploy have to be a saved model?
Thanks in advance.