Serialization of pre-processing pipeline for CI/CD #1713
Hi, thanks for the great library.
I noticed in your examples you serialize the preprocessing pipeline.
Does this assume that the pip dependencies of the preprocessing classes must be exactly the same versions at serialization time and at inference time?
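For context on the version question: pickled scikit-learn objects are generally only safe to load under the same library versions that wrote them, so one common safeguard is to record those versions next to the artefact and verify them before loading. A minimal sketch, assuming a scikit-learn Pipeline serialized with joblib (the helper names save_pipeline / load_pipeline are illustrative, not a Seldon API):

```python
import json

import joblib
import sklearn


def save_pipeline(pipeline, path):
    # Persist the fitted preprocessing pipeline alongside the exact
    # library versions it was built with.
    joblib.dump(pipeline, f"{path}/pipeline.joblib")
    versions = {
        "scikit-learn": sklearn.__version__,
        "joblib": joblib.__version__,
    }
    with open(f"{path}/versions.json", "w") as f:
        json.dump(versions, f)


def load_pipeline(path):
    # Refuse to load if the runtime scikit-learn differs from the one
    # that produced the artefact; pickled estimators are not guaranteed
    # to be compatible across versions.
    with open(f"{path}/versions.json") as f:
        versions = json.load(f)
    if versions["scikit-learn"] != sklearn.__version__:
        raise RuntimeError(
            f"pipeline was serialized with scikit-learn "
            f"{versions['scikit-learn']}, but the runtime has "
            f"{sklearn.__version__}"
        )
    return joblib.load(f"{path}/pipeline.joblib")
```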
I'm trying to work out how to package the whole inference workflow into a single Docker image as part of a CI/CD pipeline.
How can I guarantee a self-contained Docker image that contains both the exact dependency versions and the serialized pre-processing pipeline?
Thanks for any insights.
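One way to make the image self-contained, sketched below under assumptions (the file names MyModel.py, pipeline.joblib, model.joblib and the use of scikit-learn/joblib are illustrative): generate a fully pinned requirements.txt in CI with pip freeze, copy it together with the serialized artefacts into the image, and load everything once in the class that Seldon's Python wrapper serves:

```python
# Sketch of a Seldon Core Python wrapper class (e.g. MyModel.py) baked
# into the Docker image together with pipeline.joblib, model.joblib and
# a pinned requirements.txt. Artefact names are assumptions.
import joblib


class MyModel:
    def __init__(self):
        # Both artefacts are copied into the image at build time, so the
        # container is self-contained: no network access needed at startup.
        self.preprocessor = joblib.load("pipeline.joblib")
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # Seldon's Python server calls this for each inference request:
        # apply the serialized preprocessing, then the trained model.
        features = self.preprocessor.transform(X)
        return self.model.predict(features)
```

With the dependency list pinned at build time and the artefacts baked into the image, the container is reproducible on its own, and the version check from the earlier sketch becomes a safety net rather than the primary guarantee.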
Comments

cliveseldon: Hi @jhagege

jhagege: @cliveseldon, thanks for your quick answer. I was referring to the following: I find the pattern elegant, and I'm wondering how to take it one step further. I'd like to configure a CI pipeline that packages all of those into some kind of "uber-artifact" per trained model, so that it provides an integrated environment for inference. Thanks for any insights.

cliveseldon: We are not really concentrating on training. The best approach seems to be a solid, reproducible preparation of artefacts / trained models (Kubeflow, DVC, Pachyderm, ...) and then packaging these into a Docker image that you can deploy with Seldon. Check our latest addition of model metadata, https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/metadata.html, which lets you link a model to its training source.

jhagege: Thanks much, I'll review.
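As a footnote to the metadata suggestion above: per the linked docs, a model served with the Seldon Core Python wrapper can expose metadata by implementing an init_metadata hook. A minimal sketch; every field value below is a placeholder:

```python
# Minimal sketch of exposing model metadata from a Seldon Core Python
# wrapper, following the metadata docs linked above. All field values
# are placeholders, including the training-source tag that links the
# deployed model back to where it was trained.
class MyModel:
    def init_metadata(self):
        return {
            "name": "income-classifier",
            "versions": ["v1.0.0"],
            "platform": "seldon",
            "inputs": [{"messagetype": "tensor",
                        "schema": {"names": ["a", "b"], "shape": [2]}}],
            "outputs": [{"messagetype": "tensor",
                         "schema": {"shape": [1]}}],
            "custom": {"training-source": "git+https://example.com/repo@commit-sha"},
        }
```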