This example demonstrates how to use Kubeflow end-to-end to train and serve a
Sequence-to-Sequence model on an existing Kubernetes cluster. The tutorial is
based on @hamelsmu's article "How To Create Data Products That Are Magical
Using Sequence-to-Sequence Models".
There are two primary goals for this tutorial:
- Demonstrate an end-to-end Kubeflow example
- Present an end-to-end Sequence-to-Sequence model
By the end of this tutorial, you should know how to:
- Set up a Kubeflow cluster on an existing Kubernetes deployment
- Spin up a Jupyter Notebook on the cluster
- Set up shared persistent storage across the cluster to store large datasets
- Train a Sequence-to-Sequence model using TensorFlow and GPUs on the cluster (a minimal model sketch appears after this list)
- Serve the model using Seldon Core
- Query the model from a simple front-end application
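
As a rough illustration of the kind of model this tutorial trains, the sketch below builds a minimal encoder-decoder network with the Keras API in TensorFlow. The vocabulary sizes, hidden-state width, and layer choices here are illustrative assumptions, not the exact architecture from the article.

```python
# Minimal sketch of a Sequence-to-Sequence (encoder-decoder) model in Keras.
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

num_encoder_tokens = 71   # assumed input vocabulary size (illustrative)
num_decoder_tokens = 93   # assumed output vocabulary size (illustrative)
latent_dim = 256          # assumed LSTM hidden-state width

# Encoder: reads the input sequence and keeps only its final states.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder: generates the target sequence, conditioned on the encoder states.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_outputs = Dense(num_decoder_tokens, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
```

In the tutorial itself, the real model is developed and trained on the cluster, either interactively in the Jupyter Notebook or as a TFJob.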
The tutorial walks through the following steps:
- Setting up a Kubeflow cluster
- Training the model, using either a Jupyter Notebook or a TFJob (a TFJob submission sketch appears after this list)
- Serving the model
- Querying the model (a query sketch also appears after this list)
- Teardown
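
For the TFJob route, training jobs are defined as Kubernetes custom resources. The snippet below is a hedged sketch that submits a single-worker, single-GPU TFJob with the official Python Kubernetes client; the API version (`kubeflow.org/v1`), namespace, and container image are assumptions you would adapt to your cluster.

```python
# Sketch: submit a TFJob custom resource with the Kubernetes Python client.
# Check the TFJob CRD version on your cluster first, e.g.:
#   kubectl get crd tfjobs.kubeflow.org
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

tfjob = {
    "apiVersion": "kubeflow.org/v1",  # assumption: depends on your Kubeflow release
    "kind": "TFJob",
    "metadata": {"name": "seq2seq-training", "namespace": "kubeflow"},
    "spec": {
        "tfReplicaSpecs": {
            "Worker": {
                "replicas": 1,
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "tensorflow",
                            # Hypothetical training image; build your own.
                            "image": "your-registry/seq2seq-train:latest",
                            "resources": {"limits": {"nvidia.com/gpu": 1}},
                        }],
                        "restartPolicy": "OnFailure",
                    }
                },
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org", version="v1",
    namespace="kubeflow", plural="tfjobs", body=tfjob,
)
```

The same resource can also be written as YAML and applied with kubectl; the Python client is used here only to keep the examples in one language.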
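
Once the model is served with Seldon Core, it can be queried over REST, which is what the front-end application does. The sketch below assumes the Seldon endpoint is reachable through a port-forwarded gateway; the host, deployment name, request path, and payload shape are placeholders to match your own SeldonDeployment.

```python
# Sketch: query a Seldon Core REST endpoint for a prediction.
import requests

HOST = "http://localhost:8080"        # assumption: port-forwarded gateway
DEPLOYMENT = "seq2seq-serving"        # hypothetical SeldonDeployment name

# The prediction path varies by Seldon version; this is one common form.
url = f"{HOST}/seldon/{DEPLOYMENT}/api/v0.1/predictions"
payload = {"data": {"ndarray": [["some input text for the model"]]}}

resp = requests.post(url, json=payload)
resp.raise_for_status()
print(resp.json())  # Seldon wraps the model output in a "data" field
```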