
Merge pull request #37 from Maximophone/17-wrappers_docs
17 wrappers docs
ukclivecox authored Jan 16, 2018
2 parents 6e0f72d + 8f450ba commit 68e3a14
Showing 2 changed files with 58 additions and 55 deletions.
98 changes: 49 additions & 49 deletions docs/wrappers/h2o.md
@@ -1,94 +1,94 @@
# Packaging a H2O model for Seldon Core

This document outlines the steps needed to wrap any H2O model using Seldon's python wrappers into a docker image ready for deployment with Seldon Core. The process is nearly identical to [wrapping a python model](python.md), so be sure to read the documentation on this process first.
The main differences are:
* The data sent to the model needs to be transformed from numpy arrays into H2O Frames and back;
* The base docker image has to be changed, because H2O needs a Java Virtual Machine installed in the container.

The session "General use" explain how to use the wrapper with any saved h2o model.
You will find below explanations for:
* How to build the H2O base docker image;
* How to wrap your H2O model;
* A detailed example where we train and wrap a bad loans prediction model.

The session "Example of usage" provides a step-by-step guide for training, deploying and wrap the prebuilded h2o model for bad loans predictions as an example.
## Building the H2O base docker image


In order to wrap an H2O model with the python wrappers, you need a python+java docker image available to use as a base image. One way to build a suitable base image locally is by using the [Dockerfile provided by H2O](https://h2o-release.s3.amazonaws.com/h2o/rel-turing/1/docs-website/h2o-docs/docker.html):

* Make sure you have the docker daemon running.
* Download the [Dockerfile provided by H2O](https://github.com/h2oai/h2o-3/blob/master/Dockerfile) in any folder.
* Create the base docker image (we will call it h2obase:1.0 in this example; note that docker repository names must be lowercase):

```bash
docker build --force-rm=true -t h2obase:1.0 .
```

Building the image may take several minutes.

## Wrapping the model


It is assumed you have already trained an H2O model and saved it in a file (called SavedModel.h2o in what follows). If you use the H2O python API, you can save your model using the ```h2o.save_model()``` method.
You can now wrap the model using Seldon's python wrappers. This is similar to the general python model wrapping process, except that you need to specify the H2O base image as an argument when calling the wrapping script.
We provide a file [H2OModel.py](https://github.com/SeldonIO/seldon-core/blob/master/examples/models/h2o_example/H2OModel.py) as a template for the model entrypoint, which handles loading the H2O model and transforming the data between numpy arrays and H2O Frames. In what follows we assume you are using this template. The H2O model is loaded in the class constructor, and the numpy arrays are turned into H2O Frames when received in the predict method.
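For orientation, here is a minimal sketch of what such a template can look like. This is an illustrative outline using the standard h2o python API, not a verbatim copy of H2OModel.py; the predict signature must match what the python wrappers expect (see [python.md](python.md)):
```python
import h2o

MODEL_PATH = "./SavedModel.h2o"  # path to your saved H2O model

class H2OModel(object):
    def __init__(self):
        # Start a local H2O instance and load the saved model once, at startup.
        h2o.init()
        self.model = h2o.load_model(MODEL_PATH)

    def predict(self, X, feature_names):
        # X arrives as a numpy array: convert it to an H2O Frame, predict,
        # and return the predictions as a numpy array.
        frame = h2o.H2OFrame(X.tolist(), column_names=list(feature_names) or None)
        predictions = self.model.predict(frame)
        return predictions.as_data_frame().values
```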
Detailed steps:
1. Put the files H2OModel.py, requirements.txt and SavedModel.h2o in a directory created for this purpose.
2. Open the file H2OModel.py with your favorite text editor and set the variable MODEL_PATH to:
```python
MODEL_PATH = "./SavedModel.h2o"
```
3. Run the python wrapping script, with the additional ```--base-image``` argument:
```bash
docker run -v /path/to/your/model/folder:/model seldonio/core-python-wrapper:0.4 /model H2OModel 0.1 myrepo --base-image=h2obase:1.0
```
"0.1" is the version of the docker image that will be created. "myrepo" is the name of your dockerhub repository.
4. cd into the newly generated "build" directory and run:
```bash
./build_image.sh
./push_image.sh
```
This will build and push to Docker Hub a docker image named ```myrepo/h2omodel:0.1```, which is ready for deployment in Seldon Core.
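If you want to sanity-check the image before deploying it, you can run it locally and send it a prediction request. The snippet below assumes the wrapper's default REST behaviour (a ```/predict``` endpoint listening on port 5000 that takes a ```json``` form field); adjust the feature names and values to your model:
```bash
docker run --rm -d -p 5000:5000 myrepo/h2omodel:0.1
curl -g http://localhost:5000/predict \
    --data-urlencode 'json={"data":{"names":["feature_1","feature_2"],"ndarray":[[1.0,2.0]]}}'
```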
## Example
Here we give a step by step example in which we will train and save an [H2O model for bad loan predictions](https://github.com/h2oai/h2o-tutorials/blob/master/h2o-open-tour-2016/chicago/intro-to-h2o.ipynb), before turning it into a dockerized microservice.
### Preliminary Requirements
1. Have [H2O](http://docs.h2o.ai/h2o/latest-stable/h2o-docs/downloading.html) installed on your machine (H2O is only required to train the example; Seldon Core and the Seldon wrappers do not require H2O to be installed on your machine).
2. You need to have built the base H2O docker image (see the [dedicated section](#building-the-h2o-base-docker-image) above).
### Train and wrap the model
1. Clone the seldon-core-examples git repository:
```bash
git clone https://github.com/SeldonIO/seldon-core-examples
```
2. Train and save the H2O model for bad loans prediction:
```bash
cd seldon-core-examples/models/h2o_example/
```
```bash
python train_model.py
```
This will train the model and save it in a file named "glm_fit1" in the same directory (a rough sketch of what train_model.py does is shown after these steps).
3. Wrap the model:
```bash
cd ../../
```
```bash
docker run -v $(pwd)/models/h2o_example:/model seldonio/core-python-wrapper:0.4 /model H2OModel 0.1 myrepo --base-image=h2obase:1.0
```
This will create a docker image "seldonio/h2omodel:0.1", which is ready to be deployed in seldon-core.
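As mentioned in step 2 above, here is a rough sketch of what a training script like train_model.py can do, using the standard H2O python API. The dataset path and column names are illustrative; the actual script follows the bad-loans tutorial linked above:
```python
import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()

# Illustrative dataset location -- the real script uses the loans data from the H2O tutorial.
loans = h2o.import_file("path/to/loan.csv")
loans["bad_loan"] = loans["bad_loan"].asfactor()  # make the target categorical

predictors = [c for c in loans.columns if c != "bad_loan"]
glm_fit1 = H2OGeneralizedLinearEstimator(family="binomial", model_id="glm_fit1")
glm_fit1.train(x=predictors, y="bad_loan", training_frame=loans)

# save_model writes the model under its model_id ("glm_fit1") in the given directory.
h2o.save_model(glm_fit1, path=".", force=True)
```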
15 changes: 9 additions & 6 deletions docs/wrappers/python.md
@@ -1,5 +1,5 @@
# Packaging a python model for Seldon Core
In this guide, we illustrate the steps needed to wrap your own python model in a docker image ready for deployment with Seldon Core, using the Seldon wrapper script. This script is designed to take your python model and turn it into a dockerised microservice that conforms to Seldon's internal API, thus avoiding the hassle of writing your own dockerised microservice.

You can use these wrappers with any model that offers a python API. Some examples are:
* Scikit-learn
@@ -23,7 +23,7 @@ To wrap a model, there are 2 requirements:

Additionally, if you are making use of specific python libraries, you need to list them in a requirements.txt file that will be used by pip to install the packages in the docker image.

Here we illustrate the content of the ```keras_mnist``` model folder which can be found in [seldon-core/examples/models/](https://github.com/SeldonIO/seldon-core/tree/master/examples).

This folder contains the following 3 files:

@@ -44,7 +44,9 @@ This folder contains the following 3 files:
This method needs to return a numpy array of predictions."""
return self.model.predict(X)
```

2. requirements.txt: List of the packages required by your model, that will be installed via ```pip install```.

```
keras==2.0.6
h5py==2.7.0
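Pulling together the excerpt from the model file shown above, here is a minimal, self-contained sketch of what such a model class can look like (the file, class and weight-file names are illustrative, not a verbatim copy of the keras_mnist example):
```python
# MnistClassifier.py -- illustrative sketch of a wrappable python model
from keras.models import load_model

class MnistClassifier(object):
    def __init__(self):
        # The trained keras model is loaded once, when the microservice starts.
        self.model = load_model("MnistClassifier.h5")

    def predict(self, X, feature_names):
        """X is a numpy array of inputs.
        This method needs to return a numpy array of predictions."""
        return self.model.predict(X)
```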
@@ -114,12 +116,13 @@ Note also that you could use the python script directly if you feel so inclined,
## Build and push the Docker image
A folder named "build" should have appeared in your model directory. It contains all the files needed to build and publish your model's docker image.
To do so, run:
```
cd /path/to/model/dir/build
./build_image.sh
./push_image.sh
```
And voila, the docker image for your model is now available in your docker repository, and Seldon Core can deploy it into production.
