
[Update] added a note about deprecation of seldon and tensorflow protocol #6182

Open · wants to merge 11 commits into base: master
3 changes: 3 additions & 0 deletions doc/source/analytics/explainers.md
@@ -45,6 +45,9 @@ For an e2e example, please check AnchorTabular notebook [here](../examples/iris_

## Explain API

**Note**: Seldon protocol and TensorFlow protocol are no longer supported, as Seldon has transitioned to the industry-standard Open Inference Protocol (OIP). Customers are encouraged to migrate to OIP, which offers seamless integration across various model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.
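As a sketch of what a call looks like after migrating to OIP (the input name `input-0` and the feature values below are illustrative, not taken from this page), a V2 inference request body can be built like this:

```python
import json

def build_v2_request(values, shape, datatype="FP32"):
    # Illustrative Open Inference Protocol (V2) request body; the
    # input name "input-0" is a placeholder, not a required value.
    return {
        "inputs": [
            {"name": "input-0", "shape": shape, "datatype": datatype, "data": values}
        ]
    }

# A single 4-feature row, e.g. for an iris-style tabular model.
body = build_v2_request([5.1, 3.5, 1.4, 0.2], [1, 4])

# The JSON body would be POSTed to /v2/models/<model-name>/infer.
print(json.dumps(body))
```

The same shape of body is accepted by MLServer and other OIP-compliant runtimes, which is what makes the migration uniform across servers.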


For the Seldon Protocol, an endpoint path will be exposed for:

```
2 changes: 2 additions & 0 deletions doc/source/graph/protocols.md
@@ -1,5 +1,7 @@
# Protocols

**Note**: Seldon protocol and TensorFlow protocol are no longer supported, as Seldon has transitioned to the industry-standard Open Inference Protocol (OIP). Customers are encouraged to migrate to OIP, which offers seamless integration across various model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.

The Tensorflow protocol is only available in versions >=1.1.

Seldon Core supports the following data planes:
3 changes: 3 additions & 0 deletions doc/source/production/optimization.md
@@ -9,6 +9,9 @@ Using the Seldon python wrapper there are various optimization areas one needs t

### Seldon Protocol Payload Types with REST and gRPC

**Note**: Seldon protocol and TensorFlow protocol are no longer supported, as Seldon has transitioned to the industry-standard Open Inference Protocol (OIP). Customers are encouraged to migrate to OIP, which offers seamless integration across various model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.


Depending on whether you want to use REST or gRPC and want to send tensor data the format of the request will have a deserialization/serialization cost in the python wrapper. This is investigated in a [python serialization notebook](../examples/python_serialization.html).
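As a rough sketch of the difference (the payload size is illustrative, and this is not the notebook's benchmark), the ndarray and tensor forms of the same Seldon-protocol payload can be compared like this:

```python
import json

# Hypothetical 100x100 float payload, sketched to illustrate the
# two Seldon-protocol tensor encodings discussed above.
rows, cols = 100, 100
values = [float(i) for i in range(rows * cols)]

# "ndarray" form: nested lists, shape is implicit in the nesting.
ndarray_payload = {
    "data": {"ndarray": [values[i * cols:(i + 1) * cols] for i in range(rows)]}
}

# "tensor" form: flat values plus an explicit shape.
tensor_payload = {
    "data": {"tensor": {"shape": [rows, cols], "values": values}}
}

# Both encode the same numbers; the flat tensor form avoids
# reconstructing nested lists on the Python wrapper side.
print(len(json.dumps(ndarray_payload)), len(json.dumps(tensor_payload)))
```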

The conclusions are:
4 changes: 3 additions & 1 deletion doc/source/reference/upgrading.md
@@ -93,10 +93,12 @@ Only the v1 versions of the CRD will be supported moving forward. The v1beta1 ve

### Model Health Checks

**Note**: Seldon protocol and TensorFlow protocol are no longer supported, as Seldon has transitioned to the industry-standard Open Inference Protocol (OIP). Customers are encouraged to migrate to OIP, which offers seamless integration across various model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.

We have updated the health checks done by Seldon for the model nodes in your inference graph. If `executor.fullHealthChecks` is set to true then:
* For Seldon protocol each node will be probed with `/api/v1.0/health/status`.
* For the Open Inference Protocol (or V2 protocol) each node will be probed with `/v2/health/ready`.
* For tensorflow just TCP checks will be run on the http endpoint.
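A minimal readiness probe against the Open Inference Protocol path above can be sketched as follows (the base URL is illustrative, and this is a standalone sketch, not how the executor itself performs the check):

```python
import urllib.request
import urllib.error

def is_model_ready(base_url: str) -> bool:
    """Probe an Open Inference Protocol (V2) readiness endpoint.

    base_url is illustrative, e.g. "http://localhost:9000".
    """
    try:
        with urllib.request.urlopen(f"{base_url}/v2/health/ready", timeout=2) as resp:
            # A 200 response indicates the model node is ready.
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, or a non-2xx HTTP error.
        return False
```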

By default we have set `executor.fullHealthChecks` to false for the 1.14 release, as users would need to rebuild their custom python models if they have not implemented the `health_status` method. In a future release we will default to `true`.

5 changes: 4 additions & 1 deletion examples/models/lightgbm_custom_server/iris.ipynb
@@ -7,9 +7,12 @@
"source": [
"# Custom LightGBM Prepackaged Model Server\n",
"\n",
"**Note**: Seldon protocol and TensorFlow protocol are no longer supported, as Seldon has transitioned to the industry-standard Open Inference Protocol (OIP). Customers are encouraged to migrate to OIP, which offers seamless integration across various model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"\n",
"\n",
"In this notebook we create a new custom LIGHTGBM_SERVER prepackaged server with two versions:\n",
" * A Seldon protocol LightGBM model server\n",
" * A KfServing V2 protocol version using MLServer for running lightgbm models\n",
" * A KfServing V2 or Open Inference protocol version using MLServer for running lightgbm models\n",
"\n",
"The Seldon model server is in defined in `lightgbmserver` folder.\n",
"\n",
3 changes: 2 additions & 1 deletion notebooks/backwards_compatability.ipynb
@@ -12,7 +12,8 @@
" * curl\n",
" * grpcurl\n",
" * pygmentize\n",
" \n",
"\n",
"**Note**: Seldon protocol and TensorFlow protocol are no longer supported, as Seldon has transitioned to the industry-standard Open Inference Protocol (OIP). Customers are encouraged to migrate to OIP, which offers seamless integration across various model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience. \n",
"\n",
"## Setup Seldon Core\n",
"\n",
2 changes: 2 additions & 0 deletions notebooks/protocol_examples.ipynb
@@ -15,6 +15,8 @@
" \n",
"## Examples\n",
"\n",
"**Note**: Seldon protocol and TensorFlow protocol are no longer supported, as Seldon has transitioned to the industry-standard Open Inference Protocol (OIP). Customers are encouraged to migrate to OIP, which offers seamless integration across various model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"\n",
" * [Seldon Protocol](#Seldon-Protocol-Model)\n",
" * [Tensorflow Protocol](#Tensorflow-Protocol-Model)\n",
" * [V2 Protocol](#V2-Protocol-Model)\n",
2 changes: 2 additions & 0 deletions notebooks/server_examples.ipynb
@@ -65,6 +65,8 @@
"source": [
"## Serve SKLearn Iris Model\n",
"\n",
"**Note**: Seldon protocol and TensorFlow protocol are no longer supported, as Seldon has transitioned to the industry-standard Open Inference Protocol (OIP). Customers are encouraged to migrate to OIP, which offers seamless integration across various model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.\n",
"\n",
"In order to deploy SKLearn artifacts, we can leverage the [pre-packaged SKLearn inference server](https://docs.seldon.io/projects/seldon-core/en/latest/servers/sklearn.html).\n",
"The exposed API can follow either:\n",
"\n",