Add draft doc for model interface (#7086)
* Add draft doc for model interface

Signed-off-by: Sicheng Song <[email protected]>

* address concern

Signed-off-by: Sicheng Song <[email protected]>

* Apply suggestions from code review

Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: Naarcha-AWS <[email protected]>

* Update _ml-commons-plugin/api/model-apis/register-model.md

Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: Naarcha-AWS <[email protected]>

---------

Signed-off-by: Sicheng Song <[email protected]>
Signed-off-by: Naarcha-AWS <[email protected]>
Co-authored-by: Naarcha-AWS <[email protected]>
Co-authored-by: Nathan Bower <[email protected]>
3 people authored May 8, 2024
1 parent 6397916 commit 639cb38
Showing 2 changed files with 102 additions and 11 deletions.
112 changes: 101 additions & 11 deletions _ml-commons-plugin/api/model-apis/register-model.md
@@ -56,8 +56,7 @@ Field | Data type | Required/Optional | Description
`version` | String | Required | The model version. |
`model_format` | String | Required | The portable format of the model file. Valid values are `TORCH_SCRIPT` and `ONNX`. |
`description` | String | Optional| The model description. |
`model_group_id` | String | Optional | The ID of the model group to which to register the model.

#### Example request: OpenSearch-provided text embedding model

@@ -89,8 +88,7 @@ Field | Data type | Required/Optional | Description
`model_content_hash_value` | String | Required | The model content hash generated using the SHA-256 hashing algorithm.
`url` | String | Required | The URL that contains the model. |
`description` | String | Optional| The model description. |
`model_group_id` | String | Optional | The ID of the model group to which to register this model.

#### Example request: OpenSearch-provided sparse encoding model

@@ -124,7 +122,9 @@ Field | Data type | Required/Optional | Description
`url` | String | Required | The URL that contains the model. |
`description` | String | Optional| The model description. |
`model_group_id` | String | Optional | The ID of the model group to which to register the model.
`is_enabled`| Boolean | Optional | Specifies whether the model is enabled. Disabling the model makes it unavailable for Predict API requests, regardless of the model's deployment status. Default is `true`.
`rate_limiter` | Object | Optional | Limits the number of times that any user can call the Predict API on the model. For more information, see [Rate limiting inference calls]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#rate-limiting-inference-calls).
`interface`| Object | Optional | The interface for the model. For more information, see [Interface](#the-interface-parameter).|
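
The following is a minimal sketch of how the optional `is_enabled` and `rate_limiter` fields from the preceding table fit into a register request. The name, group ID, hash, URL, and limit values are placeholders, and any other fields required for your model type (such as `model_config`) still apply:

```json
POST /_plugins/_ml/models/_register
{
  "name": "my-custom-model",
  "version": "1.0.0",
  "model_format": "TORCH_SCRIPT",
  "model_group_id": "<model group ID>",
  "model_content_hash_value": "<SHA-256 hash of the model file>",
  "url": "https://example.com/my-custom-model.zip",
  "is_enabled": true,
  "rate_limiter": {
    "limit": 4,
    "unit": "MINUTES"
  }
}
```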

#### The `model_config` object

@@ -182,8 +182,10 @@ Field | Data type | Required/Optional | Description
`connector` | Object | Required | Contains specifications for a connector for a model hosted on a third-party platform. For more information, see [Creating a connector for a specific model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/remote-models/connectors/#creating-a-connector-for-a-specific-model). You must provide either `connector_id` or `connector`.
`description` | String | Optional| The model description. |
`model_group_id` | String | Optional | The ID of the model group to which to register the model.
`is_enabled`| Boolean | Optional | Specifies whether the model is enabled. Disabling the model makes it unavailable for Predict API requests, regardless of the model's deployment status. Default is `true`.
`rate_limiter` | Object | Optional | Limits the number of times that any user can call the Predict API on the model. For more information, see [Rate limiting inference calls]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#rate-limiting-inference-calls).
`guardrails`| Object | Optional | The guardrails for the model input. For more information, see [Guardrails](#the-guardrails-parameter).|
`interface`| Object | Optional | The interface for the model. For more information, see [Interface](#the-interface-parameter).|

#### Example request: Externally hosted with a standalone connector

@@ -240,12 +242,13 @@ POST /_plugins/_ml/models/_register

#### Example response

OpenSearch responds with the `task_id`, task `status`, and `model_id`:

```json
{
  "task_id" : "ew8I44MBhyWuIwnfvDIH",
  "status" : "CREATED",
  "model_id": "t8qvDY4BChVAiNVEuo8q"
}
```
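
Model registration runs asynchronously. As a sketch, assuming the ML Commons Tasks API, you can poll the returned `task_id` until the task state is `COMPLETED`, at which point the task response also reports the registered `model_id`:

```json
GET /_plugins/_ml/tasks/ew8I44MBhyWuIwnfvDIH
```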

@@ -304,12 +307,99 @@ For a complete example, see [Guardrails]({{site.url}}{{site.baseurl}}/ml-commons

#### Example response

OpenSearch responds with the `task_id`, task `status`, and `model_id`:

```json
{
  "task_id": "tsqvDY4BChVAiNVEuo8F",
  "status": "CREATED",
  "model_id": "t8qvDY4BChVAiNVEuo8q"
}
```

### The `interface` parameter

The model interface provides a highly flexible way to add arbitrary metadata annotations to all local deep learning models and remote models using JSON schema syntax. These annotations initiate a validation check on the model's input and output fields when the model is invoked, ensuring that both fields are in the correct format before and after the model performs a prediction.

To register a model with a model interface, provide the `interface` parameter, which supports the following fields.

Field | Data type | Description
:--- | :--- |:------------------------------------
`input`| Object | The JSON schema for the model input. |
`output`| Object | The JSON schema for the model output. |

The model's input and output fields are evaluated against the JSON schema that you provide for each. You do not need to provide both fields: you can specify only `input`, only `output`, or both, as in the minimal fragment below.

To learn more about the JSON schema syntax, see [Understanding JSON Schema](https://json-schema.org/understanding-json-schema/).
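
Because the two schemas are validated independently, an interface can constrain only one side. The following fragment is a hypothetical input-only interface (the `prompt` field name is illustrative, not part of the API); the output is left unvalidated:

```json
"interface": {
  "input": {
    "properties": {
      "parameters": {
        "properties": {
          "prompt": {
            "type": "string",
            "description": "A single prompt string"
          }
        }
      }
    }
  }
}
```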

#### Example request: Externally hosted model with an interface

```json
POST /_plugins/_ml/models/_register
{
  "name": "openAI-gpt-3.5-turbo",
  "function_name": "remote",
  "description": "test model",
  "connector_id": "A-j7K48BZzNMh1sWVdJu",
  "interface": {
    "input": {
      "properties": {
        "parameters": {
          "properties": {
            "messages": {
              "type": "string",
              "description": "This is a test description field"
            }
          }
        }
      }
    },
    "output": {
      "properties": {
        "inference_results": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "output": {
                "type": "array",
                "items": {
                  "properties": {
                    "name": {
                      "type": "string",
                      "description": "This is a test description field"
                    },
                    "dataAsMap": {
                      "type": "object",
                      "description": "This is a test description field"
                    }
                  }
                },
                "description": "This is a test description field"
              },
              "status_code": {
                "type": "integer",
                "description": "This is a test description field"
              }
            }
          },
          "description": "This is a test description field"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

#### Example response

OpenSearch responds with the `task_id`, task `status`, and `model_id`:

```json
{
  "task_id": "tsqvDY4BChVAiNVEuo8F",
  "status": "CREATED",
  "model_id": "t8qvDY4BChVAiNVEuo8q"
}
```
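
After the model is deployed, the interface is applied to Predict calls. The following hypothetical request (assuming the standard Predict endpoint and the `model_id` returned above) passes the input validation check because `parameters.messages` is a string, as the `input` schema requires; a non-string value would fail that check:

```json
POST /_plugins/_ml/models/t8qvDY4BChVAiNVEuo8q/_predict
{
  "parameters": {
    "messages": "Hello, how are you?"
  }
}
```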

1 change: 1 addition & 0 deletions _ml-commons-plugin/api/model-apis/update-model.md
@@ -37,6 +37,7 @@ Field | Data type | Description
`rate_limiter.limit` | Integer | The maximum number of times any user can call the Predict API on the model per `unit` of time. By default, there is no limit on the number of Predict API calls. Once you set a limit, you cannot reset it to no limit. As an alternative, you can specify a high limit value and a small time unit, for example, 1 request per nanosecond.
`rate_limiter.unit` | String | The unit of time for the rate limiter. Valid values are `DAYS`, `HOURS`, `MICROSECONDS`, `MILLISECONDS`, `MINUTES`, `NANOSECONDS`, and `SECONDS`.
`guardrails`| Object | The guardrails for the model.
`interface`| Object | The interface for the model.
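
As a sketch of updating one of these fields, assuming the Update Model endpoint `PUT /_plugins/_ml/models/<model_id>` and placeholder values, the following request adjusts the model's rate limiter; replace `<model_id>` with the ID of the registered model:

```json
PUT /_plugins/_ml/models/<model_id>
{
  "rate_limiter": {
    "limit": 4,
    "unit": "MINUTES"
  }
}
```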

#### Example request: Disabling a model

