[Serverless] Adds Trained model autoscaling page #139

Merged · 21 commits · Nov 12, 2024

Commits
c0564eb Adds Trained model autoscaling page (kosabogi, Oct 24, 2024)
d8dde76 Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
f6b650d Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
4683ddc Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
f91c731 Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
f17b2a4 Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
26fb9e0 Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
c8b8b3b Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
55117e6 Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
ac3a201 Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
18487de Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
e430ce7 Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
27b445c Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 25, 2024)
8f7a9d2 Changes paragraph placement (kosabogi, Oct 25, 2024)
1beb41c Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 30, 2024)
031c91f Update serverless/pages/ml-nlp-auto-scale.mdx (kosabogi, Oct 31, 2024)
9fdee40 Updates document based on feedback (kosabogi, Nov 6, 2024)
ad99fd3 Merge remote-tracking branch 'upstream/main' into add-serverless-mode… (kosabogi, Nov 6, 2024)
1b3caa8 Merge branch 'main' into add-serverless-model-autoscaling-doc (colleenmcginnis, Nov 6, 2024)
37ee519 mdx to asciidoc (colleenmcginnis, Nov 6, 2024)
aaf7ae9 Updates table (kosabogi, Nov 8, 2024)

Binary file added serverless/images/ml-nlp-deployment.png
2 changes: 2 additions & 0 deletions serverless/index-serverless-general.asciidoc
@@ -32,3 +32,5 @@ include::./pages/service-status.asciidoc[leveloffset=+2]
include::./pages/user-profile.asciidoc[leveloffset=+2]

include::./pages/cloud-regions.asciidoc[leveloffset=+2]

include::./pages/ml-nlp-auto-scale.asciidoc[leveloffset=+2]
207 changes: 207 additions & 0 deletions serverless/pages/ml-nlp-auto-scale.asciidoc
@@ -0,0 +1,207 @@
[[general-ml-nlp-auto-scale]]
= Trained model autoscaling

// :keywords: serverless

You can enable autoscaling for each of your trained model deployments.
Autoscaling allows Elasticsearch to automatically adjust the resources the model deployment can use based on the workload demand.

There are two ways to enable autoscaling:

* through APIs by enabling adaptive allocations
* in Kibana by enabling adaptive resources


Trained model autoscaling is available for both serverless and Cloud deployments. In serverless deployments, processing power is managed differently across Search, Observability, and Security projects, which affects their costs and resource limits.

Security and Observability projects are charged only for data ingestion and retention; they are not charged for processing power (VCU usage), which covers more complex operations such as running advanced search models. In Search projects, for example, models such as ELSER require significant processing power to provide more accurate search results.

[discrete]
[[enabling-autoscaling-through-apis-adaptive-allocations]]
== Enabling autoscaling through APIs - adaptive allocations

Model allocations are independent units of work for NLP tasks.
If you set a static number of allocations, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources.
Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process.
This can help you to manage performance and cost more easily.
(Refer to the https://cloud.elastic.co/pricing[pricing calculator] to learn more about the possible costs.)

When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load.
When the load is high, additional model allocations are automatically created as needed.
When the load is low, a model allocation is automatically removed.
You can explicitly set the minimum and maximum number of allocations; autoscaling will occur within these limits.

[NOTE]
====
If you set the minimum number of allocations to 1, you will be charged even if the system is not using those resources.
====

You can enable adaptive allocations by using:

* the create inference endpoint API for https://www.elastic.co/guide/en/elasticsearch/reference/master/infer-service-elser.html[ELSER], https://www.elastic.co/guide/en/elasticsearch/reference/master/infer-service-elasticsearch.html[E5 and models uploaded through Eland] that are used as inference services.
* the https://www.elastic.co/guide/en/elasticsearch/reference/master/start-trained-model-deployment.html[start trained model deployment] or https://www.elastic.co/guide/en/elasticsearch/reference/master/update-trained-model-deployment.html[update trained model deployment] APIs for trained models that are deployed on machine learning nodes.
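
For example, you can enable adaptive allocations when creating an inference endpoint.
The following is a minimal sketch for an ELSER endpoint; the endpoint name and the allocation limits are illustrative placeholders, not values from this page:

[source,console]
----
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elser",
  "service_settings": {
    "num_threads": 1,
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 4
    }
  }
}
----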

If the new allocations fit on the current machine learning nodes, they are started immediately.
If more resource capacity is needed to create new model allocations, and machine learning autoscaling is enabled, your machine learning nodes are scaled up to provide enough resources for the new allocations.
The number of model allocations can be scaled down to 0, and it cannot be scaled above 32 allocations unless you explicitly set a higher maximum number of allocations.
Adaptive allocations must be set up independently for each deployment and https://www.elastic.co/guide/en/elasticsearch/reference/master/put-inference-api.html[inference endpoint].
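
As a sketch, the same `adaptive_allocations` object can be passed in the request body when starting a trained model deployment on machine learning nodes; the update trained model deployment API accepts the same object.
The model ID and limits below are illustrative placeholders:

[source,console]
----
POST _ml/trained_models/my_model/deployment/_start?threads_per_allocation=1
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 1,
    "max_number_of_allocations": 4
  }
}
----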

When you create inference endpoints on Serverless using Kibana, adaptive allocations are automatically turned on, and there is no option to disable them.

[discrete]
[[optimizing-for-typical-use-cases]]
=== Optimizing for typical use cases

You can optimize your model deployment for typical use cases, such as search and ingest.
When you optimize for ingest, the throughput will be higher, which increases the number of inference requests that can be performed in parallel.
When you optimize for search, the latency will be lower during search processes.

* If you want to optimize for ingest, set the number of threads to `1` (`"threads_per_allocation": 1`).
* If you want to optimize for search, set the number of threads to a value greater than `1`.
Increasing the number of threads makes the search processes more performant.
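
As an illustrative sketch, `threads_per_allocation` and `number_of_allocations` are query parameters of the start trained model deployment API; the model and deployment IDs below are placeholders:

[source,console]
----
// Optimized for ingest: one thread per allocation, more allocations in parallel
POST _ml/trained_models/my_model/deployment/_start?deployment_id=my_model_ingest&threads_per_allocation=1&number_of_allocations=4

// Optimized for search: more threads per allocation for lower latency
POST _ml/trained_models/my_model/deployment/_start?deployment_id=my_model_search&threads_per_allocation=8
----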

[discrete]
[[enabling-autoscaling-in-kibana-adaptive-resources]]
== Enabling autoscaling in Kibana - adaptive resources

You can enable adaptive resources for your models when starting or updating the model deployment.
Adaptive resources make it possible for Elasticsearch to scale up or down the available resources based on the load on the process.
This can help you to manage performance and cost more easily.
When adaptive resources are enabled, the number of VCUs that the model deployment uses is set automatically based on the current load.
When the load is high, the number of VCUs that the process can use is automatically increased.
When the load is low, the number of VCUs that the process can use is automatically decreased.

You can choose from three levels of resource usage for your trained model deployment; autoscaling will occur within the selected level's range.

Refer to the tables in the <<model-deployment-resource-matrix,Model deployment resource matrix>> section to find the settings for the level you selected.

image::images/ml-nlp-deployment.png[ML model deployment with adaptive resources enabled.]

Search projects are given access to more processing resources than Security and Observability projects. This difference is reflected in the UI: Search projects have higher resource limits to accommodate their more complex operations.

On Serverless, adaptive allocations are automatically enabled for all project types.
However, the "Adaptive resources" control is not displayed in Kibana for Observability and Security projects.

[discrete]
[[model-deployment-resource-matrix]]
== Model deployment resource matrix

The resources used by trained model deployments depend on three factors:

* your cluster environment (Serverless, Cloud, or on-premises)
* the use case you optimize the model deployment for (ingest or search)
* whether model autoscaling is enabled (adaptive allocations or adaptive resources, for dynamic resource use) or disabled (static resource use)

The following tables show you the number of allocations, threads, and VCUs available on Serverless when adaptive resources are enabled or disabled.

[discrete]
[[deployments-on-serverless-optimized-for-ingest]]
=== Deployments on serverless optimized for ingest

For ingest-optimized deployments, the number of model allocations is maximized.

[discrete]
[[adaptive-resources-enabled]]
==== Adaptive resources enabled

|===
| Level | Allocations | Threads | VCUs

| Low
| 0 to 2 dynamically
| 1
| 0 to 16 dynamically

| Medium
| 1 to 32 dynamically
| 1
| 8 to 256 dynamically

| High
a| 1 to 512 for Search +
1 to 128 for Security and Observability
| 1
a| 8 to 4096 for Search +
8 to 1024 for Security and Observability
|===


[discrete]
[[adaptive-resources-disabled-search-only]]
==== Adaptive resources disabled (Search only)

|===
| Level | Allocations | Threads | VCUs

| Low
| Exactly 2
| 1
| 16

| Medium
| Exactly 32
| 1
| 256

| High
a| 512 for Search +
No static allocations for Security and Observability
| 1
a| 4096 for Search +
No static allocations for Security and Observability
|===

[discrete]
[[deployments-on-serverless-optimized-for-search]]
=== Deployments on serverless optimized for search

[discrete]
[[adaptive-resources-enabled-for-search]]
==== Adaptive resources enabled

|===
| Level | Allocations | Threads | VCUs

| Low
| 0 to 1 dynamically
| Always 2
| 0 to 16 dynamically

| Medium
| 1 to 2 (if threads=16), dynamically
| Maximum (for example, 16)
| 8 to 256 dynamically

| High
a| 1 to 32 (if threads=16), dynamically, for Search +
1 to 128 for Security and Observability
| Maximum (for example, 16)
a| 8 to 4096 for Search +
8 to 1024 for Security and Observability
|===

[discrete]
[[adaptive-resources-disabled-for-search]]
==== Adaptive resources disabled

|===
| Level | Allocations | Threads | VCUs

| Low
| 1 statically
| Always 2
| 16

| Medium
| 2 statically (if threads=16)
| Maximum (for example, 16)
| 256

| High
a| 32 statically (if threads=16) for Search +
No static allocations for Security and Observability
| Maximum (for example, 16)
a| 4096 for Search +
No static allocations for Security and Observability
|===