
[ML] Add setting to scale the processor count used in the model assignment planner #98296

Merged: 2 commits into elastic:main on Aug 8, 2023

Conversation

@davidkyle (Member) commented on Aug 8, 2023

Adds the ml.allocated_processors_scale setting, which is used to scale the value of ml.allocated_processors_double.

The bulk of the change is passing the settings object to where it is now needed; the only logic change is in MlProcessors.java, where the returned processor count is scaled by this setting.
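As an illustration only (not the code from this PR), here is a minimal sketch of the kind of scaling described above. The class and method names are hypothetical, and the divide-by-scale behaviour is an assumption; the PR text only says the value is "scaled".

import org.elasticsearch.common.settings.Settings;

// Hypothetical sketch: scale the node's reported processor count by
// ml.allocated_processors_scale before the assignment planner uses it.
public final class ProcessorScaleSketch {

    static double scaledProcessorCount(double allocatedProcessorsDouble, Settings settings) {
        // A scale of 1 (assumed default) leaves the reported value unchanged.
        int scale = settings.getAsInt("ml.allocated_processors_scale", 1);
        return allocatedProcessorsDouble / scale;
    }

    public static void main(String[] args) {
        Settings settings = Settings.builder().put("ml.allocated_processors_scale", 2).build();
        // A node reporting 8.0 allocated processors would be planned as if it had 4.0.
        System.out.println(scaledProcessorCount(8.0, settings));
    }
}

Under this assumption, a larger scale reduces the processor count seen by the planner and therefore the number of model allocations that fit on a node.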

@davidkyle added the :ml (Machine learning), auto-backport-and-merge, cloud-deploy (Publish cloud docker image for Cloud-First-Testing), v8.10.0 and v8.9.1 labels on Aug 8, 2023
@elasticsearchmachine (Collaborator) commented

Pinging @elastic/ml-core (Team:ML)

@elasticsearchmachine added the Team:ML (Meta label for the ML team) label on Aug 8, 2023
@elasticsearchmachine (Collaborator) commented

Hi @davidkyle, I've created a changelog YAML for you.

@jonathan-buttner (Contributor) left a comment

Code changes look good, I'll test shortly when the docker image is up

@@ -63,7 +64,7 @@ class TrainedModelAssignmentRebalancer {
         this.deploymentToAdd = Objects.requireNonNull(deploymentToAdd);
     }

-    TrainedModelAssignmentMetadata.Builder rebalance() throws Exception {
+    TrainedModelAssignmentMetadata.Builder rebalance(Settings settings) {
A reviewer (Contributor) commented on the changed rebalance signature:

nit: Does it make sense to have Settings be passed in the constructor of TrainedModelAssignmentRebalancer and stored as a member to avoid passing it through a bunch of methods?
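For illustration, a minimal sketch of the alternative the reviewer suggests, with hypothetical names and not the code from this PR: Settings is injected once through the constructor and stored as a field, so rebalance() and its helpers do not need a Settings parameter.

import java.util.Objects;

import org.elasticsearch.common.settings.Settings;

// Hypothetical sketch of the constructor-injection approach raised in the review comment.
class RebalancerWithInjectedSettings {

    private final Settings settings;

    RebalancerWithInjectedSettings(Settings settings) {
        // Stored once; downstream methods read the field instead of taking a parameter.
        this.settings = Objects.requireNonNull(settings);
    }

    void rebalance() {
        // Helpers can read configuration directly from the stored settings.
        int scale = settings.getAsInt("ml.allocated_processors_scale", 1);
        // ... planning logic would use 'scale' here ...
    }
}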

@davidkyle merged commit 2938673 into elastic:main on Aug 8, 2023
@elasticsearchmachine (Collaborator) commented

💔 Backport failed

Branch | Result
8.9 | Commit could not be cherry-picked due to conflicts

You can use sqren/backport to manually backport by running backport --upstream elastic/elasticsearch --pr 98296

davidkyle added a commit to davidkyle/elasticsearch that referenced this pull request Aug 8, 2023
…nment planner (elastic#98296)

Adds the ml.allocated_processors_scale setting which is used to scale
the value of ml.allocated_processors_double. This setting influences
the number of model allocations that can fit on a node
# Conflicts:
#	x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/MachineLearning.java
#	x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/autoscaling/MlAutoscalingResourceTracker.java
#	x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/utils/MlProcessors.java
elasticsearchmachine pushed a commit that referenced this pull request Aug 8, 2023
…odel assignment planner (#98299)

* [ML] Add setting to scale the processor count used in the model assignment planner (#98296)

Adds the ml.allocated_processors_scale setting which is used to scale
the value of ml.allocated_processors_double. This setting influences
the number of model allocations that can fit on a node
# Conflicts:
#	x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/MachineLearning.java
#	x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/autoscaling/MlAutoscalingResourceTracker.java
#	x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/utils/MlProcessors.java

* non operator
Labels
cloud-deploy (Publish cloud docker image for Cloud-First-Testing), >enhancement, :ml (Machine learning), Team:ML (Meta label for the ML team), v8.9.1, v8.10.0
4 participants