Get capi targetsize from cache
This ensured that access to replicas during scale-down operations was never stale, by going to the API server: #3104.
This honoured that behaviour while moving to the unstructured client: #3312.
This regressed that behaviour while trying to reduce the API server load: #4443.
This put the never-stale replicas behaviour back, at the cost of loading the API server again: #4634.

Currently, on e.g. a cluster that has been running for 48 minutes, it makes 1.4k GET requests to the scale subresource.
This PR tries to satisfy both goals: non-stale replicas during scale down and not overloading the API server. To achieve that, it lets TargetSize, which is called on every autoscaling cluster state loop, come from the cache.

Also note that the scale-down implementation has changed: https://github.com/kubernetes/autoscaler/commits/master/cluster-autoscaler/core/scaledown.
enxebre committed Jul 13, 2022
1 parent db5e2f2 commit a80af8e
Showing 1 changed file with 9 additions and 1 deletion.
@@ -18,6 +18,7 @@ package clusterapi
 
 import (
 	"fmt"
+	"github.com/pkg/errors"
 
 	corev1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -52,7 +53,14 @@ func (ng *nodegroup) MaxSize() int {
 // (new nodes finish startup and registration or removed nodes are
 // deleted completely). Implementation required.
 func (ng *nodegroup) TargetSize() (int, error) {
-	return ng.scalableResource.Replicas()
+	replicas, found, err := unstructured.NestedInt64(ng.scalableResource.unstructured.Object, "spec", "replicas")
+	if err != nil {
+		return 0, errors.Wrap(err, "error getting replica count")
+	}
+	if !found {
+		replicas = 0
+	}
+	return int(replicas), nil
 }
 
 // IncreaseSize increases the size of the node group. To delete a node
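For illustration, below is a minimal, self-contained sketch of the same lookup: reading spec.replicas from a cached unstructured object with unstructured.NestedInt64 instead of issuing a GET to the scale subresource. The package main wrapper and the inline MachineDeployment literal are hypothetical stand-ins for an object served by the informer cache.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	// Hypothetical stand-in for a Cluster API scalable resource as it would
	// sit in the informer cache; in the autoscaler this comes from a lister,
	// not a literal.
	cached := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "cluster.x-k8s.io/v1beta1",
			"kind":       "MachineDeployment",
			"spec": map[string]interface{}{
				"replicas": int64(3),
			},
		},
	}

	// Read spec.replicas straight from the cached object; no request to the
	// scale subresource is needed.
	replicas, found, err := unstructured.NestedInt64(cached.Object, "spec", "replicas")
	if err != nil {
		fmt.Println("error getting replica count:", err)
		return
	}
	if !found {
		// Treat a missing field as zero, mirroring the patched TargetSize above.
		replicas = 0
	}
	fmt.Println("target size:", replicas)
}

Because TargetSize runs on every cluster state loop, serving it from the cache avoids the steady stream of GET requests described above, while, as the commit message notes, the scale-down implementation itself has since changed.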
