Avoid expensive pointer copy in capi nodegroup #6796
Conversation
/hold
Nice optimization @enxebre, I hesitated a little thinking about the log line, but reviewing my old log files I think that line is not as useful as we might have thought originally.
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: elmiko, enxebre. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/lgtm
Some benchmarks:
After: (benchmark output elided)
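For context, a NodeGroups benchmark could be shaped like the sketch below; `fakeProvider` and `fakeNodeGroup` are hypothetical stand-ins for the clusterapi provider's real types and test fixtures, which this excerpt does not show.

```go
package clusterapi

import "testing"

// Hypothetical stand-ins for the provider's real types; the padding
// makes each group large enough that struct copies would be noticeable.
type fakeNodeGroup struct{ pad [4096]byte }

type fakeProvider struct{ groups []fakeNodeGroup }

// NodeGroups returns pointers to the existing slice elements, the
// copy-free pattern this PR moves toward.
func (p *fakeProvider) NodeGroups() []*fakeNodeGroup {
	out := make([]*fakeNodeGroup, 0, len(p.groups))
	for i := range p.groups {
		out = append(out, &p.groups[i])
	}
	return out
}

func BenchmarkNodeGroups(b *testing.B) {
	// ~90 groups mirrors the MachineSet count described in the PR.
	p := &fakeProvider{groups: make([]fakeNodeGroup, 90)}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = p.NodeGroups()
	}
}
```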
/hold cancel
Very cool addition of the benchmarks, thanks!
/lgtm
Avoid expensive pointer copy in capi nodegroup
What type of PR is this?
/kind bug
/kind cleanup
What this PR does / why we need it:
The autoscaler chooses and scales up only one node group per run. This means that if you have multiple unschedulable pods, each targeting a different node group (e.g. via nodeSelector), the autoscaler can only find a place for them sequentially, incurring a time penalty (which keeps growing as the number of pods/node groups increases) for the last pods in "the queue". With ~90 scalable resources and the capi provider, this results in long waits to provision nodes for the last pods in the "queue" (>40 min).
The `func (p *provider) NodeGroups() []cloudprovider.NodeGroup` function is called around 16 times in every runOnce loop. Currently it takes ~20 seconds with ~90 MachineSets. This PR avoids the expensive loop and pointer copying, bringing each NodeGroups call down to ~5 seconds and so saving ~4 min per run; a sketch of the general pattern is shown below.
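This is the usual Go pattern of taking addresses of slice elements in place rather than copying each element first. A minimal runnable sketch with hypothetical names (`bigGroup` and `provider.groups` are illustrative stand-ins, not the capi provider's actual fields):

```go
package main

import "fmt"

// bigGroup stands in for a large node-group struct; the padding makes
// per-iteration copies expensive, as described in the PR.
type bigGroup struct {
	name string
	pad  [1 << 14]byte
}

type provider struct{ groups []bigGroup }

// nodeGroupsBefore sketches the slow path: ranging by value copies
// every element, and each copy is duplicated again so that its address
// can safely escape the loop iteration.
func (p *provider) nodeGroupsBefore() []*bigGroup {
	out := make([]*bigGroup, 0, len(p.groups))
	for _, g := range p.groups {
		g := g // per-iteration copy of a large struct
		out = append(out, &g)
	}
	return out
}

// nodeGroupsAfter sketches the fast path: index into the backing slice
// and take the address of each element in place, copying nothing.
func (p *provider) nodeGroupsAfter() []*bigGroup {
	out := make([]*bigGroup, 0, len(p.groups))
	for i := range p.groups {
		out = append(out, &p.groups[i])
	}
	return out
}

func main() {
	p := &provider{groups: make([]bigGroup, 90)}
	fmt.Println(len(p.nodeGroupsBefore()), len(p.nodeGroupsAfter()))
}
```

The indexed form avoids the copies entirely because `&p.groups[i]` points into the slice's backing array; the trade-off is that those pointers are invalidated if the slice is later grown or reassigned, which is fine when the groups are rebuilt on each call.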
Which issue(s) this PR fixes:
Mitigates #6784
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: