[local] NodeInfoPodTemplateProcessor implementation #47

Conversation


@clamoriniere clamoriniere commented Aug 24, 2021

Based on kubernetes#3964, this PR only implements the mandatory elements needed to work in a Datadog environment.

I tried to split the work into commits to ease any future rebase of https://github.com/kubernetes/autoscaler.

  • [local] Add podTemplateProcessor option
  • [local] Add NodeInfos PodTemplates process implementation
  • [local] set PodTemplateProcessor as NodeInfoProcess if enabled

@clamoriniere clamoriniere requested a review from bpineau August 24, 2021 08:28
@clamoriniere clamoriniere force-pushed the clamoriniere/PodTemplateProcessorInternal branch from 5424840 to d97bbbc on August 24, 2021 08:54
@clamoriniere clamoriniere marked this pull request as ready for review August 24, 2021 09:44
@bpineau bpineau force-pushed the datadog-master-6.0 branch from 9cf6596 to 5528a72 on August 26, 2021 09:36
Support per-VMSS (scaledown) settings as permitted by the
cloudprovider interface's `GetOptions()` method.
This is meant to isolate, contain, and wrap access to our custom
resource (storageclass/local-data) and label (/local-storage)
in a single place, separated into our processors/datadog/ namespace.

Avoid spreading local names and constraints all over the code base.
The main functional change compared to upstream is the "storageclass/local-data"
hack, meant to support pods claiming local-data (no-provisioner) volumes.

filterOutSchedulable (+test) is mostly upstream's (copied for later
modifications), lightly modified to compose with older clusters where
taint-based eviction isn't enabled.

The frontend is meant to hook in future sub-processors for pods.
This gives a foothold for some local-only improvements, without touching upstream code; plans include:
* Runaway upscale prevention
* Schedulable metrics (pending pods pressure, long-pending pods)
* Pod label selectors (for dedicated autoscaler instances)
* Possibly: deprioritize pods from cronjobs
Hooking it in the least intrusive way we could.
Goals of that PodListProcessor are twofold:
* Lower pressure on RunOnce loops by evaluating long-pending pods less frequently
* More importantly: free autoscaler cycles to recover from scaledown cooldown, so nodes created for pods causing infinite upscales get gc'ed rather than filling a cluster (and those pods are slowed down)

The delay penalty could be made progressive (e.g. 2m, then 5m, then 10m, etc.), but for now a static value makes evaluating benefits and impacts easier.

Metrics to come in follow-up PR.
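
A minimal sketch of the deferral idea, under stated assumptions (a hypothetical 10-minute static threshold, and pod creation time used as a proxy for pending duration); it does not reproduce the actual PodListProcessor wiring:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// longPendingThreshold is a hypothetical static delay penalty; per the commit
// text, it could later be made progressive (e.g. 2m, then 5m, then 10m).
const longPendingThreshold = 10 * time.Minute

// splitByPendingAge separates pods to evaluate now from pods whose evaluation
// is deferred because they have been pending longer than the threshold.
// Pod age is approximated here by creation time, purely for illustration.
func splitByPendingAge(pods []*corev1.Pod, now time.Time) (evaluate, deferred []*corev1.Pod) {
	for _, p := range pods {
		if now.Sub(p.CreationTimestamp.Time) > longPendingThreshold {
			deferred = append(deferred, p)
			continue
		}
		evaluate = append(evaluate, p)
	}
	return evaluate, deferred
}

func main() {
	old := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name:              "long-pending",
		CreationTimestamp: metav1.NewTime(time.Now().Add(-time.Hour)),
	}}
	fresh := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name:              "just-created",
		CreationTimestamp: metav1.NewTime(time.Now()),
	}}
	evaluate, deferred := splitByPendingAge([]*corev1.Pod{old, fresh}, time.Now())
	fmt.Printf("evaluate=%d deferred=%d\n", len(evaluate), len(deferred)) // evaluate=1 deferred=1
}
```
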
This is meant to replace a patch that was setting new nodes as NotReady
until their lvp pod was there, with something bound to and contained in
our podsListProcessor, not touching autoscaler core or cloudproviders at
all anymore. The decision is entirely based on the local-data:true label: no
guessing or per-cloud-provider instance type allowlists needed anymore.

Instead of setting them NotReady, the new nodes that just joined are now
considered schedulable for pods requesting local-data once they are
ready, which naturally prevents spurious re-upscales.

For now the change is restricted to new local-data nodes that became
ready less than 5 minutes ago, while we assess the wider impact.

The downside is that we need to modify the clusterSnapshot content before
filterOutSchedulable runs scheduler predicates with those nodes, which
happens later, also in our own podsListProcessor.
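
A minimal, self-contained sketch of that eligibility decision, assuming a hypothetical `local-data` label key and the 5-minute readiness window mentioned above; the real fork uses its own label name:

```go
// Package datadogprocessors shows the eligibility check in isolation.
package datadogprocessors

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

const (
	// localDataLabel is a hypothetical key; the fork uses its own label name.
	localDataLabel = "local-data"
	// readinessWindow mirrors the 5-minute restriction described above.
	readinessWindow = 5 * time.Minute
)

// isFreshLocalDataNode reports whether a node should be treated as schedulable
// for pods requesting local-data volumes: it must carry the local-data label
// and have become Ready less than readinessWindow ago.
func isFreshLocalDataNode(node *corev1.Node, now time.Time) bool {
	if node.Labels[localDataLabel] != "true" {
		return false
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
			return now.Sub(cond.LastTransitionTime.Time) < readinessWindow
		}
	}
	return false
}
```
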
And stop trying to make that a nodeinfo processor: this was an attempt
to follow upstream suggestions, but it proved excessively intrusive (not
rebase-friendly) for a feature we might keep locally/forked for a long time.

We're also submitting a "nodeinfos provider processor" to upstream, which
(if accepted) will help integrate that kind of change much more cleanly
(e.g. not breaking tests, not touching core/). For now, let's assume we
might not have an upstream processor entry point for a long time.
Hooking it in the least intrusive way we could.
We'll work with upstream for a proper "nodeinfos processor" entrypoint
(and revisit this, depending on the outcome).
`FetchMigTemplate()` calls are costly, causing two cloud API calls
(InstanceGroupManagersService.Get and InstanceTemplatesService.Get).

Due to that, expiring all instance templates at once results in very
noticeable latency spikes, and exposes us to throttling, especially for
clusters with many MIGs attached.
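
One common way to avoid expiring every template in the same loop is to stagger expirations with per-entry jitter. A sketch under assumed TTL and jitter values (not necessarily how the fork implements it):

```go
// Package migcache staggers cache expirations with per-entry jitter so MIG
// templates are not all refreshed at once.
package migcache

import (
	"math/rand"
	"sync"
	"time"
)

const (
	baseTTL        = 30 * time.Minute // assumed base lifetime of a cached template
	jitterFraction = 0.2              // each entry expires within [TTL, TTL*1.2)
)

type entry struct {
	value     interface{}
	expiresAt time.Time
}

type jitteredCache struct {
	mu      sync.Mutex
	entries map[string]entry
}

func newJitteredCache() *jitteredCache {
	return &jitteredCache{entries: map[string]entry{}}
}

// Put stores a template with a randomized TTL so refreshes spread out over time.
func (c *jitteredCache) Put(key string, value interface{}) {
	ttl := baseTTL + time.Duration(rand.Float64()*jitterFraction*float64(baseTTL))
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[key] = entry{value: value, expiresAt: time.Now().Add(ttl)}
}

// Get returns the cached value, or false if the entry is missing or expired.
func (c *jitteredCache) Get(key string) (interface{}, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[key]
	if !ok || time.Now().After(e.expiresAt) {
		return nil, false
	}
	return e.value, true
}
```
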
Support per-ASG (scaledown) settings as permitted by the
cloudprovider interface's `GetOptions()` method.
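
For reference, a hedged sketch of what such a per-group hook can look like, assuming the upstream `NodeGroup.GetOptions(defaults config.NodeGroupAutoscalingOptions)` signature and option field names; the `asg` type and its override fields are hypothetical:

```go
package example

import (
	"time"

	"k8s.io/autoscaler/cluster-autoscaler/config"
)

// asg is a stand-in for a cloud provider node group carrying per-group overrides.
type asg struct {
	scaleDownUnneededTime     *time.Duration
	scaleDownUtilizationThres *float64
}

// GetOptions starts from the cluster-wide defaults and overrides only the
// settings this particular group configures.
func (a *asg) GetOptions(defaults config.NodeGroupAutoscalingOptions) (*config.NodeGroupAutoscalingOptions, error) {
	opts := defaults
	if a.scaleDownUnneededTime != nil {
		opts.ScaleDownUnneededTime = *a.scaleDownUnneededTime
	}
	if a.scaleDownUtilizationThres != nil {
		opts.ScaleDownUtilizationThreshold = *a.scaleDownUtilizationThres
	}
	return &opts, nil
}
```
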
@bpineau bpineau deleted the branch datadog-master-6.0 September 18, 2021 18:06
@bpineau bpineau closed this Sep 18, 2021
@clamoriniere clamoriniere reopened this Sep 20, 2021
@clamoriniere clamoriniere force-pushed the clamoriniere/PodTemplateProcessorInternal branch from d97bbbc to 4a00f15 on September 20, 2021 10:13
@clamoriniere clamoriniere force-pushed the clamoriniere/PodTemplateProcessorInternal branch from 4a00f15 to e7a47e3 on September 20, 2021 10:23
@clamoriniere
Author

clamoriniere commented Sep 20, 2021

I "rebased" the PR on top of the new datadog-master-6.0 branch

@bpineau bpineau force-pushed the datadog-master-6.0 branch 2 times, most recently from 618c5a3 to b6de6ee on September 20, 2021 13:08
@clamoriniere
Author

closed in favour of #50
