Add provreqOrchestrator that handle ProvReq classes #6627
Conversation
Force-pushed: 1145e75 → 1bd7bcc
type provisioningRequestClient interface {
	ProvisioningRequests() ([]*provreqwrapper.ProvisioningRequest, error)
	ProvisioningRequest(namespace, name string) (*provreqwrapper.ProvisioningRequest, error)

type scaleUpMode struct {
scaleUpMode seems rather generic. Maybe checkCapacityOrchestratorMode?
	schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework"
)

type scaleUpMode interface {
Ditto on the interface name. Maybe provisioningRequestOrchestratorMode?
I think there should generally be a single orchestrator. However, we have regular pods and ProvReq pods, so we end up with two different orchestrators plus a wrapper one (3 in total), and I introduced the different ProvReq classes as different scaleUp modes. I'm OK with naming it provisioningRequestScaleUpMode, but IMO that's too long, and scaleUpMode is not actually limited to ProvReq.
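To make the layering easier to follow, here is a rough sketch of the structure described above; the import paths and signatures are simplified assumptions, not the exact code in this PR:

```go
// A rough sketch of the layering described above; import paths and
// signatures are assumed/simplified, not the exact code in this PR.
package orchestrator

import (
	apiv1 "k8s.io/api/core/v1"

	"k8s.io/autoscaler/cluster-autoscaler/core/scaleup"
	"k8s.io/autoscaler/cluster-autoscaler/processors/status"
)

// scaleUpMode handles scale-up for a single ProvisioningRequest class.
type scaleUpMode interface {
	ScaleUp(unschedulablePods []*apiv1.Pod) (*status.ScaleUpStatus, error)
}

// provReqOrchestrator delegates ProvReq pods to the per-class scaleUp modes.
type provReqOrchestrator struct {
	scaleUpModes []scaleUpMode
}

// WrapperOrchestrator splits unschedulable pods into regular pods and ProvReq
// pods, and delegates each group to the matching orchestrator.
type WrapperOrchestrator struct {
	podsOrchestrator    scaleup.Orchestrator // regular scale-up
	provReqOrchestrator scaleup.Orchestrator // ProvReq scale-up (wraps the per-class modes)
}
```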
I have a couple of questions about the general delegation setup here:

1. Why is WrapperOrchestrator defined under core instead of provisioningrequest? It seems very ProvReq-specific.
2. Why do we want to have 2 levels of delegation here? Wouldn't it be easier to understand if we integrated delegating between ProvReq classes into WrapperOrchestrator (together with moving it under provisioningrequest)? The proposed setup is a ProvReq-specific Orchestrator delegating between the regular scale-up Orchestrator and another ProvReq-specific Orchestrator, which then itself delegates between ProvReq-class-specific Orchestrators (which are called Modes, for added confusion).
3. Is the only reason for introducing scaleUpMode not having to implement methods we don't care about (i.e. ScaleUpToNodeGroupMinSize)? If so, IMO provReqOrchestrator would be the best name. It's not a mode, it's a component performing something. Plus then the tie to the full Orchestrator interface is clear.
4. Have you thought about turning WrapperOrchestrator into a generic DelegatingOrchestrator? E.g. it would take a set of orchestrators and a set of functions that assign the pods to the orchestrators (e.g. by grouping by the ProvisioningRequestPodAnnotationKey value). Then we'd just have one Orchestrator per ProvReq class, and one level of delegation, handled under core. This would also make it easier to introduce new classes in downstream CA implementations (see the sketch after this list).
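A minimal sketch of the generic DelegatingOrchestrator idea from point 4, assuming per-orchestrator pod-filter functions; the names, signatures and error handling are illustrative, not taken from this PR:

```go
// Hypothetical sketch of a generic delegating orchestrator as suggested above;
// names, signatures and error handling are assumptions, not this PR's code.
package orchestrator

import (
	apiv1 "k8s.io/api/core/v1"
)

// podFilter decides whether a pod belongs to the associated orchestrator,
// e.g. based on the ProvisioningRequestPodAnnotationKey value.
type podFilter func(pod *apiv1.Pod) bool

// delegate pairs a scale-up handler with the filter that selects its pods.
type delegate struct {
	filter  podFilter
	scaleUp func(pods []*apiv1.Pod) error // stand-in for Orchestrator.ScaleUp
}

// DelegatingOrchestrator groups unschedulable pods by filter and forwards each
// non-empty group to its handler.
type DelegatingOrchestrator struct {
	delegates []delegate
}

func (d *DelegatingOrchestrator) ScaleUp(unschedulablePods []*apiv1.Pod) error {
	for _, del := range d.delegates {
		var matched []*apiv1.Pod
		for _, pod := range unschedulablePods {
			if del.filter(pod) {
				matched = append(matched, pod)
			}
		}
		if len(matched) == 0 {
			continue
		}
		if err := del.scaleUp(matched); err != nil {
			return err
		}
	}
	return nil
}
```

With something like this, a downstream CA implementation could register one extra delegate (a filter plus an orchestrator) for its own ProvReq class without re-implementing the delegation itself.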
1. Agree, it makes sense to move the wrapper to the provisioningrequest package.
2. My goal was to make the code easier to understand. I would like each object to do a small part of the logic, i.e. the wrapper orchestrator splits regular pods from ProvReq pods and chooses the appropriate orchestrator. The ProvReq orchestrator may have a different implementation for other cloud providers (for example, if someone implements their own ProvReq class). The current implementation of the ProvReq orchestrator doesn't care about specific ProvReq classes.
3. Yes and no :) Sure, the ProvReq orchestrator doesn't implement the ScaleUpToNodeGroupMinSize method. Another reason to introduce scaleUpMode is that in my mental model an orchestrator is something major, and we shouldn't have many orchestrators. I'm fine with changing the name to provReqOrchestrator, but I think there would be confusion between the interface name and the provreqorchestrator struct.
4. Mostly addressed in 2). I wouldn't mix the main provReqOrchestrator and the wrapperOrchestrator.

Re "This would also make it easier to introduce new classes in downstream CA implementations": I don't see how it makes introducing new classes easier, it should be the same as it is now, no?
Re 2.:
I get aiming for more objects with narrower responsibilities, but hopping between abstraction levels also has a readability cost. And the responsibilities of wrapperOrchestrator and provReqOrchestrator actually seem quite similar to me - delegate pods to dependent orchestrators based on some pod properties. So while reading I'm actually wondering why they're separate, plus the hop has its own cost.

There are 2 meaningful differences I see between them:

1. wrapperOrchestrator splits the pods and delegates to only one of the dependent orchestrators, while provReqOrchestrator leaves the splitting to the dependent orchestrators and calls all of them. Do we actually want the splitting/delegating semantics to be different between the orchestrators? Wouldn't provReqOrchestrator work if it called only one of the dependent orchestrators (e.g. in an LRU fashion), and then wrapperOrchestrator could handle both cases? (See the splitting sketch below.)
2. provReqOrchestrator books the capacity for all dependent orchestrators. Wouldn't it be better to block that capacity also from the regular scale-up? Then again, wrapperOrchestrator could handle both cases.
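For reference, the splitting mentioned in point 1 looks roughly like the following minimal sketch; the annotation key value and the helper name are assumptions for illustration only:

```go
// Hypothetical sketch of annotation-based pod splitting, as discussed above.
// The annotation key value below is a placeholder for illustration only.
package orchestrator

import (
	apiv1 "k8s.io/api/core/v1"
)

// provisioningRequestPodAnnotationKey marks pods owned by a ProvisioningRequest
// (placeholder value; the real constant lives elsewhere in cluster-autoscaler).
const provisioningRequestPodAnnotationKey = "cluster-autoscaler.kubernetes.io/consume-provisioning-request"

// splitProvReqPods separates ProvisioningRequest pods from regular pods, so a
// wrapper orchestrator can delegate each group to a different orchestrator.
func splitProvReqPods(unschedulablePods []*apiv1.Pod) (provReqPods, regularPods []*apiv1.Pod) {
	for _, pod := range unschedulablePods {
		if _, found := pod.Annotations[provisioningRequestPodAnnotationKey]; found {
			provReqPods = append(provReqPods, pod)
			continue
		}
		regularPods = append(regularPods, pod)
	}
	return provReqPods, regularPods
}
```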
Re 3.:
Are there differences in responsibilities/expected semantics between scaleUpMode.ScaleUp() and Orchestrator.ScaleUp()? If there aren't, I'd want to highlight that scaleUpMode is a subset of Orchestrator, with the same expectations etc. - in which case I'd try to stay as close to the original Orchestrator name as possible. If there are meaningful differences, I'd change the interface/methods/arguments/comment to highlight these differences, or at least make it not seem like it's a subset of Orchestrator.

In any case I'd remove the "Mode" part of the name. It suggests that this component changes (parts of) the logic of some other component (and not that some other component will delegate the logic to this component). There's nothing changing in provReqOrchestrator logic if you provide it with different modes. The overall behavior might change, but not the provReqOrchestrator logic itself, so the "Mode" part seems misleading to me.
Re 4.:
Right now provReqOrchestrator doesn't allow downstream implementations to change the set of modes at all, and if it did it would also make sense to export this interface. You also said that other cloud providers might need to implement their own provReqOrchestrator if they want to implement their own class. This is basically what I'm trying to avoid - I'd like to have a delegating orchestrator generic and robust enough for cloud providers to use it to implement their own classes by just implementing a specific orchestrator for their pods, and providing a function to determine which pods are theirs. With what's proposed, they'd have to re-implement their own provReqOrchestrator, and then plumb it into wrapperOrchestrator carefully (splitting the provreq pods between different prov-req specific orchestrators), right?

We can make provReqOrchestrator configurable enough to achieve the same effect, but then we have to define clear semantics for this interface that the cloud-provider-specific implementations can rely on.
Moved wrapper to provisioningrequest orchestrator package
Force-pushed: 1b8afb3 → b09f351
/lgtm
@kisieland: changing LGTM is restricted to collaborators. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
// ScaleUp runs ScaleUp for each Provisioning Class. As of now, CA picks one ProvisioningRequest,
// so only one ProvisioningClass returns a non-empty scaleUp result.
// In case we implement ScaleUp for multiple ProvisioningRequests, the function should return a combined status.
func (o *provReqOrchestrator) ScaleUp(
Please make this (and other similar places) one line: https://google.github.io/styleguide/go/guide.html#line-length.
The line will be too long. The style guide suggests "If a line feels too long, prefer refactoring instead of splitting it.", but refactoring would complicate this PR. If we aim for readability first, then the current format is more readable than the long line.
From another perspective, it artificially introduces vertical space (e.g. this method optically looks ~30% bigger than it actually is because of the added lines, or ScaleUpToNodeGroupMinSize is an extreme example), and makes it unclear where the arguments end and the function code begins - both of which make it harder to quickly glance through code during debugging or similar flows. But this is a matter of preference and we don't seem to have a consistent style in the codebase, I'll leave it up to you.
// Assuming that all unschedulable pods come from one ProvisioningRequest.
func (o *checkCapacityScaleUpMode) scaleUp(unschedulablePods []*apiv1.Pod) (bool, error) {
	provReq, err := o.client.ProvisioningRequest(unschedulablePods[0].Namespace, unschedulablePods[0].OwnerReferences[0].Name)
Why do we retrieve the ProvReq from the client again? Looks like we already have it when calling the method.
Yep, we can pass provReq to the function directly. However, there is caching in the client anyway, so retrieving the provReq should be cheap, and fewer variables are passed to the function.
Yeah, but it makes the reader wonder why the code retrieves the PR twice in quick succession (e.g. I'd assume there is no cache in the client, otherwise why would we call it twice in almost the same place, if the result is the same). Not a huge issue, but IMO a bigger one than 1 more parameter passed to a local function.
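A minimal sketch of the suggested alternative, passing the already-fetched ProvisioningRequest into the helper instead of re-querying the client; the types and import paths are simplified assumptions, not the PR's exact code:

```go
// Hypothetical variant: the caller passes the ProvisioningRequest it already
// fetched, so the client is queried only once. Types are simplified stand-ins.
package checkcapacity

import (
	apiv1 "k8s.io/api/core/v1"

	"k8s.io/autoscaler/cluster-autoscaler/provisioningrequest/provreqwrapper"
)

// checkCapacityScaleUpMode is a simplified stand-in for the type in the PR.
type checkCapacityScaleUpMode struct{}

// scaleUp books capacity for the given ProvisioningRequest; no second
// client lookup is needed because provReq is handed in by the caller.
func (o *checkCapacityScaleUpMode) scaleUp(provReq *provreqwrapper.ProvisioningRequest, unschedulablePods []*apiv1.Pod) (bool, error) {
	// capacity-booking logic omitted in this sketch
	_, _ = provReq, unschedulablePods
	return true, nil
}
```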
Force-pushed: b09f351 → 3152b96
Force-pushed: 83a8c7a → 507f9f7
// provReqOrchestrator is an orchestrator that contains orchestrators for all supported Provisioning Classes.
type provReqOrchestrator struct {
	initialized bool
Is this parameter supposed to be used for anything?
Good catch! We need to verify that the orchestrator is initialised before ScaleUp.
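A rough sketch of such a guard, assuming simplified signatures and the usual cluster-autoscaler status/errors helpers; this is not the exact code that landed in the PR:

```go
// A rough sketch of the initialization guard; signatures are simplified and
// the error construction is an assumption, not the exact code from the PR.
package orchestrator

import (
	apiv1 "k8s.io/api/core/v1"

	"k8s.io/autoscaler/cluster-autoscaler/processors/status"
	"k8s.io/autoscaler/cluster-autoscaler/utils/errors"
)

type provReqOrchestrator struct {
	initialized bool
	// per-class provisioning classes omitted in this sketch
}

// ScaleUp bails out early if Initialize was never called on the orchestrator.
func (o *provReqOrchestrator) ScaleUp(unschedulablePods []*apiv1.Pod) (*status.ScaleUpStatus, errors.AutoscalerError) {
	if !o.initialized {
		return &status.ScaleUpStatus{Result: status.ScaleUpNotTried},
			errors.NewAutoscalerError(errors.InternalError, "provReqOrchestrator is not initialized")
	}
	// ... delegate to the per-class orchestrators here
	_ = unschedulablePods
	return &status.ScaleUpStatus{Result: status.ScaleUpNotTried}, nil
}
```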
st, err := provClass.Provision(unschedulablePods, nodes, daemonSets, nodeInfos)
errors.Join(combinedError, err)
if st != nil && st.Result != status.ScaleUpNotTried {
	orchestratorStatus = st
Shouldn't we combine the statuses somehow? Especially because we combine the errors? If that's not needed, a comment explaining why would be useful.
If the unschedulablePods are from a different provClass, provClass.Provision should return ScaleUpNotTried and a nil error. So it seems we don't need a combined error.
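For context, a self-contained reconstruction of the per-class loop around the quoted lines, with the error accumulation written out explicitly; the surrounding names and the simplified Provision signature are assumptions, not the PR's exact code:

```go
// Hypothetical, self-contained reconstruction of the per-class loop discussed
// above; provisioningClass is a simplified stand-in for the real interface.
package orchestrator

import (
	"errors"

	"k8s.io/autoscaler/cluster-autoscaler/processors/status"
)

// provisioningClass is a simplified stand-in for a per-ProvReq-class handler.
type provisioningClass interface {
	Provision() (*status.ScaleUpStatus, error)
}

// provisionAll runs every class; only a class that actually tried a scale-up
// determines the returned status, while errors from all classes are joined.
func provisionAll(classes []provisioningClass) (*status.ScaleUpStatus, error) {
	var combinedError error
	var orchestratorStatus *status.ScaleUpStatus
	for _, provClass := range classes {
		st, err := provClass.Provision()
		combinedError = errors.Join(combinedError, err) // assign the joined result back
		if st != nil && st.Result != status.ScaleUpNotTried {
			orchestratorStatus = st
		}
	}
	return orchestratorStatus, combinedError
}
```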
Force-pushed: 507f9f7 → 6144bac
/label tide/merge-method-squash
Thanks for the change and addressing my comments!
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: kisieland, towca, yaroslava-serdiuk. The full list of commands accepted by this bot can be found here. The pull request process is described here.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Part of the implementation of https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/provisioning-request.md
Special notes for your reviewer:
Does this PR introduce a user-facing change?