BookCapacity for ProvisioningRequest pods #6880
Conversation
Two resolved review threads on cluster-autoscaler/processors/provreq/provisioning_request_processor.go (marked outdated).
Force-pushed from 0b8fdab to c2c676e
Force-pushed from 7ac217c to 09ba963
/lgtm
@kisieland: changing LGTM is restricted to collaborators.
```go
// BookCapacity schedules fake pods for ProvisioningRequests that should have
// capacity reserved in the cluster.
func (p *provReqProcessor) BookCapacity(ctx *context.AutoscalingContext) error {
	// ... (body elided in the review excerpt)
```
Reviewer:
I think this should be implemented as PodListProcessor.Process() rather than introducing a new call to StaticAutoscaler (a sketch of that shape follows below):
- It's literally doing what a PLP is meant to do: changing the list of pods to be processed by CA (by injecting new ones).
- Scheduling pods on existing nodes is also generally done in a PLP; FilterOutSchedulable is the main place we do it.
- Injecting pods in a PLP in a similar way is a pretty well-established pattern in CA forks. I know you have access to the GKE fork; you can see how CapacityRequests do pretty much the same thing in a PLP.
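For concreteness, a minimal sketch of the suggested shape, assuming the upstream PodListProcessor interface (Process/CleanUp). fakePodsForProvisioningRequests and tryScheduleInSnapshot are hypothetical helpers standing in for logic the PR would provide; this illustrates the pattern, not the PR's actual code:

```go
package provreq

import (
	apiv1 "k8s.io/api/core/v1"
	"k8s.io/autoscaler/cluster-autoscaler/context"
)

// fakePodsForProvisioningRequests is a hypothetical helper standing in for the
// logic that lists accepted ProvisioningRequests and builds their fake pods.
func fakePodsForProvisioningRequests(ctx *context.AutoscalingContext) []*apiv1.Pod {
	return nil // stub for illustration
}

// tryScheduleInSnapshot is a hypothetical helper standing in for "scheduling"
// a fake pod into ctx.ClusterSnapshot via the scheduling simulator.
func tryScheduleInSnapshot(ctx *context.AutoscalingContext, pod *apiv1.Pod) error {
	return nil // stub for illustration
}

type bookCapacityProcessor struct{}

// Process books capacity by placing fake ProvisioningRequest pods into the
// cluster snapshot; the list of unschedulable pods is returned unchanged.
func (p *bookCapacityProcessor) Process(ctx *context.AutoscalingContext, unschedulablePods []*apiv1.Pod) ([]*apiv1.Pod, error) {
	for _, pod := range fakePodsForProvisioningRequests(ctx) {
		// Best effort: a fake pod that no longer fits (e.g. because the real
		// pods were already created) is simply skipped.
		_ = tryScheduleInSnapshot(ctx, pod)
	}
	return unschedulablePods, nil
}

// CleanUp implements the PodListProcessor interface.
func (p *bookCapacityProcessor) CleanUp() {}
```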
Author:
PodListProcessor processes the list of unschedulable pods and applies changes to it, whereas booking capacity does nothing to the unschedulable pods list and only modifies the cluster snapshot.
Sure, I can implement booking capacity as part of a PodListProcessor and just leave the unschedulable list untouched; is that what you're asking? However, I don't see the advantage of this approach.
```go
// CapacityReservation is an interface for reserving capacity in the cluster.
// (The method set shown here is inferred from the BookCapacity excerpt above.)
type CapacityReservation interface {
	BookCapacity(ctx *context.AutoscalingContext) error
}
```
Reviewer:
See my other comment: this is already generally done by PodListProcessors, and I think PLPs are better suited to the job. In most use cases where you book capacity, you also want to add any pods that don't fit to the list of pending pods in order to trigger scale-up.
Author:
> in most use cases where you book capacity you also want to add any pods that don't fit to the list of pending pods in order to trigger scale-up

I explicitly don't want to add pods that don't fit to the list of pending pods, because the scale-up for the ProvReq was already triggered; we just reserve capacity in a simple way by creating fake pods for the ProvReq. In fact, the real pods could already be created, so the fake pods won't fit in the cluster, and that is fine.
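As an aside, a hedged sketch of what such a fake pod could look like: an in-memory pod that copies the pod template of the ProvisioningRequest, so the scheduler simulation reserves the right resources. The fakePod helper and its naming scheme are illustrative assumptions, not the PR's actual code:

```go
package provreq

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fakePod builds the i-th in-memory pod for a ProvisioningRequest by copying
// its pod template. The pod is never created in the API server; it exists only
// so the cluster snapshot accounts for the reserved capacity.
func fakePod(template *apiv1.PodTemplateSpec, namespace, provReqName string, i int) *apiv1.Pod {
	return &apiv1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      fmt.Sprintf("%s-booking-%d", provReqName, i), // illustrative naming
			Namespace: namespace,
		},
		Spec: *template.Spec.DeepCopy(),
	}
}
```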
Reviewer:
Right, what I mean is that injecting in-memory pods is already a common pattern. This includes both pods that end up in the list of unschedulable pods and pods added directly to the snapshot.
In most cases when you inject pods, you expect whatever fits in the snapshot to go there and what is left over to trigger scale-up. Your use case only involves modifying the snapshot and not the list of pending pods, which makes it slightly unusual. But I think it's better to still do it in a PLP, even if it doesn't modify the list of pods:
- It is consistent with other implementations.
- What would be the expectation for future pod-injecting features that follow the pattern of "scheduling" as much as possible in the cluster snapshot and adding the leftovers to the list of unschedulable pods (see the sketch after this list)? Should those be implemented as a PLP or as the new processor? They both modify the list of pods and book capacity, and arguably so does FilterOutSchedulable, which is a well-established part of core CA logic.
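For contrast with the booking-only sketch above, a minimal sketch of the common pattern described here, again with a stubbed hypothetical tryScheduleInSnapshot helper: whatever fits stays in the snapshot, and the leftovers are appended to the unschedulable list so they can trigger a scale-up.

```go
package example

import (
	apiv1 "k8s.io/api/core/v1"
	"k8s.io/autoscaler/cluster-autoscaler/context"
)

// tryScheduleInSnapshot stands in for scheduling a pod into the cluster
// snapshot; assume it returns an error when the pod does not fit anywhere.
func tryScheduleInSnapshot(ctx *context.AutoscalingContext, pod *apiv1.Pod) error {
	return nil // stub for illustration
}

// injectingProcessor is a hypothetical PodListProcessor illustrating the
// "schedule what fits, surface the rest" pattern (FilterOutSchedulable-style).
type injectingProcessor struct {
	podsToInject []*apiv1.Pod
}

func (p *injectingProcessor) Process(ctx *context.AutoscalingContext, unschedulablePods []*apiv1.Pod) ([]*apiv1.Pod, error) {
	for _, pod := range p.podsToInject {
		if err := tryScheduleInSnapshot(ctx, pod); err != nil {
			// The pod does not fit in the snapshot: add it to the
			// unschedulable list so it can trigger a scale-up.
			unschedulablePods = append(unschedulablePods, pod)
		}
	}
	return unschedulablePods, nil
}

func (p *injectingProcessor) CleanUp() {}
```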
Author:
Implemented PLP
Force-pushed from 09ba963 to 69b8c9f
Force-pushed from 69b8c9f to 830bbb2
/pony
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: MaciekPytel, yaroslava-serdiuk
/unhold
/cherry-pick cluster-autoscaler-release-1.30
@yaroslava-serdiuk: new pull request created: #7057
What type of PR is this?
/kind feature
This is needed to prevent ScaleDown for ProvisioningRequests during the capacity-booking time.
Fixes #6517
Complete implementation for #6815