feat: InPlacePodVerticalScaling support #829
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Description
What problem are you trying to solve?
Currently, Karpenter makes scheduling and disruption decisions under the assumption that pod resource requests are immutable. InPlacePodVerticalScaling was introduced as an alpha feature in Kubernetes 1.27 and is targeting beta after 1.30. With InPlacePodVerticalScaling, pod resource requests and limits become mutable.
A common use case for InPlacePodVerticalScaling is mitigating startup cost for heavyweight applications such as Java services: a large resource request is allocated at startup, and once the Pod becomes ready, a controller lowers the request to free up resources on the node.
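For illustration, a Pod that opts in to in-place resize might look like the following sketch (the name, image, and values are hypothetical, and `resizePolicy` requires the InPlacePodVerticalScaling feature gate to be enabled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: startup-boost-demo   # hypothetical example pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    # Allow CPU and memory to be resized without restarting the container.
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      # Large initial request to speed up startup (e.g. JVM warmup).
      requests:
        cpu: "2"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 2Gi
```

Once the Pod is ready, a controller can lower `spec.containers[].resources` in place by patching the Pod (on newer Kubernetes versions, via the `resize` subresource), without deleting or recreating it.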
With the current Karpenter implementation, this pattern can create a loop: Karpenter provisions capacity for the large initial requests, the requests are lowered after startup, Karpenter then sees the node as underutilized and disrupts it, and the rescheduled pods start with large requests again.
Karpenter needs to be updated to recognize mutable resource requests in order to prevent such loops. Given the flexibility of InPlacePodVerticalScaling, this may be a difficult task if Karpenter itself does not understand the resource request mutation strategy.
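One possible direction, sketched below purely for illustration (none of these names are Karpenter's actual API), is to make consolidation decisions against the peak request observed for a pod rather than its current, possibly lowered, request, so a node is not deemed underutilized merely because a startup boost was released:

```python
# Hypothetical sketch: an underutilization check that treats pod resource
# requests as mutable, assuming a controller lowers requests post-startup.
from dataclasses import dataclass


@dataclass
class Pod:
    name: str
    cpu_request_m: int           # current CPU request (millicores)
    peak_cpu_request_m: int = 0  # highest request observed so far

    def observe(self) -> None:
        # Track the peak request across in-place resizes.
        self.peak_cpu_request_m = max(self.peak_cpu_request_m, self.cpu_request_m)


def node_is_underutilized(pods: list[Pod], node_capacity_m: int,
                          threshold: float = 0.5) -> bool:
    # Conservative: size against peak requests, since a rescheduled pod
    # would start at its large initial request again.
    used = sum(max(p.peak_cpu_request_m, p.cpu_request_m) for p in pods)
    return used / node_capacity_m < threshold


pods = [Pod("java-app", cpu_request_m=2000)]
for p in pods:
    p.observe()
pods[0].cpu_request_m = 500      # controller lowers the request post-startup
for p in pods:
    p.observe()
print(node_is_underutilized(pods, node_capacity_m=4000))  # False: peak is 2000m
```

A naive check against only the current 500m request would flag the node as underutilized and trigger the disruption loop described above.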
How important is this feature to you?
Many users have shown interest in the InPlacePodVerticalScaling feature; aws/containers-roadmap#512 provides some data points. As a cluster autoscaler, Karpenter's awareness of InPlacePodVerticalScaling is critical so that node usage efficiency can be further improved while keeping applications stable and performant.