[VPA] Ability to control amount of acceptable throttling #4230
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/close
@k8s-triage-robot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think this is still a valid feature request and should be reopened.
/reopen
@voelzmo: Reopened this issue.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@alex-berger: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Which component are you using?:
VPA
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
We have a number of workloads (cluster-autoscaler being a good example) where we have to constantly tweak the request:limit ratio to avoid the workload being throttled. This is toil, and we would like an automated way of handling it. These workloads are cluster-critical, so throttling is unacceptable.
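For background on quantifying "throttled": CPU limits translate to CFS quota, and the kernel exposes `nr_periods` and `nr_throttled` counters in the container cgroup's `cpu.stat` file, so "throttled X% of the time" can be read directly as their ratio. A minimal sketch, assuming the cgroup v2 `cpu.stat` layout (actually locating and reading the file for a given container is left out):

```python
# Sketch: compute the fraction of CFS enforcement periods in which a
# container was throttled, from the contents of its cgroup cpu.stat file.
def throttle_fraction(cpu_stat_text: str) -> float:
    """Parse cpu.stat content and return nr_throttled / nr_periods."""
    stats = {}
    for line in cpu_stat_text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            stats[key] = int(value)
    periods = stats.get("nr_periods", 0)
    if periods == 0:
        return 0.0  # no CFS quota active, or no periods elapsed yet
    return stats.get("nr_throttled", 0) / periods

# Example cpu.stat content (cgroup v2 field names):
sample = "usage_usec 1000000\nnr_periods 200\nnr_throttled 4\nthrottled_usec 50000"
print(throttle_fraction(sample))  # 4/200 = 0.02, i.e. throttled in 2% of periods
```

The same counters are what container monitoring stacks surface as throttling metrics, so a target like "not throttled at least 99% of the time" is directly measurable.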
Describe the solution you'd like.:
As proposed in this old issue, some way of configuring the VPA to set limits such that a workload is not throttled at least some percentage of the time.
Something like a container policy field

`noThrottlingTarget: 0.99`

that instructs the VPA to choose CPU limits such that the container's CPU is not throttled at least 99% of the time.

Describe any alternative solutions you've considered.:
The current workaround is to set the request:limit ratio very high, e.g. `request: 100m`, `limit: 1`. However this feels hacky, because the goal of setting limits is to ensure that runaway processes don't exhaust the CPU of a node.
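A related mitigation with today's VPA API is to hand-tune a generous limit once and tell the VPA to manage only requests, so the autoscaler never shrinks the limit back into throttling territory. A sketch using the real `controlledValues` field from the `autoscaling.k8s.io/v1` API (the target names are placeholders):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: cluster-autoscaler
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cluster-autoscaler
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        # Only requests are autoscaled; the generous hand-set limit
        # (e.g. limit: 1 alongside request: 100m) is left untouched.
        controlledValues: RequestsOnly
```

This avoids the VPA fighting the hand-tuned ratio, but it still leaves picking the limit itself as manual toil, which is what the proposed throttling target would automate.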