Admission controller should set limit in case of LimitRange with max ratio #1812

Closed
safanaj opened this issue Mar 22, 2019 · 8 comments

Labels
area/vertical-pod-autoscaler, help wanted, kind/feature, lifecycle/stale

Comments

safanaj commented Mar 22, 2019

When VPA lowers requests and a LimitRange with MaxLimitRequestRatio is deployed in the namespace, the pod can be rejected because the limit/request ratio for some resource is no longer respected.

The admission controller should be aware of limit ranges and, if needed, explicitly decrease limits to respect the max ratio.
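
To illustrate the arithmetic behind the proposal, here is a minimal Go sketch (not actual VPA code), assuming k8s.io/apimachinery's resource.Quantity; capLimitToMaxRatio is a hypothetical helper name:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// capLimitToMaxRatio is a hypothetical helper showing the proposal: after VPA
// lowers the request, shrink the limit so that
// limit/request <= maxLimitRequestRatio from the namespace LimitRange.
func capLimitToMaxRatio(request, limit resource.Quantity, maxRatio float64) resource.Quantity {
	maxLimit := int64(float64(request.MilliValue()) * maxRatio)
	if limit.MilliValue() > maxLimit {
		return *resource.NewMilliQuantity(maxLimit, limit.Format)
	}
	return limit
}

func main() {
	// Example: the container originally had 1 CPU request / 2 CPU limit (ratio 2)
	// and the LimitRange sets maxLimitRequestRatio: 2 for cpu. VPA lowers the
	// request to 200m; keeping the 2 CPU limit gives ratio 10, so the pod is
	// rejected unless the limit is lowered as well.
	request := resource.MustParse("200m")
	limit := resource.MustParse("2")
	fmt.Println(capLimitToMaxRatio(request, limit, 2).String()) // prints "400m"
}
```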

bskiba added area/vertical-pod-autoscaler, help wanted, kind/feature labels on Mar 22, 2019

bskiba commented Mar 22, 2019

Thank you for filing this. I agree this would be a good thing for VPA to check.
One thing I would modify: with the current implementation, where VPA doesn't change limits at all, we would want to modify the request rather than the limit so that MaxLimitRequestRatio is not violated.

@kgolab This is relevant to the discussion on VPA controlling limits: Kubernetes offers a way to express the maximum desired limit/request ratio.
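
A sketch of that alternative, reusing resource.Quantity as in the snippet above; raiseRequestToRespectRatio is a hypothetical name, not an existing VPA function:

```go
import (
	"math"

	"k8s.io/apimachinery/pkg/api/resource"
)

// raiseRequestToRespectRatio keeps the limit untouched (as VPA currently does)
// and instead bumps the recommended request up to limit/maxLimitRequestRatio,
// so the LimitRange ratio check still passes. Rounds up to stay on the safe side.
func raiseRequestToRespectRatio(recommended, limit resource.Quantity, maxRatio float64) resource.Quantity {
	minRequest := int64(math.Ceil(float64(limit.MilliValue()) / maxRatio))
	if recommended.MilliValue() < minRequest {
		return *resource.NewMilliQuantity(minRequest, recommended.Format)
	}
	return recommended
}
```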

safanaj commented Mar 22, 2019

I think that in VPA objects the user can set minAllowed to keep the ratio respected, but in scenarios where some pods are really wasting requests, VPA should (optionally) be able to take care of limits as well.

Consider a scenario where Kubernetes is offered as a PaaS and whoever manages the cluster has no control over the workload definitions that get deployed. When using LimitRanger with a ratio plus default requests and limits, that "static" approach does not fit well if the goal is high utilization.

Optionally allowing VPA to set limits would be helpful, so users don't have to implement another webhook that takes care of this situation.

kgolab commented Mar 22, 2019

Looking at the LimitRange spec, I think we need one more change in VPA: LimitRange min & max should be treated similarly to VPA minAllowed & maxAllowed. The question is whether it should be the recommender or the admission controller that takes care of it; personally I'd go for the latter, so we keep the recommender unconstrained by configuration outside of VPA objects.

OTOH, being used to Unix and its "small utils + pipeline" methodology, I also like the approach of having a separate webhook that takes care of enforcing the LimitRange after VPA makes its (current) changes. If we went this route, it could take care of MaxLimitRequestRatio too.
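
For the min/max part, something along these lines inside the admission controller would mirror how minAllowed/maxAllowed are applied today (again just a sketch; clampToLimitRange is a hypothetical name):

```go
import "k8s.io/apimachinery/pkg/api/resource"

// clampToLimitRange treats the per-container min/max from the namespace
// LimitRange the same way as VPA minAllowed/maxAllowed: the recommended
// request is clamped into [min, max] before it is written to the pod.
func clampToLimitRange(recommended, min, max resource.Quantity) resource.Quantity {
	if recommended.Cmp(min) < 0 {
		return min
	}
	if recommended.Cmp(max) > 0 {
		return max
	}
	return recommended
}
```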

bskiba commented Mar 22, 2019

LimitRange is also an admission webhook, so it could be configured to run after the VPA one, but it is currently only a validating webhook - it doesn't change the spec to match constraints, just rejects it if incorrect.

kgolab commented Mar 22, 2019

What I meant (and what I think @safanaj wrote) was that there would have to be a new mutating webhook that applies the LimitRange constraints before they get enforced. And I actually like the idea of having that instead of adding the functionality to VPA's admission controller (though I don't necessarily prefer it).
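
Roughly, such a standalone webhook could look like the sketch below. This is only a shape sketch: applyLimitRange is a hypothetical placeholder for the min/max and ratio logic from the earlier snippets, and the rest is the usual mutating-webhook boilerplate (shown with k8s.io/api/admission/v1; at the time this issue was filed it would have been v1beta1).

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
)

// applyLimitRange is a placeholder: it would compute JSONPatch operations that
// bring the pod's requests/limits within the namespace LimitRange (the min/max
// clamping and ratio capping sketched in the earlier comments).
func applyLimitRange(pod *corev1.Pod) []map[string]interface{} {
	return nil // omitted here
}

// handleMutate sketches the shape of the standalone mutating webhook discussed
// above: it runs after the VPA admission controller and patches the pod so the
// built-in LimitRanger validation no longer rejects it.
func handleMutate(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var pod corev1.Pod
	if err := json.Unmarshal(review.Request.Object.Raw, &pod); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: true,
	}
	if ops := applyLimitRange(&pod); len(ops) > 0 {
		patch, _ := json.Marshal(ops)
		patchType := admissionv1.PatchTypeJSONPatch
		review.Response.Patch = patch
		review.Response.PatchType = &patchType
	}

	resp, _ := json.Marshal(review)
	w.Header().Set("Content-Type", "application/json")
	w.Write(resp)
}

func main() {
	http.HandleFunc("/mutate", handleMutate)
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
```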

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jun 20, 2019

bskiba commented Jun 25, 2019

/close
fixed by @jbartosik

@k8s-ci-robot

@bskiba: Closing this issue.

In response to this:

/close
fixed by @jbartosik

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
