Ac take care of limit range #1813
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
If they are not already assigned, you can assign the PR to them by writing. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from 2403a97 to 98d2a05.
Thank you for this PR. There is good work here. We will review early next week; we are still discussing some points.
```diff
@@ -0,0 +1,251 @@
/*
Copyright 2018 The Kubernetes Authors.
```
nit: 2019
```go
}

type interestingData struct {
	// Min v1.ResourceList
```
Should this be part of this PR?
```go
var _ LimitsChecker = &limitsChecker{}

// NewLimitsChecker create a LimitsChecker
```
nit: creates
```go
var _ LimitsChecker = &neverNeedsLimitsChecker{}

func (lc *neverNeedsLimitsChecker) NeedsLimits(pod *v1.Pod, containersResources []ContainerResources) LimitsHints {
	return LimitsHints((*LimitRangeHints)(nil))
```
It might be simpler if we returned an object with two empty lists instead of nil.
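The comment above points at a well-known Go pitfall. The sketch below uses simplified stand-in types (the names `LimitsHints`/`LimitRangeHints` mirror the PR, but the shapes here are assumptions, not the real VPA definitions) to show why a typed nil returned through an interface is awkward and why an empty value is simpler for callers.

```go
package main

import "fmt"

// Simplified stand-ins for the PR's types; shapes are assumptions
// for illustration only.
type LimitRangeHints struct {
	requestsExceedsRatio []map[string]bool
}

type LimitsHints interface {
	IsNil() bool
}

func (h *LimitRangeHints) IsNil() bool { return h == nil }

func main() {
	// Returning (*LimitRangeHints)(nil) through the interface yields a
	// "typed nil": the interface value itself is non-nil.
	var typedNil LimitsHints = (*LimitRangeHints)(nil)
	fmt.Println(typedNil == nil) // false

	// Returning an empty value instead sidesteps nil checks at call sites.
	empty := &LimitRangeHints{requestsExceedsRatio: []map[string]bool{}}
	fmt.Println(empty.IsNil()) // false
}
```

With an empty object, every call site can iterate or look up hints directly without first testing for nil.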
```diff
@@ -121,7 +121,7 @@ func (vpa *Vpa) UseAggregationIfMatching(aggregationKey AggregateStateKey, aggre
 	}
 }

-// UsesAggregation returns true iff an aggregation with the given key contributes to the VPA.
+// UsesAggregation returns true if an aggregation with the given key contributes to the VPA.
```
This is not a typo: "iff" means "if and only if".
```go
var _ LimitsChecker = &limitsChecker{}

// NewLimitsChecker create a LimitsChecker
func NewLimitsChecker(i interface{}) LimitsChecker {
```
Can we have a SharedInformerFactory passed here?
```diff
@@ -75,11 +78,14 @@ func main() {
 	vpaLister := vpa_api_util.NewAllVpasLister(vpaClient, make(chan struct{}))
 	kubeClient := kube_client.NewForConfigOrDie(config)
 	factory := informers.NewSharedInformerFactory(kubeClient, defaultResyncPeriod)
+	if *allowToAdjustLimits {
+		factoryForLimitsChecker = factory
```
I'd rather explicitly pass the no-op LimitsChecker when `*allowToAdjustLimits` is false.
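A minimal sketch of the wiring this comment suggests, with simplified stand-in types (the real `NeedsLimits` takes a pod and container resources, and construction takes an informer factory): decide once, at construction time, which implementation to use, so the rest of the code always holds a non-nil checker and never re-tests the flag.

```go
package main

import "fmt"

// Simplified stand-ins for the PR's checker types.
type LimitsChecker interface {
	NeedsLimits() bool
}

type limitsChecker struct{}

func (lc *limitsChecker) NeedsLimits() bool { return true }

type neverNeedsLimitsChecker struct{}

func (lc *neverNeedsLimitsChecker) NeedsLimits() bool { return false }

// newChecker branches on the flag exactly once and returns the
// appropriate implementation, no-op included.
func newChecker(allowToAdjustLimits bool) LimitsChecker {
	if allowToAdjustLimits {
		return &limitsChecker{}
	}
	return &neverNeedsLimitsChecker{}
}

func main() {
	fmt.Println(newChecker(false).NeedsLimits()) // false
	fmt.Println(newChecker(true).NeedsLimits())  // true
}
```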
```go
// Set limit if needed
if limitsHints.RequestsExceedsRatio(i, resource) {
	// we need just to take care of max ratio
```
Why?
```go
// LimitRangeHints implements LimitsHints interface
type LimitRangeHints struct {
	requestsExceedsRatio []map[v1.ResourceName]bool
```
Since v1.ResourceList is already a map, can we store `proposedLimits map[container_name]proposed_limit_changes`? Then:

```go
requestExceedsRatio(container_name, resourceName) bool {
	_, found := proposedLimits[container_name][resourceName]
	return found
}
```
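A runnable sketch of this suggested layout, with `ResourceName` and string quantities as simplified stand-ins for the Kubernetes API types: proposed limit changes are keyed by container name, so the ratio check collapses into a nested map lookup.

```go
package main

import "fmt"

// ResourceName stands in for v1.ResourceName; the string values stand in
// for resource.Quantity.
type ResourceName string

// proposedLimits maps container name -> resource -> proposed limit change.
type proposedLimits map[string]map[ResourceName]string

func (p proposedLimits) requestExceedsRatio(container string, resource ResourceName) bool {
	// Indexing a missing outer key yields a nil inner map, and indexing a
	// nil map is safe in Go, so no existence checks are needed here.
	_, found := p[container][resource]
	return found
}

func main() {
	p := proposedLimits{"app": {"cpu": "500m"}}
	fmt.Println(p.requestExceedsRatio("app", "cpu"))     // true
	fmt.Println(p.requestExceedsRatio("app", "memory"))  // false
	fmt.Println(p.requestExceedsRatio("sidecar", "cpu")) // false
}
```

The design choice here is that "a proposed limit exists" and "the request exceeds the ratio" become the same fact, tracked in one structure instead of parallel slices.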
```go
limitranges, err := lc.limitrangeLister.
	LimitRanges(pod.GetNamespace()).
	List(labels.Everything())
if err == nil {
```
I'd rather fail fast:

```go
if err != nil {
	return nil
}
```
Also, it would be good to log the error.
```go
	continue
}
foundInterstingData = true
id.parse(&lri)
```
Is it possible to have multiple items with a non-nil MaxLimitRequestRatio per namespace?
```diff
@@ -0,0 +1,236 @@
/*
Copyright 2018 The Kubernetes Authors.
```
nit: 2019
Thank you very much for this PR; it is needed functionality for VPA. I left some initial comments. My main concern is: why do we only care about MaxRatio? I assume we can also easily violate the MinRatio constraint.
Thanks for the comments, I will take care of these as soon as I have time.
Force-pushed from 98d2a05 to 5304a4b.
Force-pushed from 49725d3 to cb63756.
To allow the admission controller to set limits on containers when needed, because of a LimitRange in the namespace with defaults and a max ratio. This feature has to be explicitly enabled by passing the flag `--allow-to-adjust-limits`.
Force-pushed from cb63756 to 8b813d5.
Hi @bskiba, I am sorry for the delay, but it was a very busy month for me.
The minimum ratio is 1:1, which would make limits equal to requests and put the pod in the Guaranteed QoS class. In the LimitRange API there is no MinRatio property.
Yes, it is possible, and in that case all ratios have to be respected, so I take the minimum one. For the default, considering the limit quantity when the container has no explicit limit is less clear (and I am not sure my code is correct), because it looks like the LimitRanger sets the default limit from the first LimitRange it gets; the problem is that a query using client-go could return a different order than what the LimitRanger sees on the server side. I didn't verify this, but I think it is quite uncommon to have multiple LimitRanges for the same resources in a namespace.
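The "take the minimum one" logic described above can be sketched as follows, with plain float64 values standing in for resource.Quantity: the minimum MaxLimitRequestRatio is the only one that satisfies every LimitRange in the namespace at once.

```go
package main

import "fmt"

// minMaxRatio returns the most restrictive (smallest) max limit/request
// ratio from a set of LimitRange items, and whether any ratio was found.
func minMaxRatio(ratios []float64) (float64, bool) {
	if len(ratios) == 0 {
		return 0, false
	}
	minRatio := ratios[0]
	for _, r := range ratios[1:] {
		if r < minRatio {
			minRatio = r
		}
	}
	return minRatio, true
}

func main() {
	m, ok := minMaxRatio([]float64{4, 2, 3})
	fmt.Println(m, ok) // 2 true
}
```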
Thanks for the changes and the explanation. It was my oversight that I didn't check properly that there is no corresponding minLimit, apologies.
I'm keeping the original CL from kubernetes#1813 and applying the changes requested in the review in a separate CL, to keep authorship information clean.
@safanaj I believe the other PRs (based on this one) fix the original issue. I'm closing this; let me know if there is something still to be addressed.
@bskiba: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm keeping the original CL from kubernetes#1813 and applying the changes requested in the review in a separate CL, to keep authorship information clean. Conflicts because master has the VPA preprocessor, resolved manually: vertical-pod-autoscaler/pkg/admission-controller/logic/server_test.go, vertical-pod-autoscaler/pkg/admission-controller/main.go
#1812