Fix tolerations #843
Conversation
@ravisantoshgudimetla: GitHub didn't allow me to request PR reviews from the following users: sjenning, derekwaynecarr. Note that only operator-framework members and repo collaborators can review this PR, and authors cannot review their own PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@ecordell - I see that there are some hardcoded tolerations in pkg/controller/registry/reconciler/grpc.go; do we need those tolerations there?
/retest
Just fixed several flakes in the tests and will rebase this so that it merges soon. |
/retest
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ecordell, ravisantoshgudimetla. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest
In 4.1, we have taint-based evictions enabled. This significantly changes the logic of the node lifecycle controller in the kube-controller-manager.
Historically, the node lifecycle controller would directly evict pods from nodes whose Ready condition had been False or Unknown for longer than the pod eviction timeout set by the --pod-eviction-timeout flag on the kube-controller-manager. This setting applied to all pods cluster-wide; the default was 5m.
With taint-based evictions, all the node lifecycle controller does is apply the node.kubernetes.io/unreachable and/or node.kubernetes.io/not-ready taints with the NoExecute effect to the node. This would normally result in the immediate eviction of all pods that don't tolerate those taints, breaking the old pod eviction timeout behavior. Enter the DefaultTolerationSeconds mutating admission plugin:
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds/admission.go
This plugin adds a default NoExecute toleration for the node.kubernetes.io/unreachable and/or node.kubernetes.io/not-ready taints with a tolerationSeconds of 5m (300s), as long as no such toleration is already specified in the pod spec. This restores the old pod eviction timeout behavior.
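For illustration, here is roughly what those taints and the injected tolerations look like as k8s.io/api/core/v1 values (a sketch of the behavior described above, not code from this PR):

```go
package example

import corev1 "k8s.io/api/core/v1"

// Taints the node lifecycle controller places on a NotReady or unreachable
// node. NoExecute means pods that don't tolerate them get evicted.
var nodeLifecycleTaints = []corev1.Taint{
	{Key: "node.kubernetes.io/not-ready", Effect: corev1.TaintEffectNoExecute},
	{Key: "node.kubernetes.io/unreachable", Effect: corev1.TaintEffectNoExecute},
}

// Tolerations DefaultTolerationSeconds injects into any pod spec that does
// not already tolerate these taints: tolerate for 300s (5m), then allow
// eviction, matching the old --pod-eviction-timeout default.
var defaultSeconds = int64(300)

var injectedTolerations = []corev1.Toleration{
	{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &defaultSeconds,
	},
	{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &defaultSeconds,
	},
}
```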
One of the intended effects of this change is to make the pod eviction timeout a pod-level property. Different applications require different timeouts depending on their design, and controlling it at the cluster level was not optimal. The side effect is that pods won't be scheduled onto nodes that have disk, memory, or CPU pressure.
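A pod that needs a different timeout can simply declare its own toleration, since the plugin only defaults pods that don't already tolerate the taint. A minimal sketch, with an arbitrary 60s value chosen purely for illustration:

```go
package example

import corev1 "k8s.io/api/core/v1"

// A pod that wants faster failover than the 5m default declares its own
// toleration; DefaultTolerationSeconds leaves it untouched because a
// matching toleration is already present. 60s is an arbitrary example value.
var sixtySeconds = int64(60)

var fastFailoverSpec = corev1.PodSpec{
	Tolerations: []corev1.Toleration{{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &sixtySeconds, // evicted after 1m instead of 5m
	}},
}
```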
The DefaultTolerationSeconds plugin has flags that allow adjusting the default tolerationSeconds:
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds/admission.go#L34-L40
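Paraphrasing the linked lines, the defaults are ordinary flags (kube-apiserver exposes them as --default-not-ready-toleration-seconds and --default-unreachable-toleration-seconds), roughly:

```go
package defaulttolerationseconds

import "flag"

// Paraphrased sketch of the linked flag definitions; see the file above for
// the exact upstream wording.
var (
	defaultNotReadyTolerationSeconds = flag.Int64("default-not-ready-toleration-seconds", 300,
		"tolerationSeconds of the notReady:NoExecute toleration added by default to every pod that lacks one")

	defaultUnreachableTolerationSeconds = flag.Int64("default-unreachable-toleration-seconds", 300,
		"tolerationSeconds of the unreachable:NoExecute toleration added by default to every pod that lacks one")
)
```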
We might expose this tunable for user control in the future and do not want the cluster control plane components to be subject to it.
Thus, this PR explicitly defines a NoExecute toleration for the node.kubernetes.io/unreachable and/or node.kubernetes.io/not-ready taints with a tolerationSeconds generically appropriate for cluster components.
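The shape of the change is roughly the following (a sketch; the 120s value is illustrative, not a quote from this PR's diff):

```go
package example

import corev1 "k8s.io/api/core/v1"

// Explicit tolerations pinned on a control-plane component's pod spec so the
// admission plugin's (possibly user-tuned) default never applies to it.
var componentSeconds = int64(120) // illustrative value

var controlPlaneTolerations = []corev1.Toleration{
	{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &componentSeconds,
	},
	{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &componentSeconds,
	},
}
```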
Once these changes are in across all components, this e2e test will enforce it:
openshift/origin#22752
Please refer to this doc if you have questions about what tolerations you can have:
https://docs.google.com/document/d/1W449BfB5la9NC7pcDxkovgzlBlHMy5Lkj0-WXWjdiTU/edit#
/cc @sjenning @smarterclayton @derekwaynecarr