

Reduce scope of default tolerations #363

Closed
wongma7 opened this issue Mar 16, 2021 · 5 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@wongma7
Contributor

wongma7 commented Mar 16, 2021

/kind bug

What happened? The default toleration with `operator: Exists` is too broad; we should tone it down in the next Helm chart release: kubernetes-sigs/aws-ebs-csi-driver#758 (comment)
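For context, the catch-all toleration being discussed looks roughly like this in a pod spec (an illustrative sketch, not the exact values from the chart):

```yaml
# Illustrative only: with no key and operator: Exists, this
# toleration matches every taint, including NoExecute taints
# such as node.kubernetes.io/not-ready.
tolerations:
  - operator: Exists
```

Because it matches every taint regardless of key or effect, a pod carrying this toleration is never evicted from tainted or draining nodes.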

What you expected to happen?

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
  • Driver version:
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Mar 16, 2021
@wongma7 wongma7 self-assigned this Mar 16, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2021
@wongma7
Contributor Author

wongma7 commented Jun 14, 2021

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2021
@sziegler-skyhook

sziegler-skyhook commented Aug 12, 2021

This is a major issue that prevents the EFS CSI pods from being evicted when a node is scaled down by the Kubernetes cluster-autoscaler.

Allowing the pod to tolerate all taints seems to be in direct opposition to the way NotReady and NoExecute taints are designed in Kubernetes, and affects the operation of other Kubernetes system pods.
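One way to narrow the default (a hedged sketch, assuming the driver only needs to schedule onto tainted nodes rather than survive eviction) is to restrict the blanket toleration to the NoSchedule effect, so NoExecute taints can still evict the pod:

```yaml
# Illustrative sketch: tolerate any NoSchedule taint, but not
# NoExecute taints, so taints like node.kubernetes.io/not-ready
# can still evict the pod when a node becomes unhealthy or is
# scaled down.
tolerations:
  - operator: Exists
    effect: NoSchedule
```

This keeps the pod schedulable on specially tainted nodes while leaving the Kubernetes eviction machinery intact.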

@sziegler-skyhook

sziegler-skyhook commented Aug 12, 2021

It looks like the default toleration was removed from controller-deployment.yaml as part of release v1.3.1 - this issue can likely be closed.

494d75e#diff-5d40e4554aa98fe9c294f9ad03acc2e515a612f31678f3c2282d6c8f19415b51

@wongma7
Contributor Author

wongma7 commented Aug 12, 2021

Yes, it should be fixed by the latest driver + Helm chart.

@wongma7 wongma7 closed this as completed Aug 12, 2021

4 participants