Single node clusters support #1347
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages untriaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Any movement on this? It would be great to have!
This issue becomes even more annoying when switching from a CronJob to a Deployment workload: since the descheduler aborts, the Deployment's pod crash-loops. I suggest adding an option to enable support for single-node clusters. The default could still be to abort on single-node clusters, so that users remain aware of the additional disruption of running the descheduler with just one cluster node.
At work, I scale replicas down to 1/2 on many of our apps during night time, which reduces the number of nodes needed to 1 and causes the same error as above. It would be good if some sort of flag, such as a single-node mode, suppressed this error.
I would like to use the descheduler in a single-node Kubernetes cluster.
It is useful to me for automatically deleting pods with too many restarts, or failed pods.
Is it possible to override this check in any way?
```go
if len(nodes) <= 1 {
	klog.V(1).InfoS("The cluster size is 0 or 1 meaning eviction causes service disruption or degradation. So aborting..")
	return fmt.Errorf("the cluster size is 0 or 1")
}
```
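The commenters above suggest gating this check behind an opt-in option that defaults to the current abort behavior. A minimal sketch of what that could look like, extracted as a standalone function — the `allowSingleNode` option is hypothetical and not part of the descheduler's actual API:

```go
package main

import "fmt"

// validateClusterSize mirrors the descheduler's node-count check, extended
// with a hypothetical allowSingleNode option. With the option unset, the
// behavior is unchanged: clusters of size 0 or 1 cause an abort.
func validateClusterSize(nodeCount int, allowSingleNode bool) error {
	if nodeCount == 0 {
		// An empty cluster is always an error; there is nothing to evict to.
		return fmt.Errorf("the cluster size is 0 or 1")
	}
	if nodeCount == 1 && !allowSingleNode {
		// Default: abort, since eviction on a single node causes disruption.
		return fmt.Errorf("the cluster size is 0 or 1")
	}
	return nil
}

func main() {
	fmt.Println(validateClusterSize(1, false)) // default still aborts
	fmt.Println(validateClusterSize(1, true))  // opt-in permits a single node
	fmt.Println(validateClusterSize(3, false)) // multi-node clusters unaffected
}
```

Keeping the abort as the default preserves the current safety behavior, while the opt-in covers the single-node use cases described in this issue (clearing crash-looping or failed pods on one-node clusters).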