Disabling or delaying leader elections #20
FWIW I would be willing to come up with a PR, given that we can agree on what options to expose and how. There is a wealth of options exposed by https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/blob/203b2c9cdf9c44e504af7d8a15d9df6642cd9ea5/controller/controller.go#L623-L647 , which I think is what the nfs-client-provisioner uses. Thoughts?
@rombert - that would be great. Similar functionality was added to another provisioner via this PR: kubernetes-sigs/nfs-ganesha-server-and-external-provisioner#11
Thanks @kmova . I'll try and submit a similar PR with a single change - disabling leader election. |
I am running the `nfs-client-provisioner` in a simple self-hosted cluster. I am trying to narrow down excessive control plane disk writes, and traced lots of them back to `etcd` writing to `/registry/services/endpoints/FOO/cluster.local-nfs-client-nfs-client-provisioner`. I got 896 writes to that key in 30 minutes, so it looks like once every two seconds, which is quite a lot (actually half of the current writes, after disabling leader election for `kube-scheduler` and `kube-controller-manager`).

I tried to find a way of disabling or tweaking the leader election timeout but did not find one. It would be great if this could be offered for simple deployments that don't need the redundancy of multiple replicas.
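For a single-replica deployment like the one described above, the change discussed in this thread could be exposed as a functional option on the controller's constructor. The sketch below is illustrative only: `Config`, `Option`, `DisableLeaderElection`, and `NewConfig` are hypothetical names for this example, not the actual API of sig-storage-lib-external-provisioner.

```go
package main

import "fmt"

// Config is a hypothetical stand-in for the provisioner controller's
// configuration; the real controller exposes many more knobs.
type Config struct {
	// LeaderElection toggles the election loop that periodically
	// updates the endpoints object in etcd.
	LeaderElection bool
	// RetryPeriodSeconds is a hypothetical field for how often the
	// elected leader refreshes its record (the source of the ~2s writes).
	RetryPeriodSeconds int
}

// Option is a functional option, a common pattern for exposing
// optional controller settings without breaking existing callers.
type Option func(*Config)

// DisableLeaderElection mirrors the single change proposed in this
// issue: skip leader election entirely for single-replica deployments,
// eliminating the periodic etcd writes.
func DisableLeaderElection() Option {
	return func(c *Config) { c.LeaderElection = false }
}

// NewConfig applies options over safe defaults (election enabled).
func NewConfig(opts ...Option) *Config {
	c := &Config{LeaderElection: true, RetryPeriodSeconds: 2}
	for _, opt := range opts {
		opt(c)
	}
	return c
}

func main() {
	c := NewConfig(DisableLeaderElection())
	fmt.Println(c.LeaderElection) // prints "false"
}
```

Keeping the default as "enabled" preserves current behavior for multi-replica users, while a single boolean option covers the simple-deployment case raised here.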