Rolling Update Usually Results in etcd/api-server Related Downtime #9464
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
1. What `kops` version are you running? The command `kops version` will display this information.

Version 1.17.0
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.

1.17.6
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops rolling-update cluster ${CLUSTER_NAME} --yes --force --instance-group-roles Master
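To watch the roll while it runs, something like this in a second terminal works (just a sketch; the polling interval is arbitrary):

```bash
# Watch cluster validation in another terminal while the rolling update runs.
# `kops validate cluster` returns a non-zero exit code while the cluster is unhealthy.
while true; do
  if kops validate cluster --name "${CLUSTER_NAME}" > /dev/null 2>&1; then
    echo "$(date -u +%H:%M:%S) cluster validates"
  else
    echo "$(date -u +%H:%M:%S) cluster NOT validating"
  fi
  sleep 10
done
```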
5. What happened after the commands executed?
This is an HA 3-master cluster on AWS using m5.2xlarge instances with attached gp2 EBS volumes for the etcd disks. I'm running etcd 3.4.3 with etcd-manager.
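For reference, the etcd part of the cluster spec looks roughly like this (a sketch, not the exact spec; instance group names and volume sizes are placeholders):

```yaml
# Excerpt from `kops get cluster ${CLUSTER_NAME} -o yaml` (values are placeholders)
etcdClusters:
- name: main
  version: 3.4.3
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
    volumeType: gp2
    volumeSize: 20
  - name: b
    instanceGroup: master-us-east-1b
    volumeType: gp2
    volumeSize: 20
  - name: c
    instanceGroup: master-us-east-1c
    volumeType: gp2
    volumeSize: 20
- name: events
  version: 3.4.3
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
  - name: b
    instanceGroup: master-us-east-1b
  - name: c
    instanceGroup: master-us-east-1c
```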
This isn't fully deterministic, but after a master gets taken down in a rolling update I often see some variation of the following.
The API server logs etcd-related errors -- either a leader election error or a timeout error.
Often this causes the API server to restart in an unhealthy state.
But this seems to resolve quickly and the API server gets back to normal. However, during this time, services that rely on the kube API server get timeout errors when trying to connect to it.
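To put a rough number on that window, a probe like the one below can be left running during the roll. It's only a sketch; it uses the API server's /healthz/etcd check as a stand-in for "can clients reach etcd through the API server":

```bash
# Hit the API server's etcd health check once a second during the rolling update.
# Any error or non-"ok" response marks a second of effective API downtime.
while true; do
  status=$(kubectl get --raw '/healthz/etcd' --request-timeout=2s 2>&1)
  echo "$(date -u +%H:%M:%S) ${status}"
  sleep 1
done
```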
Over on the etcd side, I see the usual logs about a peer member not being reachable while the replaced peer is down. However, I also see some timeout errors and similar messages in the etcd logs.
These, too, resolve after a few minutes and don't seem to come back.
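To watch the leader change itself, etcd can be queried directly from one of the surviving masters. The snippet below is only a sketch: 4001 is (as far as I know) the default client port for the kops main etcd cluster, and ETCD_CA/ETCD_CERT/ETCD_KEY are placeholders for wherever etcd-manager keeps the client certificates on the host:

```bash
# Run on a surviving master. Re-running this during the roll shows when the
# leader moves and roughly how long the election takes.
# ETCD_CA, ETCD_CERT and ETCD_KEY are placeholders for the etcd client
# CA/cert/key paths used by etcd-manager on the host.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:4001 \
  --cacert="${ETCD_CA}" --cert="${ETCD_CERT}" --key="${ETCD_KEY}" \
  endpoint status --cluster -w table

# Quick reachability check across all members:
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:4001 \
  --cacert="${ETCD_CA}" --cert="${ETCD_CERT}" --key="${ETCD_KEY}" \
  endpoint health --cluster
```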
It seems like any time an etcd leader election happens, we're guaranteed at least 30 seconds of downtime and some odd startup behavior. I'm not sure what to do here. A 3-node etcd cluster should survive the loss of one of its members, but currently if one node goes down, Kubernetes becomes effectively unavailable. This is a problem for our cluster because it relies on the Kubernetes API server for a lot of functionality.
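One thing that might be worth experimenting with, assuming the kops version in use supports spec.etcdClusters[].manager.env (the mechanism the kops docs describe for passing ETCD_* environment variables through etcd-manager), is tuning etcd's heartbeat and election timeouts. I haven't verified that this shortens the window; this is only a sketch of where the knobs would go, with example values:

```yaml
# Hypothetical excerpt from `kops edit cluster`: pass etcd tuning settings through
# etcd-manager as ETCD_* environment variables. Values are examples only.
etcdClusters:
- name: main
  version: 3.4.3
  manager:
    env:
    - name: ETCD_HEARTBEAT_INTERVAL   # etcd --heartbeat-interval, in ms
      value: "250"
    - name: ETCD_ELECTION_TIMEOUT     # etcd --election-timeout, in ms
      value: "2500"
  # etcdMembers: ... unchanged ...
```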