Merge pull request #24911 from roycaihw/restore-etcd
document one should restart all system components after restoring etcd
k8s-ci-robot authored Dec 7, 2020
2 parents 0f966a7 + c617542 commit b905af1
Showing 1 changed file with 13 additions and 0 deletions.
13 changes: 13 additions & 0 deletions content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
@@ -200,6 +200,19 @@ If the access URLs of the restored cluster are changed from the previous cluster,

If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although scheduled pods might continue to run, no new pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure the Kubernetes API server to fix the issue.
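
One way to confirm which members have actually failed is to query endpoint health with `etcdctl`. This is a minimal sketch only; the endpoint list and certificate paths below are illustrative assumptions and must match your own cluster.

```shell
# Check the health of each etcd member (endpoints and cert paths are
# hypothetical examples; substitute your own values).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379 \
  --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/client.crt --key=/etc/etcd/client.key \
  endpoint health
```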

{{< note >}}
If any API servers are running in your cluster, you should not attempt to restore instances of etcd.
Instead, follow these steps to restore etcd:

- stop *all* kube-apiserver instances
- restore state in all etcd instances
- restart all kube-apiserver instances

We also recommend restarting any other components (for example kube-scheduler, kube-controller-manager, kubelet) to ensure that they don't
rely on stale data. Note that in practice the restore takes a bit of time.
During the restoration, critical components will lose their leader lock and restart themselves. A command-level sketch of the restore sequence follows this note.
{{< /note >}}
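
As a rough illustration of the three steps above: the exact commands depend on how your control plane is deployed (static pods, systemd units, or a managed service), and the snapshot path, data directory, and unit names below are assumptions rather than part of this change.

```shell
# Hedged sketch: assumes kube-apiserver and etcd run as systemd units and
# that /var/backups/etcd-snapshot.db is an existing snapshot. Adjust paths
# and unit names for static-pod or managed deployments.

# 1. Stop every kube-apiserver instance (repeat on each control plane node).
sudo systemctl stop kube-apiserver

# 2. On each etcd member, restore the snapshot into a fresh data directory.
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored

# 3. Point each etcd member at the restored data directory and restart it.
sudo systemctl restart etcd

# 4. Start every kube-apiserver instance again.
sudo systemctl start kube-apiserver

# 5. Restart kube-scheduler, kube-controller-manager, and kubelet so they
#    do not act on stale data.
```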
## Upgrading and rolling back etcd clusters
As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for