ETCD snapshot/restore support #7796
Comments
Thanks @musaprg - this is a really interesting idea IMO. I think it could go a long way toward helping out in disaster recovery scenarios. A couple of questions:
It would be great to get a doc together with answers to some of these questions, an overall problem statement, and a start on implementation details, so that people who would be interested in working on this feature can get involved.
@killianmuldoon Thank you for asking. I'd like to prepare a more detailed document for this topic, but first, let me briefly give my opinions on the questions.
I'm currently considering only workload clusters.
I'm not sure I understand correctly what "standalone etcd" means, but I'm assuming etcd nodes that CAPI knows about. IMO it doesn't matter how the etcd nodes are running as long as they are accessible from the management cluster.
There could be several options, but I don't think it's strictly required. IMO, possible destinations could be the following:
I don't think it would be good to depend on something outside the Kubernetes ecosystem, so a persistent volume would be the better candidate for the destination.
/triage accepted
/assign
It'd be interesting to explore having this ability decoupled from KubeadmControlPlane, or made composable, so that more control plane implementations could share it.
Same problem here... It would be nice if cluster-api provided a native feature for etcd snapshots (which in my opinion is absolutely crucial! This is why I can't believe this was never thought of...). We actually tried to implement it ourselves, but weren't very successful, since the etcd pod lacks pretty much everything (we can do an etcdctl snapshot save in the pod, but can't obtain the file since kubectl cp complains about missing tar...). I couldn't find any helpful documentation on the topic, either. Is there something existing? There has to be a way to get snapshots from those pods, I guess...
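For reference, a rough sketch of what that in-pod workflow could look like on a kubeadm-provisioned control plane; the pod name is hypothetical and the TLS paths are just the kubeadm defaults. Streaming the file over exec stdout is one way around the missing-tar problem with kubectl cp (worth verifying the snapshot afterwards, e.g. with etcdctl snapshot status):

```sh
# Hypothetical pod name; cert paths are the kubeadm defaults.
ETCD_POD=etcd-my-control-plane-node-1

# Take the snapshot inside the etcd pod (etcdctl ships in the etcd image).
kubectl -n kube-system exec "$ETCD_POD" -- etcdctl snapshot save /tmp/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# kubectl cp needs tar inside the image; streaming over exec stdout does not.
kubectl -n kube-system exec "$ETCD_POD" -- cat /tmp/snapshot.db > snapshot.db
```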
Yeah, that broke a while back. I've been working around it by snapshotting to a host mount, then pulling it off the host. Less than ideal...
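Something along these lines, assuming the snapshot was written to a directory that kubeadm hostPath-mounts into the etcd pod (e.g. /var/lib/etcd) and that the node is reachable over SSH; user, node name, and paths are illustrative:

```sh
# Pull the snapshot straight off the node instead of going through kubectl cp.
scp user@my-control-plane-node-1:/var/lib/etcd/snapshot.db ./snapshot.db

# Or stream it through SSH if scp permissions are awkward.
ssh user@my-control-plane-node-1 'sudo cat /var/lib/etcd/snapshot.db' > snapshot.db
```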
We currently create another pod, separate from the etcd pods, that uses an image with the required utilities (etcdctl, aws-cli, etc.) to create snapshots and upload them to S3-compatible storage. It could be much easier than pulling them from the pod's local storage, but yes, I think it would be nice if CAPI etcd snapshot support didn't require any external storage.
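For illustration, roughly what such a utility pod might run; the member address, cert mount paths, bucket name, and image contents are all assumptions, not an existing CAPI feature:

```sh
# The pod image is assumed to bundle etcdctl and the aws CLI, with the etcd
# client certs mounted at /certs and a writable /backup volume.
SNAPSHOT="/backup/snapshot-$(date +%Y%m%d-%H%M%S).db"

etcdctl snapshot save "$SNAPSHOT" \
  --endpoints="https://${ETCD_MEMBER_IP}:2379" \
  --cacert=/certs/etcd-ca.crt \
  --cert=/certs/etcd-client.crt \
  --key=/certs/etcd-client.key

# Works with any S3-compatible storage; add --endpoint-url for non-AWS targets.
aws s3 cp "$SNAPSHOT" "s3://my-etcd-backups/${CLUSTER_NAME}/"
```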
Adding kind/proposal because I think we should figure out if and how to make this work with different bootstrap/control-plane providers (which are ultimately responsible for defining how etcd and the API server are run), and/or if and how to make this work with different types of storage (which may or may not be related to the infrastructure provider in use). /kind proposal
/priority backlog
The Cluster API project currently lacks enough active contributors to adequately respond to all issues and PRs. Also, most probably this should fall under SIG etcd / the ongoing discussion about a community-maintained etcd-operator. /close
@fabriziopandini: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
User Story
As an operator, I'd like to be able to take and restore etcd snapshots through Cluster API (KubeadmControlPlane).
Detailed Description
Etcd snapshot and restore operations are usually crucial for administrators. We can achieve them by using community-provided operators (e.g., etcd-operator, which is already archived though...) or etcdctl directly. However, restore tasks should sometimes be considered part of the cluster lifecycle, since they require stopping/starting kube-apiserver before/after restoring. It would be nice to provide the etcd snapshot/restore functionalities on the CAPI side so that we can easily maintain them.
(I couldn't find any discussions related to this except for #7399, so I filed this topic as a new issue. Please let me know if there are any places where we already have this kind of discussion.)
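To make the lifecycle aspect concrete, a rough sketch of the manual restore flow on a single kubeadm control plane node is shown below. Paths are the kubeadm defaults; a multi-member cluster additionally needs the per-member --name/--initial-cluster/--initial-advertise-peer-urls flags. This orchestration is exactly what would be nice to have on the CAPI side:

```sh
# 1. Stop the API server and etcd by moving their static pod manifests aside.
mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/
mv /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/

# 2. Restore the snapshot into a fresh data directory.
etcdctl snapshot restore /backup/snapshot.db --data-dir /var/lib/etcd-restored

# 3. Swap the restored data into place.
mv /var/lib/etcd /var/lib/etcd.old
mv /var/lib/etcd-restored /var/lib/etcd

# 4. Bring etcd and then the API server back by restoring the manifests.
mv /etc/kubernetes/etcd.yaml /etc/kubernetes/manifests/
mv /etc/kubernetes/kube-apiserver.yaml /etc/kubernetes/manifests/
```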
Anything else you would like to add:
(TBD)
Related Issues/PRs
/kind feature