Cluster-api takeover of existing Kubeadm clusters #7776
Related to #7573. The big issue with trying to adopt non-CAPI clusters into CAPI is representing existing infrastructure components as CAPI objects. This makes the space of possible solutions very large, and ever larger as more infrastructure providers are considered. There is some prior art on how to approach this problem for a specific set of clusters in a KubeCon talk here. Generally it's hard to conceive of a one-size-fits-all solution to this problem, but if you have some ideas I'm sure the community would love to hear them!
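For concreteness, one existing building block is the externally-managed infrastructure contract. A minimal, hand-written representation of an existing cluster might look something like the sketch below (AWS chosen arbitrarily; names, region, and address are illustrative):

```sh
# A minimal sketch (names, region, and address are illustrative). The
# cluster.x-k8s.io/managed-by annotation tells the infrastructure provider
# to skip reconciling the underlying infrastructure, so these objects only
# *describe* what already exists instead of creating anything.
kubectl apply -f - <<'EOF'
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: adopted-cluster          # illustrative name
  namespace: default
spec:
  controlPlaneEndpoint:
    host: 203.0.113.10           # the existing cluster's API server
    port: 6443
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: adopted-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: adopted-cluster
  namespace: default
  annotations:
    cluster.x-k8s.io/managed-by: "external"   # don't create/delete AWS resources
spec:
  region: us-east-1              # illustrative
EOF
```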
Not sure if I understood your point clearly, but the approach I tried is to make CAPI machines join the existing non-CAPI cluster and then remove the old infrastructure from the cluster completely. This leaves us with a cluster that has machines launched only by CAPI, and the ability to manage the cluster using CAPI. I have created a draft proposal for it as well; please take a look and let me know if this is something we can take to the community and discuss further. Also, here is a link to a demo on the same topic.
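To make the removal step concrete, here is a rough sketch (node names are illustrative) of what retiring the old infrastructure looks like once the CAPI machines have joined:

```sh
# Sketch of the removal step (node names are illustrative): once the
# CAPI-provisioned machines have joined the existing cluster and are Ready,
# drain and delete the old, non-CAPI nodes one by one.
kubectl get nodes                             # confirm the CAPI nodes are Ready
kubectl cordon old-worker-1                   # stop new pods landing on the old node
kubectl drain old-worker-1 \
  --ignore-daemonsets --delete-emptydir-data  # evict running workloads
kubectl delete node old-worker-1              # remove the node from the cluster
# repeat for each remaining old node, control plane nodes last
```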
That sounds very interesting! @fabriziopandini you might be interested in this. It would be great if you could bring this solution up for discussion at a CAPI community meeting, maybe some time in the new year. I can only imagine there'd be a lot of interest and input on this solution.
I have been discussing this very same idea at KubeCon with some folks. @AmitSahastra do you have some idea about the UX you would like to see for this?
This issue has not been updated in over 1 year and should be re-triaged. For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/ /remove-triage accepted
/priority backlog
The Cluster API project currently lacks enough active contributors to adequately respond to all issues and PRs. This issue has not been updated since 2022, and the idea I discussed at KubeCon and demoed at the office hours did not get traction. In any case, there are talks about how some CAPI users solved this problem.
@fabriziopandini: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@fabriziopandini I have spent some time on this again recently with AWS IaaS/vSphere clusters and would like to get some discussion going around it. Do you have some idea about the UX you would like to see for this?
Have you already done some due diligence to figure out what is required to make this happen?
I have some ideas but I need to write them down and I'm not sure when I will get to them...
I plan to record a demo on cluster takeover and share it to get more feedback.
@fabriziopandini Can we reopen this discussion? I would like to request the same.
@AmitSahastra you can open a new issue describing at a high level what you have in mind to try to get the discussion going again, and eventually bring up the topic at the office hours. I cannot guarantee that this time the discussion will make more progress than this issue (or older issues on the same topic), but that's the way to go. WRT my old demo, the recording is here; also https://www.youtube.com/watch?v=KzYV-fJ_wH0 might be interesting.
User Story
Today, if we want to manage clusters via cluster-api, the only way is to create a new cluster with `clusterctl init`. But if I have an existing cluster, there is no way to manage that cluster via cluster-api.
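For reference, a sketch of today's create-from-scratch flow (infrastructure provider and version are illustrative):

```sh
# Today's flow: clusterctl can only stand up clusters from scratch.
# (Infrastructure provider and Kubernetes version are illustrative.)
clusterctl init --infrastructure aws            # install CAPI + a provider
clusterctl generate cluster my-cluster \
  --kubernetes-version v1.28.0 > cluster.yaml   # template a brand-new cluster
kubectl apply -f cluster.yaml                   # there is no "adopt" verb
```

Detailed Description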
With the cluster takeover option, we aim to address this use case by introducing a new cluster type or annotation that differentiates a takeover cluster from a new cluster launch. Instead of going through the cluster init operation, CAPI goes through a cluster join operation to add new nodes, eventually draining the old nodes and migrating all applications and cluster components onto the new, CAPI-managed machines.
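As a hypothetical sketch of that idea (the annotation below does not exist in CAPI today; it is only meant to illustrate the proposal):

```sh
# Hypothetical sketch: a takeover annotation (NOT part of CAPI today) marks
# a Cluster so that machines run "kubeadm join" against the existing control
# plane instead of initializing a new cluster.
kubectl apply -f - <<'EOF'
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: takeover-cluster                # illustrative name
  namespace: default
  annotations:
    cluster.x-k8s.io/takeover: "true"   # hypothetical annotation from this proposal
spec:
  controlPlaneEndpoint:
    host: 203.0.113.10                  # the existing cluster's API server
    port: 6443
EOF
```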
Anything else you would like to add:
/kind feature