Single node cluster (controlplane) in-place upgrade #7415
Comments
Thanks for creating this @furkatgofurov7! Let's try to define and separate things into different areas. Re: single server: per your use case, let's call this a "single Node cluster". A single server could be a ControlPlane implementation that runs the control plane as pods in a management cluster, without necessarily exposing a Node of any kind. However, your description suggests that the infrastructure running the kube-apiserver operates as a Node itself.
Starting with workers seems reasonable to get things going. However, we should eventually be able to come up with a proposal where ControlPlane implementations can take advantage of a common in-place upgrade logic. Could you add your user story to the gdoc so we keep collecting info there?
Thanks for the quick reply @enxebre.
Yes, it is bare metal under the hood backing the Node itself. Edit: I have changed the title of the issue to better suit this use case, thanks for the suggestion.
Absolutely, I will add that as a separate use case to the user story in the proposal.
Agreed, keeping control plane implementation needs in mind during the worker in-place design makes sense to me.
Has anyone demonstrated that an in-place upgrade of a single control plane node is even possible in a way that is supported upstream? (For example, node drain is a required step of an upgrade. Is that a problem?) Before we ask Cluster API to support this use case, I think we either have to demonstrate that it is possible, or understand (and address) the upstream issues that make it impossible.
/triage accepted
This issue has not been updated in over 1 year and should be re-triaged.
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/priority backlog
@g-gaston
Yeah, it is!
/triage accepted
User Story
As a developer/user/operator, I have a use case where single-server installations are desirable (e.g. edge, base stations, and small regional clouds). To clarify: by single-server we mean a cluster with a single Kubernetes control plane node and no worker nodes, which runs workloads on that node (with the control plane taints handled accordingly).
I would then like to perform an in-place upgrade of that single control plane node, i.e. an upgrade strategy that upgrades a node in place without the old node being removed or any new node being created.
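To make the "properly tainted" part concrete, here is a minimal sketch of a workload that can run on the single control plane node. It assumes the standard kubeadm control plane taint; the Deployment name and image are placeholders, not part of this issue:

```yaml
# Hypothetical workload scheduled onto the single control plane node.
# kubeadm taints control plane nodes with
# node-role.kubernetes.io/control-plane:NoSchedule, so the pod needs a
# matching toleration to be scheduled there.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-workload            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-workload
  template:
    metadata:
      labels:
        app: edge-workload
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: app
          image: registry.example.com/edge-app:1.0   # placeholder image
```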
Detailed Description
Ultimately, we would like to know what problems/challenges will be faced during the in-place upgrade of a single-server installation. We can assume that service interruption is acceptable, meaning that workloads running on the control plane being down is bearable. However, we assume that removing the one and only control plane node from the cluster during the upgrade would break the whole process, since subsequent commands to the API server would fail.
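As a rough illustration of where the single-node case deviates from the usual flow, a conventional kubeadm-style in-place node upgrade looks roughly like the following (node name and version are placeholders). On a single control plane node, the drain step evicts the very workloads the cluster exists to run, and any step that restarts the API server leaves subsequent kubectl commands with nothing to talk to:

```
# Sketch of a conventional in-place node upgrade, applied to the
# one-and-only control plane node. Placeholders: my-single-node, v1.x.y.

# 1. Cordon and drain: on a single-node cluster this evicts all workloads
#    (the API server itself is a static pod and keeps running).
kubectl drain my-single-node --ignore-daemonsets --delete-emptydir-data

# 2. Upgrade the control plane components in place.
kubeadm upgrade apply v1.x.y

# 3. Upgrade kubelet/kubectl packages, then restart the kubelet; while the
#    kubelet and static pods restart, API calls may fail transiently.
systemctl restart kubelet

# 4. Uncordon so workloads can be scheduled onto the node again.
kubectl uncordon my-single-node
```

This is a sketch of the existing manual procedure, not a proposed Cluster API mechanism; the open question in this issue is which of these steps an automated in-place strategy would need to relax or reorder for the single-node case.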
Anything else you would like to add:
In-place upgrades have been discussed in the past, and a draft proposal (https://docs.google.com/document/d/1odiy0k_KZngdhidN_ll9Mb8WgGUR9iMFU7NfYRZKCvA/edit?pli=1) is up. However, it seems to cover only the upgrade of worker nodes (honoring maxUnavailable).
In any case, we would like to know whether our use case can be covered in the same proposal, or whether there are already ways (I have really low hopes for that) to achieve the use case above with the current state of Cluster API.
/kind feature