ClusterClass and managed topologies #4430
Comments
I'm not hip with the latest on the cluster autoscaler, but it would seem natural to me to declare intent and limits at this level. Feel free to argue with me, I'm just spitballing.
A couple of questions here: …
Overall this looks like a good thing to do from a consumer's perspective.
The primary idea behind this proposal is to improve the UX and reusability around cluster creation. Currently the Cluster CRD has too much detail about the ControlPlane and infra definition for the cluster, so a Cluster CRD cannot be reused to create multiple clusters of the same type (which is basically what creating clusters of similar shapes means). Around enhancing the UX, the …
As an end-user, I think this looks like it could be helpful for us. We would need each role to be able to have its own infrastructure machine template and KubeadmConfigTemplate.
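A rough sketch of what per-role templates could look like, borrowing the ClusterClass shape quoted later in the thread; the `bootstrapRef` field and all template names here are hypothetical, not part of the issue:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: ClusterClass
metadata:
  name: per-role-example
  namespace: blah
spec:
  workerNodeTypes:
  - type: worker-linux
    # each role points at its own machine template...
    infrastructureRef: linux-machine-template
    # ...and its own bootstrap config (hypothetical field)
    bootstrapRef: linux-kubeadm-config-template
  - type: worker-windows
    infrastructureRef: windows-machine-template
    bootstrapRef: windows-kubeadm-config-template
```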
I like the idea proposed, looking forward to seeing the proposal 👍🏻 One thing that comes to mind is whether the …
I'd expect at this level of consumption the user should not care. They simply want to change the version and the cluster just upgrades.
+1 — The expectation is that MachineDeployment classes are already there in the form of infra and bootstrap templates. The …
Is ClusterClass itself immutable? As in, if I want to change a parameter in cc, should I just clone a new one and update that parameter? nvm, just re-read it; it's immutable.
That was my initial thinking, at least to start with. As soon as we get into mutable territory, things get much more complicated. Each template reference would have to stay the same, although folks might want to mutate the templates themselves (which could be the next logical step right after the initial implementation).
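A minimal sketch of that clone-and-repoint flow, assuming the ClusterClass/Cluster shapes sketched later in the thread; all names are illustrative:

```yaml
# ClusterClass is immutable, so changing a parameter means cloning it
# under a new name with the updated reference...
apiVersion: cluster.x-k8s.io/v1alpha4
kind: ClusterClass
metadata:
  name: example-2            # clone of example-1
  namespace: blah
spec:
  workerNodeTypes:
  - type: worker-linux
    infrastructureRef: linux-template-2   # the one changed reference
---
# ...and repointing the Cluster at the new class
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: azure-1
  namespace: eng
spec:
  class:
    name: "example-2"        # was "example-1"
    namespace: "blah"
```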
I think that from a UX perspective there are two things we might want to figure out: …
Does this proposal also aim to solve #3203? If so, how would we handle updates to the OS image in the InfraMachine template, required for k8s version upgrades in a lot of scenarios? cc @fiunchinho
Assuming you are talking about the total cluster limits (e.g. memory, CPU, nodes, etc.), I agree, and I think it pushes us towards the notion of having some sort of controller (or similar) that could automate deployment of the cluster autoscaler.
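Purely to make the spitballing concrete, a hypothetical shape for declaring such intent and limits on the managed topology; none of these fields exist in the proposal:

```yaml
# Hypothetical only: autoscaling intent and total limits per worker pool
spec:
  managed:
    workers:
    - role: worker-linux
      autoscaling:
        minReplicas: 3
        maxReplicas: 50
      limits:
        cpu: "400"       # total cores across the pool
        memory: 1600Gi
```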
Following the discussion we had during the office hours today, handling the version directly in the … cc: @CecileRobertMichon, that should solve the issue of updates to OS images during k8s version upgrades.
/assign
Makes sense for … With regards to …
```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: ClusterClass
metadata:
  name: example-1
  namespace: blah
spec:
  infrastructure:
    clusterRef:
  workerNodeTypes:
  - type: worker-linux
    # optional
    infrastructureRef: linux-template-1
  - type: worker-windows
    # optional
    infrastructureRef: windows-template-1
  controlPlaneRef:
    # optional
    infrastructureRef: linux-template-cp
```
```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: azure-1
  namespace: eng
spec:
  class:
    name: "example-1"
    namespace: "blah"
  version: v1.19.1
  template: linux-template-1
  managed:
    controlPlane:
      replicas: 3
    workers:
    - role: worker-linux
      replicas: 10
    - role: worker-windows
      replicas: 10
      template: windows-template-1
```

The idea here is that the …
Thanks @srm09, that makes a lot of sense.
@srm09 When are the … But I think it's okay for now. The idea overall is clear and we can specify the details in a proposal.
This is interesting. Some questions to better understand goals and benefits: … related to extracting the infra and controlPlane refs into a different CRD?
I think having something like the … We have a somewhat similar approach within our company, based on a custom CRD that holds …
We create a CR for a specific setup (k8s/etcd/OS versions) and link it to a Cluster using an annotation. The annotation holds the name of the CR. A change of the annotation to use a different CR may trigger an upgrade of the … From this proposal I'm not sure about the …
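Roughly, the pattern described above might look like the following; the `Release` CRD, its API group, and the annotation key are internal to their setup and purely illustrative:

```yaml
# Illustrative: a custom CR pinning a tested combination of versions
apiVersion: example.company.io/v1
kind: Release
metadata:
  name: release-1-19-1
spec:
  kubernetesVersion: v1.19.1
  etcdVersion: 3.4.13
  osImage: ubuntu-2004-20210401
---
# The Cluster is linked to the Release via an annotation; pointing the
# annotation at a different Release may trigger an upgrade
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: azure-1
  namespace: eng
  annotations:
    example.company.io/release: release-1-19-1
```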
@fiunchinho …
I'd really like to see the detailed proposal before I provide too much feedback, but one initial concern I have is that there doesn't seem to be support at the Cluster level for an upgrade similar to what is provided by … There was a discussion about adding similar support to MachineDeployments/MachineSets/MachinePools; it would be great to see it covered in this proposal as well.
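For illustration, rollout controls on the managed topology might mirror the existing MachineDeployment strategy fields; nothing like this is defined in the issue, so treat it as a strawman:

```yaml
# Strawman: per-pool rollout controls during a managed upgrade
spec:
  managed:
    workers:
    - role: worker-linux
      replicas: 10
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
```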
ClusterClass abstracts away not only Machines, but also control plane configuration and cluster infrastructure. We don't have support for that today.
Apologies if I didn't understand the question. The ClusterClass abstraction by itself doesn't really provide much value. The overall arc of this issue and proposal is to ease getting started / day 0 operations and get to a Kubernetes cluster by creating the smallest amount of configuration possible. If infrastructure providers can provide custom ClusterClass(es) in an inventory of some sort, … It's a stepping stone which is ultimately going to guide users to learn the more advanced Cluster API objects over time, rather than overwhelming new users from the start.
The top-level Kubernetes version would take precedence over all the inner version fields; the objects would be set or updated appropriately.
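As a sketch of that precedence, assuming the managed-topology shape from the earlier example (how the controller propagates the value is an assumption, not settled design):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: azure-1
  namespace: eng
spec:
  version: v1.20.1   # bumping this single field drives the upgrade
  managed:
    controlPlane:
      replicas: 3    # controller rolls the control plane to v1.20.1
    workers:
    - role: worker-linux
      replicas: 10   # ...then each managed worker pool
```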
Huge +1; potentially we should still add it to each single object (and maybe call it …
Would this issue also be tackling the lifecycle process mentioned above in #3203? This would be a big step forward for us: being able to manage cluster lifecycle declaratively in git and have an operator manage the order in which KCP/MachineDeployments are upgraded, for example. Also, having some way to update machine templates without having to create new ones, and then clean up after they are updated, would be a big plus too.
@smcaine Yes, in some ways it does, but only for the managed topologies (control plane + workers defined as part of the Cluster object). If users create Machine, MachineSet, or MachineDeployment objects outside of that, we won't be upgrading those versions.
In the past few months we've been talking at community meetings about the possibility of adding two new concepts to our APIs: ClusterClass and a more useful Cluster object that lets folks describe a fully functional cluster.
ClusterClass
A ClusterClass CRD would be introduced to provide easy stamping of clusters of similar shapes. The early version of this Class would contain an immutable set of references to templating objects; see the ClusterClass YAML quoted earlier in the thread for an example.
That is mostly an example; details of how the reference system should work are left to the proposal.
Cluster
The Cluster object has been used to this day mostly to tie things together in a logical composition of resources. In this proposal we want to improve the day-zero user experience by providing new capabilities as part of the Cluster object. More details are left to the proposal itself, although we'd like to allow a Cluster to be created with a resource as simple as the Cluster YAML example quoted earlier in the thread.
In that example, the Cluster controller is expected to take all the information present in the associated ClusterClass (in `spec.class`) and create managed resources: a control plane with 3 replicas and one pool of worker nodes with 10 replicas.
The other important bit here is the `spec.version` field. This new field, for the first iteration, manages the Kubernetes version for a managed cluster topology. When the version changes, for example in the event of an upgrade, the controller is in charge of declaring the new versions for both control planes and worker pools.
/kind proposal
/area api
/milestone v0.4