✨ Propagate labels and annotations from ClusterClass and Cluster topology to KCP and MD #7088
Conversation
@MaxFedotov: This issue is currently awaiting triage. If CAPI contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Welcome @MaxFedotov!
Hi @MaxFedotov. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label.
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
/ok-to-test
Force-pushed from e1370a1 to 66aacfb (Compare).
@killianmuldoon, hi! Is there anything I need to do in order to proceed with this PR?
I think you just need some reviews now to see if people are happy with the implementation and direction.
@MaxFedotov I'll take a look soon
@MaxFedotov thanks for pushing forward this work, this is a great start!
I will try to figure out the second part of this effort so we can be sure that this change does not create problems for users.
```diff
@@ -101,7 +101,7 @@ type Topology struct {

 // ControlPlaneTopology specifies the parameters for the control plane nodes in the cluster.
 type ControlPlaneTopology struct {
-	// Metadata is the metadata applied to the machines of the ControlPlane.
+	// Metadata is the metadata applied to ControlPlane and the machines of the ControlPlane.
```
I'm still mulling over this, but if we are going to propagate labels/annotations to objects down the chain, I think we should propagate them to the bootstrap and infrastructure templates as well; this also implies keeping such labels and annotations in sync (TBD how, given that this should work whether or not a ClusterClass is being used).
Do you consider it a part of this PR, or is it better to create a separate issue/PR to track this activity?
```go
machineLabels := mergeMap(topologyMetadata.Labels, clusterClassMetadata.Labels)
machineAnnotations := mergeMap(topologyMetadata.Annotations, clusterClassMetadata.Annotations)
```
nit: should this be controlPlaneLabels and controlPlaneAnnotations, so we can compute machineLabels/machineAnnotations down in the for loop?
q: should we apply the cluster label and the topology-owned label to the CP as well (I don't remember if this is already done somewhere else)
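For context on the merge order being discussed: the precedence can be sketched as follows. This is a hypothetical reimplementation of the `mergeMap` helper (not the actual CAPI code), assuming that on key conflicts the first (leftmost) map wins, so topology metadata overrides ClusterClass metadata.

```go
package main

import "fmt"

// mergeMap merges maps left to right; on key conflicts the earlier
// (leftmost) map wins. Hypothetical sketch of the helper discussed above.
func mergeMap(maps ...map[string]string) map[string]string {
	merged := map[string]string{}
	// Iterate in reverse so entries from earlier maps overwrite later ones.
	for i := len(maps) - 1; i >= 0; i-- {
		for k, v := range maps[i] {
			merged[k] = v
		}
	}
	return merged
}

func main() {
	topologyLabels := map[string]string{"env": "prod"}
	clusterClassLabels := map[string]string{"env": "default", "team": "infra"}
	// Topology wins for "env"; "team" is inherited from the ClusterClass.
	controlPlaneLabels := mergeMap(topologyLabels, clusterClassLabels)
	fmt.Println(controlPlaneLabels["env"], controlPlaneLabels["team"]) // prints: prod infra
}
```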
Will rename it.
Regarding labels, they were added here:
cluster-api/internal/controllers/topology/cluster/desired_state.go
Lines 733 to 740 in 66aacfb
```go
func templateToObject(in templateToInput) (*unstructured.Unstructured, error) {
	// NOTE: The cluster label is added at creation time so this object could be read by the ClusterTopology
	// controller immediately after creation, even before other controllers are going to add the label (if missing).
	labels := map[string]string{}
	labels[clusterv1.ClusterLabelName] = in.cluster.Name
	labels[clusterv1.ClusterTopologyOwnedLabel] = ""
	templateLabels := mergeMap(labels, in.labels)
```
But I will move all labels-related code to the computeControlPlane func, to keep it all in one place.
```go
machineLabels := mergeMap(machineDeploymentTopology.Metadata.Labels, machineDeploymentBlueprint.Metadata.Labels)
machineAnnotations := mergeMap(machineDeploymentTopology.Metadata.Annotations, machineDeploymentBlueprint.Metadata.Annotations)
```
similar comments as above: machineDeploymentLabels/annotations + should we add the cluster and topology-owner annotation?
Will rename it too.
All code related to the cluster and topology-owned labels and annotations is a few lines down, together with the selectors:
cluster-api/internal/controllers/topology/cluster/desired_state.go
Lines 566 to 594 in 66aacfb
```go
// Apply Labels
// NOTE: On top of all the labels applied to managed objects we are applying the ClusterTopologyMachineDeploymentLabel
// keeping track of the MachineDeployment name from the Topology; this will be used to identify the object in next reconcile loops.
labels := map[string]string{}
labels[clusterv1.ClusterLabelName] = s.Current.Cluster.Name
labels[clusterv1.ClusterTopologyOwnedLabel] = ""
labels[clusterv1.ClusterTopologyMachineDeploymentLabelName] = machineDeploymentTopology.Name
desiredMachineDeploymentObj.SetLabels(mergeMap(labels, machineLabels))

// Apply Annotations
desiredMachineDeploymentObj.SetAnnotations(machineAnnotations)

// Set the selector with the subset of labels identifying controlled machines.
// NOTE: this prevents the web hook to add cluster.x-k8s.io/deployment-name label, that is
// redundant for managed MachineDeployments given that we already have topology.cluster.x-k8s.io/deployment-name.
desiredMachineDeploymentObj.Spec.Selector.MatchLabels = map[string]string{}
desiredMachineDeploymentObj.Spec.Selector.MatchLabels[clusterv1.ClusterLabelName] = s.Current.Cluster.Name
desiredMachineDeploymentObj.Spec.Selector.MatchLabels[clusterv1.ClusterTopologyOwnedLabel] = ""
desiredMachineDeploymentObj.Spec.Selector.MatchLabels[clusterv1.ClusterTopologyMachineDeploymentLabelName] = machineDeploymentTopology.Name

// Also set the labels in .spec.template.labels so that they are propagated to
// MachineSet.labels and MachineSet.spec.template.labels and thus to Machine.labels.
// Note: the labels in MachineSet are used to properly cleanup templates when the MachineSet is deleted.
if desiredMachineDeploymentObj.Spec.Template.Labels == nil {
	desiredMachineDeploymentObj.Spec.Template.Labels = map[string]string{}
}
desiredMachineDeploymentObj.Spec.Template.Labels[clusterv1.ClusterLabelName] = s.Current.Cluster.Name
desiredMachineDeploymentObj.Spec.Template.Labels[clusterv1.ClusterTopologyOwnedLabel] = ""
desiredMachineDeploymentObj.Spec.Template.Labels[clusterv1.ClusterTopologyMachineDeploymentLabelName] = machineDeploymentTopology.Name
```
```diff
@@ -724,14 +737,17 @@ func templateToObject(in templateToInput) (*unstructured.Unstructured, error) {
 	labels[clusterv1.ClusterLabelName] = in.cluster.Name
 	labels[clusterv1.ClusterTopologyOwnedLabel] = ""

 	templateLabels := mergeMap(labels, in.labels)
```
Given that we are accepting labels and annotations in input, I think we should drop adding labels in this func so there will be a unique point where we define all the labels for the new objects
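The suggestion above could look roughly like this. It is a minimal self-contained sketch with simplified, hypothetical types (the real function works on *unstructured.Unstructured* and the label keys shown are illustrative): the caller computes all labels once, and templateToObject only applies what it receives.

```go
package main

import "fmt"

// templateToInput is a simplified stand-in for the real CAPI type (hypothetical sketch).
type templateToInput struct {
	labels map[string]string
}

// templateToObject no longer injects any labels itself; it only applies
// whatever the caller computed, so there is a single place where all the
// labels for new objects are defined.
func templateToObject(in templateToInput) map[string]string {
	obj := map[string]string{}
	for k, v := range in.labels {
		obj[k] = v
	}
	return obj
}

func main() {
	// The one call site that defines every label for the new object
	// (illustrative label keys and values).
	labels := map[string]string{
		"cluster.x-k8s.io/cluster-name":   "my-cluster",
		"topology.cluster.x-k8s.io/owned": "",
	}
	obj := templateToObject(templateToInput{labels: labels})
	fmt.Println(len(obj)) // prints: 2
}
```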
@MaxFedotov: The following test failed:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
@MaxFedotov: PR needs rebase.
@MaxFedotov do you have some time to push this through? This is pretty important in our case for autoscaling :)
We are currently waiting on the corresponding proposal: #7331. I would also like to see this merged, but we have to get consensus on and merge the proposal first.
@MaxFedotov the proposal is now merged. Proposal here.
The proposal contains all the details for the questions on this PR.
Do you have bandwidth to continue working on this?
I am happy to pick up the pending work here, in case you do not have bandwidth 🙂
```go
labels := map[string]string{}
labels[clusterv1.ClusterLabelName] = in.cluster.Name
labels[clusterv1.ClusterTopologyOwnedLabel] = ""
```
How about keeping this and doing a mergeMap with in.Labels, then using the result in the GenerateTemplate function? This way we won't have to duplicate this everywhere templateToObject is used.
Hi @ykakarap! I would be grateful if you take this on. Unfortunately i don't have free time right now to continue with it :(
No problem. Thanks for letting me know. I will take it up from here. :)
As per #7088 (comment), opened PR #7917 to continue this work.
@sbueringer: Closed this PR.
What this PR does / why we need it:
It is possible to specify labels and annotations in the cluster topology spec, but they are propagated only to `Machines` created by `MachineDeployment` and `KubeadmControlPlane`. This prevents users from setting annotations for `cluster-autoscaler` in the Cluster topology spec. This PR allows propagating labels and annotations from the ClusterClass and Cluster topology spec not only to `Machines`, but also to `KubeadmControlPlane` and `MachineDeployments`.

Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #7006
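For illustration, the kind of topology metadata this PR is about could be specified roughly as below. This is a hedged sketch, not taken from the PR: the cluster, class, and machine deployment names are made up, and the autoscaler annotation keys are shown as an example of the use case described above.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster          # hypothetical names throughout
spec:
  topology:
    class: my-cluster-class
    version: v1.24.0
    controlPlane:
      # With this PR, these labels/annotations would also be applied
      # to the KubeadmControlPlane object, not only to its Machines.
      metadata:
        labels:
          team: infra
    workers:
      machineDeployments:
      - class: default-worker
        name: md-0
        # With this PR, these annotations would also land on the
        # MachineDeployment itself, which is what cluster-autoscaler reads.
        metadata:
          annotations:
            cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
            cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
```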