
Consistent metadata propagation behaviour #5240

Closed
sbueringer opened this issue Sep 15, 2021 · 34 comments · Fixed by #5365 or #6935
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
Milestone

Comments

@sbueringer
Member

sbueringer commented Sep 15, 2021

I'm not entirely sure how it should be, but let's discuss in this issue how we want metadata to propagate and then adjust the implementation accordingly.

Right now the metadata is propagated as follows (from => to):

KubeadmControlPlane

.labels =>
.annotations =>
.spec.machineTemplate.metadata.labels => Machine.labels, InfraMachine.labels, KubeadmConfig.labels
.spec.machineTemplate.metadata.annotations => Machine.annotations, InfraMachine.annotations, KubeadmConfig.annotations

MachineDeployment

.labels =>
.annotations => MS.annotations
.spec.template.metadata.labels => MS.labels, MS.spec.template.metadata.labels
.spec.template.metadata.annotations => MS.spec.template.metadata.annotations

MachineSet

.labels =>
.annotations =>
.spec.template.metadata.labels => Machine.labels, InfraMachine.labels, KubeadmConfig.labels
.spec.template.metadata.annotations => Machine.annotations, InfraMachine.annotations
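The template-to-object steps above are essentially a map merge in which template values win on conflict. A minimal Go sketch of that merge (hypothetical `propagate` helper, not the actual Cluster API controller code):

```go
package main

import "fmt"

// propagate copies labels or annotations from a template's metadata onto a
// target object's existing metadata, with template values winning on conflict.
// Hypothetical helper for illustration only.
func propagate(template, target map[string]string) map[string]string {
	out := make(map[string]string, len(target)+len(template))
	for k, v := range target {
		out[k] = v
	}
	for k, v := range template {
		out[k] = v // template values win on conflict
	}
	return out
}

func main() {
	// e.g. MS.spec.template.metadata.labels merged onto existing Machine.labels
	machineLabels := propagate(
		map[string]string{"node-role": "worker"},
		map[string]string{"cluster.x-k8s.io/cluster-name": "my-cluster"},
	)
	fmt.Println(machineLabels["node-role"], machineLabels["cluster.x-k8s.io/cluster-name"])
}
```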

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Sep 15, 2021
@vincepri
Member

/milestone Next

Is this a bug?

@k8s-ci-robot k8s-ci-robot added this to the Next milestone Sep 30, 2021
@vincepri
Member

/priority important-soon

@k8s-ci-robot k8s-ci-robot added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Sep 30, 2021
@sbueringer
Member Author

sbueringer commented Sep 30, 2021

/milestone Next

Is this a bug?

@vincepri Depends on what we want the behavior to be. I'm not sure what we want, but the current behaviour seems a bit inconsistent. I'm not sure if the expected behavior is specified anywhere.

@schrej
Member

schrej commented Sep 30, 2021

We're relying on the .spec.template.metadata for MachineDeployment -> MachineSet -> Machine/InfraMachine and KubeadmControlPlane -> Machine/InfraMachine, so we would definitely like to keep that behavior.

In my opinion it would also be good to extend MachineDeployment -> MachineSet to also copy over labels. Otherwise there is no way to set labels on those MachineSets when using Deployments.

cc @MaxRink

@sbueringer I think DockerMachine should be InfraMachine for the KubeadmControlPlane, shouldn't it?

@sbueringer
Member Author

sbueringer commented Sep 30, 2021

@sbueringer I think DockerMachine should be InfraMachine for the KubeadmControlPlane, shouldn't it?
Yes, thx, fixed.

My opinion on the situation:

KCP => lgtm

MD.annotations => MS.annotations

I think it's strange to propagate top-level MD.annotations to MS.annotations

MD.spec.template.metadata.labels => MS.labels, MS.spec.template.metadata.labels
MD.spec.template.metadata.annotations => MS.spec.template.metadata.annotations

I think we should also propagate MD.spec.template.metadata.annotations to MS.annotations

(I would rather propagate MD.spec.template.metadata.{labels,annotations} than top-level MD.{labels,annotations}. I don't think top-level metadata should be propagated at all)

MS.spec.template.metadata.labels => Machine.labels, InfraMachine.labels, KubeadmConfig.labels
MS.spec.template.metadata.annotations => Machine.annotations, InfraMachine.annotations

Seems more consistent to me to also propagate MS.spec.template.metadata.annotations to KubeadmConfig.annotations

@schrej
Member

schrej commented Sep 30, 2021

I think we should also propagate MD.spec.template.metadata.annotations to MS.annotations
Seems more consistent to me to also propagate MS.spec.template.metadata.annotations to KubeadmConfig.annotations

I think we should always either propagate both, annotations and labels, or neither. It's not very intuitive (nor logical) if only one is propagated.

@sbueringer
Member Author

sbueringer commented Sep 30, 2021

I think we should also propagate MD.spec.template.metadata.annotations to MS.annotations
Seems more consistent to me to also propagate MS.spec.template.metadata.annotations to KubeadmConfig.annotations

I think we should always either propagate both, annotations and labels, or neither. It's not very intuitive (nor logical) if only one is propagated.

100% agree. Basically my comment comes down to: "let's always propagate both and let's stop propagating top-level MD annotations"

@randomvariable
Member

randomvariable commented Sep 30, 2021

I agree with sorting this out. @sedefsavas and I were looking at this the other day and were pretty confused about what gets propagated and what doesn't.

@fabriziopandini
Member

+1 for consistency
however, for MachineDeployments IMO top level label/annotations should go to MS, template label/annotations should go to machines.

@randomvariable
Member

Discussing with @schrej, @vincepri and @sbueringer in Slack, the conclusion we've reached is that:

MDs should behave like appsv1/Deployment
MachineSets and KCP should behave like appsv1/ReplicaSets (or appsv1/StatefulSet in the case of KCP).

In which case, the behaviour is, where d = appsv1/deployment and r = replicaset

d.annotations => r.annotations
d.labels => nil
d.spec.template.annotations => r.spec.template.annotations
d.spec.template.labels => r.labels & r.spec.template.labels

This is in https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/sync.go#L146-L150 and https://github.com/kubernetes/kubernetes/blob/38746752af6d0ec955f4511a727da37347da3b92/pkg/controller/deployment/util/deployment_util.go#L232 with a comment here: kubernetes/kubernetes#92896 (comment)

From this, it would appear the outstanding task is to propagate MS.spec.template.metadata.annotations to KubeadmConfig.meta.annotations if we want to be consistent.
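That appsv1/Deployment-style propagation can be sketched with plain structs (hypothetical types for illustration; the real deployment controller additionally filters revision/apply annotations):

```go
package main

import "fmt"

// meta holds the two metadata maps we care about here.
type meta struct {
	Labels      map[string]string
	Annotations map[string]string
}

// object pairs top-level metadata with template metadata,
// standing in for a Deployment (d) or ReplicaSet (r).
type object struct {
	Meta     meta
	Template meta
}

// newReplicaSet applies the propagation rules summarised above:
// d.annotations => r.annotations; d.labels => dropped;
// d.spec.template.labels => r.labels and r.spec.template.labels;
// d.spec.template.annotations => r.spec.template.annotations.
func newReplicaSet(d object) object {
	return object{
		Meta: meta{
			Labels:      d.Template.Labels, // top-level labels are NOT copied
			Annotations: d.Meta.Annotations,
		},
		Template: d.Template,
	}
}

func main() {
	d := object{
		Meta:     meta{Labels: map[string]string{"team": "a"}, Annotations: map[string]string{"note": "x"}},
		Template: meta{Labels: map[string]string{"app": "web"}, Annotations: map[string]string{"tpl": "y"}},
	}
	r := newReplicaSet(d)
	fmt.Printf("r.labels=%v r.annotations=%v\n", r.Meta.Labels, r.Meta.Annotations)
}
```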

@vincepri
Member

/milestone v1.0

@k8s-ci-robot k8s-ci-robot modified the milestones: Next, v1.0 Sep 30, 2021
@randomvariable
Member

There is an outstanding question about InfraMachineTemplate.Spec.Template.Meta and KubeadmConfigTemplate.Spec.Template.Meta. Should all providers implement this, what is the merge behaviour, and who should win when there's a conflict?

@sbueringer
Member Author

sbueringer commented Sep 30, 2021

/reopen

The current implementation just takes everything in .spec.template of InfraMachineTemplate.Spec.Template.Meta and KubeadmConfigTemplate.Spec.Template.Meta and overwrites it with propagated metadata:

func GenerateTemplate(in *GenerateTemplateInput) (*unstructured.Unstructured, error) {

Neither KubeadmConfigTemplate nor DockerMachineTemplate has .Spec.Template.Meta fields
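The difference between the overwrite behaviour described above and a merge can be sketched as follows (hypothetical helpers, assuming the metadata is reduced to plain string maps):

```go
package main

import "fmt"

// overwriteMeta models the current behaviour described above: the propagated
// metadata simply replaces whatever the template carried.
// Hypothetical helper for illustration only.
func overwriteMeta(template, propagated map[string]string) map[string]string {
	_ = template // template values are discarded entirely
	return propagated
}

// mergeMeta models the alternative: template values are kept unless the
// propagated metadata overrides the same key.
func mergeMeta(template, propagated map[string]string) map[string]string {
	out := make(map[string]string, len(template)+len(propagated))
	for k, v := range template {
		out[k] = v
	}
	for k, v := range propagated {
		out[k] = v
	}
	return out
}

func main() {
	tpl := map[string]string{"from-template": "set-by-user"}
	prop := map[string]string{"from-machineset": "propagated"}
	fmt.Println(len(overwriteMeta(tpl, prop)), len(mergeMeta(tpl, prop)))
}
```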

@k8s-ci-robot k8s-ci-robot reopened this Sep 30, 2021
@k8s-ci-robot
Contributor

@sbueringer: Reopened this issue.

In response to this:

/reopen

The current implementation just takes everything in .spec.template of InfraMachineTemplate.Spec.Template.Meta and KubeadmConfigTemplate.Spec.Template.Meta and overwrites it with propagated metadata:

func GenerateTemplate(in *GenerateTemplateInput) (*unstructured.Unstructured, error) {

Neither KubeadmConfigTemplate nor DockerMachineTemplate has .Spec.Template.Meta fields

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@randomvariable
Member

The same is true for the other infra providers, so we should add something to the v1beta1 docs?

@sbueringer
Member Author

The same is true for the other infra providers, so we should add something to the v1beta1 docs?

I wonder if/where we have guidance on how the templates should be implemented.

@randomvariable
Member

I wonder if/where we have guidance on how the templates should be implemented.

I'll have a look and knock something up.

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 5, 2022
@killianmuldoon
Contributor

/assign

I'll take a look at closing out the last couple of issues related to this.

pydctw pushed a commit to pydctw/cluster-api-provider-gcp that referenced this issue Mar 21, 2022
v1alpha4 <-> v1beta1 conversions were failing and never tested.

As part of the PR
- Add a missing gcpclustertemplate_conversion.go file
- Add conversion test to make sure conversions work
- Add ObjectMeta to GCPClusterTemplateResource and GCPMachineTemplateResource.
  See kubernetes-sigs/cluster-api#5240
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sbueringer
Member Author

/reopen
/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot reopened this Apr 6, 2022
@k8s-ci-robot
Contributor

@sbueringer: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 6, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 5, 2022
@killianmuldoon
Contributor

/remove-lifecycle-state

@schrej
Member

schrej commented Jul 20, 2022

/reopen
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot reopened this Jul 20, 2022
@k8s-ci-robot
Contributor

@schrej: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle stale

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 20, 2022
@killianmuldoon
Contributor

@schrej Is there something specific outstanding on this issue we should tackle?

@schrej
Member

schrej commented Jul 20, 2022

Oh sorry, just saw your misspelled lifecycle command and missed the PR.

/close

@k8s-ci-robot
Contributor

@schrej: Closing this issue.

In response to this:

Oh sorry, just saw your misspelled lifecycle command and missed the PR.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@killianmuldoon
Contributor

I didn't even notice that until now 😆
