
✨ feat: Add resource field-scoped fields #878

Closed

Conversation

TheSpiritXIII

This feature adds scope-conditional fields: struct fields that are only included in the generated schema when the scope of the top-level CRD matches.

Why?

While working on Google Managed Service for Prometheus, we identified a need to have namespaced and cluster-scoped versions of the same structs.

How

This solution adds a new marker for fields that should appear conditionally, depending on the scope of the top-level CRD.

For example, we may have a struct:

type SecretSelector struct {
	// These fields appear regardless of scope.
	Name string `json:"name"`
	Key  string `json:"key"`

	// This field only appears for cluster-scoped objects; namespaced
	// objects can instead use the namespace of the top-level object.
	// +kubebuilder:field:scope=Cluster
	Namespace string `json:"namespace"`
}

For a namespaced CRD, the namespace field is not generated as part of the OpenAPI specification, because the field is marked as cluster-scoped.

The opposite is true as well: a field marked as namespaced is not generated for cluster-scoped resources.

Defaults apply too. A CRD is considered namespaced by default, so cluster-scoped fields will not appear unless the CRD explicitly declares Cluster scope.
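
To make this concrete, here is a minimal sketch of two hypothetical CRD roots sharing the selector above. The Monitoring and ClusterMonitoring names and the MonitoringSpec wrapper are invented for illustration; only the +kubebuilder markers are real controller-tools markers.

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MonitoringSpec embeds the shared selector; both CRDs below reuse it.
type MonitoringSpec struct {
	Secret SecretSelector `json:"secret"`
}

// Monitoring is namespaced by default, so the generated schema for its
// SecretSelector omits the namespace field.
// +kubebuilder:object:root=true
type Monitoring struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec MonitoringSpec `json:"spec"`
}

// ClusterMonitoring is explicitly cluster-scoped, so the generated schema
// keeps the namespace field.
// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster
type ClusterMonitoring struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec MonitoringSpec `json:"spec"`
}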

Implementation

The implementation is admittedly a bit hacky: a special property is inserted at schema parse time and removed later, when the resource is processed. I couldn't find a better way to do this; the comments in the code explain why it was done this way.
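
A rough sketch of the idea follows. The names fieldScopeProperty and pruneScopedFields, and the trick of carrying the scope in the Description field, are invented for illustration and differ from the real code; only the ApplyToSchema shape follows controller-tools' SchemaMarker interface.

package markers

import (
	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// fieldScopeProperty is a sentinel key (name invented for this sketch) that
// smuggles the declared field scope through the schema until the top-level
// resource scope is known.
const fieldScopeProperty = "x-kubebuilder-field-scope"

// FieldScope backs the +kubebuilder:field:scope=<Scope> marker.
type FieldScope string

// ApplyToSchema runs at schema parse time, before the CRD's scope is known,
// so all it can do is record the declared scope as a placeholder property.
func (s FieldScope) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Properties == nil {
		schema.Properties = map[string]apiext.JSONSchemaProps{}
	}
	schema.Properties[fieldScopeProperty] = apiext.JSONSchemaProps{
		Description: string(s), // stash the scope; stripped again below
	}
	return nil
}

// pruneScopedFields walks the finished schema once the resource's scope is
// known ("Namespaced" or "Cluster"), dropping fields whose declared scope
// does not match and removing the sentinel property itself.
func pruneScopedFields(schema *apiext.JSONSchemaProps, crdScope string) {
	for name, prop := range schema.Properties {
		if sentinel, ok := prop.Properties[fieldScopeProperty]; ok {
			delete(prop.Properties, fieldScopeProperty)
			if sentinel.Description != crdScope {
				// Scope mismatch: omit the field entirely. (A complete
				// version would also prune schema.Required.)
				delete(schema.Properties, name)
				continue
			}
		}
		pruneScopedFields(&prop, crdScope) // recurse into nested objects
		schema.Properties[name] = prop
	}
}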

@k8s-ci-robot added the do-not-merge/work-in-progress and cncf-cla: yes labels on Jan 28, 2024
@k8s-ci-robot
Contributor

Welcome @TheSpiritXIII!

It looks like this is your first PR to kubernetes-sigs/controller-tools 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/controller-tools has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @TheSpiritXIII. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test label on Jan 28, 2024
@k8s-ci-robot added the size/L label on Jan 28, 2024
@TheSpiritXIII marked this pull request as ready for review on January 29, 2024 00:47
@k8s-ci-robot added the size/XL label and removed the do-not-merge/work-in-progress and size/L labels on Jan 29, 2024
@k8s-ci-robot added the needs-rebase label on Feb 5, 2024
@k8s-ci-robot removed the needs-rebase label on Feb 12, 2024
@bwplotka left a comment

+1 for this one (not a maintainer, but I reviewed the code)

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bwplotka, TheSpiritXIII
Once this PR has been reviewed and has the lgtm label, please assign sbueringer for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jun 25, 2024
@JoelSpeed
Contributor

Not a maintainer here, but I do API reviews. In general, it is recommended not to recycle structs between types unless they are closely related. For example, imagine sharing a struct for a reference type, and then someone adds a new field to it. You are unaware of this change, but they have now added a new field to your API, which is not desirable.

So the use case here is to apply this to closely aligned structs, but I'm wondering how we encourage people to use this feature only for that use case 🤔

I am curious, can you expand on the need for a cluster and namespaced version of the same CRD? Does the cluster scoped version apply everywhere in the cluster and the namespaced version apply only to specific namespaces? Is the idea that different actors can configure the cluster wide policy but namespaced versions override that?

@TheSpiritXIII
Author

I am curious, can you expand on the need for a cluster and namespaced version of the same CRD?

I work on Google Cloud Managed Service for Prometheus. It's similar to prometheus-operator but slightly more opinionated.

When you want to capture metrics for pods, you can use either PodMonitoring or ClusterPodMonitoring. Both let you specify pod selectors, but the former requires a namespace and only captures pods in that namespace. In general, we recommend using PodMonitoring for proper multi-tenancy design.

Does the cluster scoped version apply everywhere in the cluster and the namespaced version apply only to specific namespaces?

Exactly.

Is the idea that different actors can configure the cluster wide policy but namespaced versions override that?

The namespace-scoped version is the more restrictive one. There's no concept of overriding in my use case: if you have both, you get duplicate metrics.

In general it is recommended not to recycle structs between types unless they are closely related.

I totally agree! In my case, they are closely related. I have deeply nested configurations which use a common type (similar to core's SecretKeySelector), and I need different fields depending on the top-level CRD scope: cluster-scoped resources can specify namespaces, but namespace-scoped resources cannot. This PR spares me the large amount of duplication I would otherwise need.
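
To give a sense of scale, without the marker the fork cascades through every struct between the CRD root and the selector, roughly like this (type names are illustrative):

// Each scope needs its own copy of the selector...
type SecretSelector struct {
	Name string `json:"name"`
	Key  string `json:"key"`
}

type ClusterSecretSelector struct {
	Name string `json:"name"`
	Key  string `json:"key"`

	// Only the cluster-scoped variant may point into another namespace.
	Namespace string `json:"namespace"`
}

// ...and every struct that (transitively) embeds the selector must be
// duplicated too, solely so it can reference the right selector type.
type HTTPClientConfig struct {
	Auth SecretSelector `json:"auth"`
}

type ClusterHTTPClientConfig struct {
	Auth ClusterSecretSelector `json:"auth"`
}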

I'd love to see more opinions on this. It's possible that this particular use-case is a one-off. :)

Thanks!

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to the /close comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
