
Support annotations on StorageClass to ignore during StorageProfile reconciliation #3166

Open
solacelost opened this issue Apr 6, 2024 · 10 comments

Comments

@solacelost (Contributor)

Is your feature request related to a problem? Please describe:
Yes. In clusters where upstream rook-ceph is configured to use RGW with ObjectBucketClaims, metrics continually fire and trigger an alert about an empty StorageProfile. The OBC setup requires creating a StorageClass that isn't used for PVC allocation and that I wouldn't try to use for a KubeVirt VM.

Describe the solution you'd like:
Instead of a compiled-in hashmap of StorageClasses that are ignored for metrics calculation, I think it would be useful to move to an approach where either:

  • The compiled-in list is complemented by annotations on StorageClasses that mark them to be ignored by CDI
  • The StorageClasses in OCS/ODF deploy with such an annotation already in place, and the explicit hashmap is phased out
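A minimal sketch of option 1, assuming a hypothetical annotation key (`cdi.kubevirt.io/ignore-storageprofile` is an invented name for illustration, not an existing CDI API):

```go
package main

import "fmt"

// Hypothetical annotation key; the real name would be decided in a CDI PR.
const ignoreAnnotation = "cdi.kubevirt.io/ignore-storageprofile"

// ignoredByAnnotation reports whether a StorageClass has opted out of
// StorageProfile reconciliation via the proposed annotation.
func ignoredByAnnotation(annotations map[string]string) bool {
	return annotations[ignoreAnnotation] == "true"
}

func main() {
	// An OBC-only StorageClass opts out via the annotation.
	fmt.Println(ignoredByAnnotation(map[string]string{ignoreAnnotation: "true"}))
	// An ordinary PVC-capable class without the annotation is still reconciled.
	fmt.Println(ignoredByAnnotation(nil))
}
```

The check would run alongside (not replace) the existing compiled-in list, so current behavior is unchanged for classes without the annotation.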

Describe alternatives you've considered:
I currently have a silence for the alert in my OKD cluster, but it still affects the console Overview for the KubeVirt component/plugin. It's just annoying.

I also think an optional field on the StorageProfile CRD, added without a breaking API change, would work. But since CDI owns the StorageProfile resources, I'd rather not have to sideload/patch the spec if we can react to an annotation and get the "correct" behavior by default.

Additional context:
I think option 1 is fine for now, and if we can get OCS/ODF to play along, a migration path to option 2 lets the old code get dropped eventually. I understand that the current state works well for Red Hat customers, but I'd like it to be flexible and well-adjusted by default for everyone else, too.

I can tinker some to get a PR in to support this if nobody wants to address it, but I'm not sure when I'll have the free time to test it before submitting. Might be a few weeks.

@alromeros (Collaborator)

@arnongilboa maybe #3129 already addressed this issue?

@arnongilboa (Collaborator)

> @arnongilboa maybe #3129 already addressed this issue?

No, but I like the annotation suggested here.

@aglitke (Member) commented Apr 8, 2024

@solacelost, could you share the exact provisioner strings for the relevant storage classes? In addition to adding support for an annotation, it would be nice to get these strings added to CDI's list of unsupported provisioners, just to spare others from having to deal with this for rook-ceph storage.

@solacelost (Contributor, Author)

Sure, the provisioner is rook-ceph.ceph.rook.io/bucket and it uses parameters to select the CephObjectStore that the user has created, so the provisioner itself should remain static. I could PR it to the existing map if you like, or you can if you'd rather. I would still like the annotation to support flexibility, as the COSI enhancements have been languishing for some time.
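The compiled-in approach could look roughly like the following. This is an illustrative stand-in, not CDI's actual code; the real variable and package names in CDI may differ:

```go
package main

import "fmt"

// Illustrative stand-in for CDI's compiled-in map of provisioners that
// cannot back PVCs.
var unsupportedProvisioners = map[string]struct{}{
	// OBC-only provisioner from upstream rook-ceph; the CephObjectStore is
	// selected via StorageClass parameters, so this string stays static.
	"rook-ceph.ceph.rook.io/bucket": {},
}

// skipReconcile reports whether StorageProfile reconciliation would skip a
// class based solely on its provisioner string.
func skipReconcile(provisioner string) bool {
	_, ok := unsupportedProvisioners[provisioner]
	return ok
}

func main() {
	fmt.Println(skipReconcile("rook-ceph.ceph.rook.io/bucket"))
	fmt.Println(skipReconcile("rbd.csi.ceph.com"))
}
```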

@arnongilboa (Collaborator) commented Apr 10, 2024

@solacelost I would add the annotation support as part of another PR I'm working on: https://issues.redhat.com/browse/CNV-37744

@kubevirt-bot (Contributor)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot added the lifecycle/stale label on Jul 9, 2024
@akalenyu (Collaborator)

@solacelost can we close this? We kind of gave up on the annotation addition, but are open to other ideas.

@solacelost (Contributor, Author)

> we kind of gave up on the annotation addition, but open to other ideas

Any particular reason? I saw some discussion that there must be some other method to determine whether the StorageClass is suitable for creating a PVC.

I believe, but don't know for sure, that this feedback is provided by the CSI provisioner rather than being described in the StorageClass API directly.

I think an annotation or modification of the StorageProfile spec to support manually indicating that a class is unsuitable still makes the most sense, as having CDI query the provisioner is more complex than I think it's worth.

@kubevirt-bot (Contributor)

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 14, 2024
@alromeros (Collaborator)

/remove-lifecycle rotten

@kubevirt-bot removed the lifecycle/rotten label on Aug 26, 2024