operator: patch back in the checksum field into the flux crds #14265
Conversation
lgtm.
- path: patches/webhook_in_clusters.yaml
- path: patches/webhook_in_redpanda_consoles.yaml
- path: patches/webhook_in_cluster.redpanda.com_topics.yaml
NIT: This move breaks kubebuilder (I think). If in the future you want to add a new webhook configuration, it will end up here (before the +kubebuilder:scaffold:crdkustomizewebhookpatch line).
For anyone looking at this file later, it would be hard to tell whether patches were added manually or by kubebuilder.
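For context, kubebuilder scaffolding inserts new entries immediately above its marker comment, so hand-written patches placed before the marker get interleaved with generated ones. A rough sketch of the layout being discussed (file paths are illustrative, not taken from this PR):

```yaml
# config/crd/kustomization.yaml (illustrative layout)
patches:
# Entries above the marker are where kubebuilder inserts newly
# scaffolded webhook patches:
- path: patches/webhook_in_clusters.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch
# Keeping manually maintained patches separate from the scaffolded
# block makes it clear which entries the CLI owns.
```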
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

#+kubebuilder:scaffold:crdkustomizeresource
NIT: The same comment as with the patches. Kubebuilder can get confused if we later add a new resource via the kubebuilder CLI.
Oh, I know what happened. I did a kustomize edit fix and it reorganized. I'll open a PR to move these back where they belong.
# and the spec.versions.v1beta2.schema.openAPIV3Schema.properties.status.properties.includedArtifacts.items.properties.checksum property
# that was removed
- op: add
  path: /spec/versions/1/schema/openAPIV3Schema/properties/status/properties/artifact/properties/checksum
I'm afraid of the numbers like 0 or 1. If we change the order, we will end up in the same situation.
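The worry here is inherent to JSON Patch (RFC 6902): a numeric segment in a JSON Pointer selects an array *position*, not a named entry, so `/spec/versions/1/...` means "whatever version happens to be second". A minimal pure-Python sketch (the CRD data below is made up for illustration):

```python
# Minimal illustration of why numeric JSON Pointer segments are fragile:
# the index selects a position in the array, not a named version.

crd = {
    "spec": {
        "versions": [
            {"name": "v1beta1"},
            {"name": "v1beta2"},  # index 1 today
        ]
    }
}

def resolve(doc, pointer):
    """Walk a JSON Pointer like /spec/versions/1 and return the target node."""
    node = doc
    for part in pointer.strip("/").split("/"):
        if isinstance(node, list):
            node = node[int(part)]  # positional: breaks if order changes
        else:
            node = node[part]
    return node

print(resolve(crd, "/spec/versions/1")["name"])  # → v1beta2

# If upstream reorders (or removes) a version, the same pointer silently
# targets a different entry:
crd["spec"]["versions"].reverse()
print(resolve(crd, "/spec/versions/1")["name"])  # → v1beta1
```

That silent retargeting is exactly the failure mode being discussed: the patch still applies cleanly, just to the wrong version.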
Unfortunately, that's the choice the jsonpatch project made; we have no alternative. Luckily, kubebuilder builds the CRD with the versions in the same order. If upstream removes a version, however, that will break this.
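One way to fail loudly instead of silently when upstream reorders or removes a version is to verify the name at the index before applying the patch. This is only a sketch of that idea (the helper and data are hypothetical, not part of this PR):

```python
def checked_version_index(crd, index, expected_name):
    """Return index only if spec.versions[index].name matches expected_name,
    so a reordered or removed upstream version raises an error instead of
    letting a positional patch land on the wrong entry."""
    versions = crd["spec"]["versions"]
    if index >= len(versions) or versions[index]["name"] != expected_name:
        raise ValueError(
            f"expected versions[{index}] to be {expected_name!r}, "
            f"got {[v['name'] for v in versions]}"
        )
    return index

# Hypothetical CRD stub with two versions:
crd = {"spec": {"versions": [{"name": "v1beta1"}, {"name": "v1beta2"}]}}
checked_version_index(crd, 1, "v1beta2")   # ok: index 1 really is v1beta2
# checked_version_index(crd, 1, "v2")      # would raise ValueError
```

A check like this could run in CI against the vendored Flux CRDs before regenerating the patched manifests.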
To unblock our users I'm giving an LGTM, but it's hard to maintain the Flux CRDs if we have a different maintenance policy.
The checksum was removed from the previous definition by fluxcd/source-controller#1056. This is a breaking change for end users, and we can expect more like it.
The comment at https://github.com/fluxcd/source-controller/pull/1056/files#r1150474148 shows it was done by design.
/backport v23.2.x
Yes it was, but it doesn't break anything to have a field defined in the CRD that you don't use.
Yes, and every time an old version is changed we need to double-check. If a customer already has the Flux CRDs installed in their Kubernetes cluster, then we have a breaking change. We are competing with the upstream Flux CRDs.
Backports Required
Release Notes