Revamp controller manager integration #14
Ideally, the CRD should be versioned such that both versions of the CRD can be deployed at the same time. This permits old CRs to linger in the system while they are migrated to the new CRD format, if such upgrade support is needed. A plan like this will require coordination with the SPIRE Controller Manager in the SPIRE project, so that the Controller Manager checks for all supported CRD versions and shuts down if any are missing; when they are present, it watches all of the supported CRD versions. The primary reason the CRD is not being updated for new versions is likely that the Object Identifier ( spec.versions / metadata.name / metadata.namespace ) does not change when the CRD's spec changes. When Helm checks whether the CRD was previously deployed, it finds a CRD with the same Object Identifier already present, so it determines that an upgrade is not necessary. In our scenario the old and new versions both have the same Object Identifier, but their spec sections differ. I do not believe Helm performs a deep comparison between the two items, and the spec section is what wraps the custom object (CR) definitions, which is where the relevant changes we want to apply are located. |
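To make the "both versions deployed at the same time" idea concrete, here is a minimal sketch of a single CRD object serving two versions at once. The group and kind match the ClusterSPIFFEID CRD, but the v1beta1 entry and the permissive schemas are assumptions purely for illustration, not the actual definition:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterspiffeids.spire.spiffe.io
spec:
  group: spire.spiffe.io
  names:
    kind: ClusterSPIFFEID
    listKind: ClusterSPIFFEIDList
    plural: clusterspiffeids
    singular: clusterspiffeid
  scope: Cluster
  versions:
    - name: v1alpha1          # existing version; old CRs keep working while they are migrated
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v1beta1           # hypothetical new version the controller manager would also watch
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true

With a layout like this, exactly one version is marked as the storage version, and the controller manager would need to watch and serve both.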
When using helm3 with the crds dir, Helm only checks whether a CRD with the same metadata.name as each object in that dir already exists, and if not, applies that CRD to the cluster. Very basic logic. So any update to those objects won't be applied at all. This was by design. Details at https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations |
The easiest solution would be to move the CRDs out to their own chart, named something like spire-crds, and put them in its templates directory. Everyone would install/upgrade the crds chart first, then the main spire one (or use the raw CRD manifests instead of the crds chart). If we wanted it to still work as just one chart to install/upgrade, there are a couple of options, but both require Helm hooks to function, so we would likely still need to offer the two-chart option for users who can't run Helm hooks. The two main variants would be:
|
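For reference, a minimal sketch of the separate spire-crds chart described above, assuming hypothetical file names and versions rather than any published layout:

# Suggested layout (illustrative):
#
#   spire-crds/
#     Chart.yaml
#     templates/
#       spire.spiffe.io_clusterspiffeids.yaml   # full CRD manifest lives here, not under crds/
#
# spire-crds/Chart.yaml
apiVersion: v2
name: spire-crds
description: CRDs required by the spire chart; install or upgrade this chart before the main spire chart
type: application
version: 0.1.0

Because the CRD manifests sit under templates/ rather than crds/, a normal helm upgrade of the spire-crds release would apply spec changes to them like any other resource.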
3 possible options
We can approach this in 3 PRs.
ArgoCD examples:

Option 1 - Prometheus

{{- if .Values.kubePrometheusStack.enabled -}}
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kube-prometheus-stack-crds
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "2"
argocd.argoproj.io/manifest-generate-paths: .
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
server: {{ .Values.kubePrometheusStack.destination.server }}
namespace: {{ .Values.kubePrometheusStack.destination.namespace }}
project: platform-monitoring
source:
repoURL: https://github.com/prometheus-community/helm-charts
targetRevision: kube-prometheus-stack-{{ .Values.kubePrometheusStack.source.targetRevision }}
path: charts/kube-prometheus-stack/crds/
directory:
recurse: true
syncPolicy:
syncOptions:
- CreateNamespace=true
- Replace=true
automated:
prune: true
selfHeal: true
{{- end -}}
---
{{- if .Values.kubePrometheusStack.enabled -}}
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kube-prometheus-stack
namespace: argocd
annotations:
argocd.argoproj.io/sync-wave: "3"
argocd.argoproj.io/manifest-generate-paths: /charts/monitoring
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
server: {{ .Values.kubePrometheusStack.destination.server }}
namespace: {{ .Values.kubePrometheusStack.destination.namespace }}
project: platform-monitoring
source:
repoURL: https://prometheus-community.github.io/helm-charts
targetRevision: {{ .Values.kubePrometheusStack.source.targetRevision }}
helm:
skipCrds: true
values: {{ .Files.Get "config/kube-prometheus-stack-values.yaml" | toYaml | indent 6 }}
parameters: []
chart: kube-prometheus-stack
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
prune: true
selfHeal: true
ignoreDifferences:
- group: admissionregistration.k8s.io
kind: MutatingWebhookConfiguration
name: kps-admission
jqPathExpressions:
- .webhooks[] | select(.name == "prometheusrulemutate.monitoring.coreos.com") | .failurePolicy
- group: admissionregistration.k8s.io
kind: ValidatingWebhookConfiguration
name: kps-admission
jqPathExpressions:
- .webhooks[] | select(.name == "prometheusrulemutate.monitoring.coreos.com") | .failurePolicy
{{- end -}}

Option 1 - Kyverno

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kyverno
resources:
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_admissionreports.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_backgroundscanreports.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_cleanuppolicies.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_clusteradmissionreports.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_clusterbackgroundscanreports.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_clustercleanuppolicies.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_clusterpolicies.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_generaterequests.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_policies.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_policyexceptions.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/kyverno.io_updaterequests.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/wgpolicyk8s.io_clusterpolicyreports.yaml
- https://github.com/kyverno/kyverno/releases/download/v1.9.4/wgpolicyk8s.io_policyreports.yaml

This would be very similar in ArgoCD, except you won't recurse the files in the crds folder, but just install the Helm chart as is. All of the examples above are using an App of Apps pattern in our ArgoCD repo:
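To make that reference concrete, here is a hedged sketch of what a root App of Apps Application might look like; the repository URL, project, and path are placeholders rather than the actual repo (the path only echoes the manifest-generate-paths annotation used above):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-apps
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  project: default
  source:
    repoURL: https://example.com/our-argocd-repo.git   # placeholder repository
    targetRevision: main
    path: charts/monitoring   # chart that templates the Application objects shown above
  syncPolicy:
    automated:
      prune: true
      selfHeal: true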
In essence all charts mentioned here are just using Option 1, which gives us full control over CRDs without needing to request any custom approach from the chart maintainers. My preferred choice therefore would be Option 1, so we don't overcomplicate things with all kinds of edge cases we have to handle. |
@marcofranssen With your option one, we'd require users to kubectl apply the crds before upgrading the helm chart? |
@marcofranssen what do you think of my option 2? |
Reading through the discussion here brings up another issue for me: the actual clusterspiffeid. Should that be the separate chart? That's what we interact with more. The CRD itself seems to me to be more in the realm of the statefulset, configmaps, etc.: general plumbing we can hide. Is it possible to keep the CRD in the same chart as a template so it gets updated with everything else, and then split the clusterspiffeid definition off into a separate chart? Then we can version that and add new fields as necessary. |
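As a hedged sketch of that split, a small clusterspiffeid chart could template the CR itself (not the CRD). The values keys and naming below are assumptions, though spiffeIDTemplate and podSelector are real ClusterSPIFFEID fields:

apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
  name: {{ .Release.Name }}-default        # naming convention is an assumption
spec:
  spiffeIDTemplate: {{ .Values.spiffeIDTemplate | quote }}
  podSelector:
    matchLabels:
      {{- toYaml .Values.podSelector.matchLabels | nindent 6 }}

The CRD would stay in the main chart as a regular template, so it keeps upgrading with the rest of the plumbing, while this chart can be versioned and grow fields on its own.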
Yeah. That could be done. |
@marcofranssen What do you think of my proposal? |
It sounds like an interesting approach to try, although I don't fully understand what you are describing there; maybe you can give us a voice-over. |
What would be the idea behind versioning this? In the end we are trying to resolve a single apply-time upgrade path, x -> y or y -> z, meaning that after the chart upgrade has run, any previous versions don't matter anymore. This feels like overcomplicating what we are trying to achieve, or maybe I'm not following completely. |
Fixed in #8 |
The current controller manager integration has a number of issues:
Original issue: spiffe/helm-charts#427