[kube-prometheus-stack] Can't upgrade CRD from 0.50.0 to 0.52.0: metadata.annotations: Too long: must have at most 262144 bytes #1500

Closed
antoineozenne opened this issue Nov 10, 2021 · 91 comments
Labels
bug Something isn't working

Comments

@antoineozenne

Describe the bug (a clear and concise description of what the bug is).

When applying the new 0.52.0 CRD, I get this result:

customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com configured
The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured

What's your helm version?

version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}

What's your kubectl version?

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:21:44Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:16:25Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

Which chart?

kube-prometheus-stack

What's the chart version?

20.0.1

What happened?

When applying the new 0.52.0 CRD, I get this result:

customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com configured
The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured

So I can't upgrade the CRD prometheuses.monitoring.coreos.com.

What you expected to happen?

I expected the CRDs to upgrade without errors.

How to reproduce it?

Starting from a kube-prometheus-stack 19.2.3 instance with the 0.50.0 CRDs installed, upgrade the chart to 20.0.1 and then apply the new 0.52.0 CRDs. (Yes, it was a mistake on my part to upgrade the chart before upgrading the CRDs, but I don't think that is the problem here, because I read the commits and the chart seems to be backward-compatible.)

Enter the changed values of values.yaml?

NONE

Enter the command that you execute and failing/misfunctioning.

kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml

Anything else we need to know?

No response

@antoineozenne added the bug (Something isn't working) label on Nov 10, 2021
@agentq15

We have the same issue

@skarj
Contributor

skarj commented Nov 10, 2021

Probably it should be changed in the README to:

kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml

@antoineozenne
Author

Yes, I just tried it; replacing the resources works.

@irizzant
Contributor

Replacing the CRD is a workaround, but if you try to apply the same CRD again, you get the same error:

kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
Warning: resource customresourcedefinitions/prometheuses.monitoring.coreos.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

This is because the metadata annotation is too long.

@vojtechvelkjop

vojtechvelkjop commented Nov 14, 2021

Same issue during installation via Pulumi Helm or via kubectl apply -f <output from helm template>.
It is strange that helm install works.

@mprzygrodzki

Got the same issue while deploying kube-prometheus-stack via Helm through ArgoCD:

CustomResourceDefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

Works fine when applying via Helm from CLI.

@azerbe

azerbe commented Nov 15, 2021

The reason is the "last applied state" annotation in metadata (kubectl.kubernetes.io/last-applied-configuration). If you replace or install the CRD, there is no previous configuration, so it works; but if you re-apply it, that annotation gets too big.
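
To see how close you are to the limit, you can check the size of that annotation on an installed CRD (a quick check, assuming the CRD was client-side applied at least once; the limit that triggers the error is 262144 bytes):

kubectl get crd prometheuses.monitoring.coreos.com \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' \
  | wc -c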

@vojtechvelkjop

However, the mentioned workarounds are not applicable to a Pulumi deployment.

@mosheavni
Contributor

This still marks the ArgoCD app as Sync Failed.
It is not a real functional issue at the moment, but it is really annoying to see a red application.
Could this be fixed?

@irizzant
Contributor

For ArgoCD you can add the annotation argocd.argoproj.io/sync-options: Replace=true to the CRD

@mprzygrodzki

For ArgoCD you can add the annotation argocd.argoproj.io/sync-options: Replace=true to the CRD

I can confirm that in this way it works.

@ebcFlagman

For ArgoCD you can add the annotation argocd.argoproj.io/sync-options: Replace=true to the CRD

@irizzant How did you achieve this? I can set it manually once in the live manifest before syncing and then it works, but the next time it fails again. Is it not possible to add this annotation permanently?

@irizzant
Contributor

I use Helm + Kustomize (https://github.com/argoproj/argocd-example-apps/blob/master/plugins/kustomized-helm/README.md) to render the manifests and then apply the annotation with Kustomize using patchesStrategicMerge.
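
For reference, a minimal sketch of that overlay (the file names here are illustrative, not taken from the linked example):

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml                    # rendered output of `helm template`
patchesStrategicMerge:
  - prometheuses-crd-patch.yaml

# prometheuses-crd-patch.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: prometheuses.monitoring.coreos.com
  annotations:
    argocd.argoproj.io/sync-options: Replace=true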

@mprzygrodzki

@ebcFlagman I've added this annotation to the CRD template in the Helm chart, and the problem doesn't exist anymore.

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.6.2
    argocd.argoproj.io/sync-options: Replace=true
  creationTimestamp: null
  name: prometheuses.monitoring.coreos.com
...

@apetrovYa

apetrovYa commented Nov 15, 2021

In case someone did the same thing I did (delete the CRD): the only way I managed to re-create it was by issuing the command below:

kubectl apply \
-f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml \
--force-conflicts=true \
--server-side

@roeelandesman

roeelandesman commented Nov 15, 2021

The issue with using replace instead of apply for the CRD deployment is that it will break on net-new deployments. It seems to me that the only real solution would be to shorten the metadata.annotations field for the CRDs upstream:

  Error from server (NotFound): error when replacing "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.52/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml": customresourcedefinitions.apiextensions.k8s.io "alertmanagerconfigs.monitoring.coreos.com" not found

Edit:
I found a workaround: kubectl replace -f <resource-name> --force, which issues a REST DELETE and then a POST instead of a PUT. See here: https://stackoverflow.com/a/62067419
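
For the prometheuses CRD from this thread, that looks like the following (note that --force deletes and re-creates the object, with the caveat about CRD deletion discussed in the next comment):

kubectl replace --force \
  -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml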

@sathieu
Contributor

sathieu commented Nov 16, 2021

I found a workaround in using kubectl replace -f --force which uses a REST DELETE then POST instead of a PUT. See here: https://stackoverflow.com/a/62067419

If you delete a CRD, you'll delete all resources of that kind, i.e. delete all Prometheus deployments.

What about:

url=https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl create -f $url || kubectl replace -f $url

sathieu added a commit to sathieu/helm-charts-prometheus-community that referenced this issue Nov 17, 2021
@sathieu
Contributor

sathieu commented Nov 17, 2021

See my proposed fix in #1510.

sathieu added a commit to sathieu/helm-charts-prometheus-community that referenced this issue Nov 17, 2021
@utkuozdemir

utkuozdemir commented Nov 19, 2021

I think replace is not a good option. The stack is used in a lot of places and installed in many different ways (kubectl, Helm, GitOps tools, and so on). Adding an exceptional step to the installation logic will cause a lot of friction in different environments. So, IMO, this needs to be fixed in a way that makes kubectl apply -f work again.

@sathieu
Contributor

sathieu commented Nov 19, 2021

@utkuozdemir The only alternative I see is shrinking the CRD itself. This needs to be done upstream.

And it looks like upstream is happy with kubectl update|create. See prometheus-operator/prometheus-operator#4349 (and related issues prometheus-operator/prometheus-operator#4367 prometheus-operator/prometheus-operator#4348 prometheus-operator/prometheus-operator#4355).

What about merging #1510 short term, and handling the shrink long term (with tests ensuring it doesn't fail again)?

@utkuozdemir

@sathieu I agree, it makes sense to handle it on the chart level until upstream fixes it.

@tringuyen-yw

@sathieu

If you delete a CRD, you'll delete all resources of that kind, i.e. delete all Prometheus deployments. What about:

url=https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl create -f $url || kubectl replace -f $url

The kubectl create will fail with "prometheuses.monitoring.coreos.com" already exists; only the kubectl replace will execute. Here is the output when I tried your suggestion:

Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml": customresourcedefinitions.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" already exists
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com replaced

Which is simply equivalent to:

kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml

On my Kubernetes 1.19.3 cluster, I successfully updated all the CRDs by using the commands suggested in the kube-prometheus-stack README (from-19x-to-20x), simply replacing kubectl apply -f with kubectl replace -f.

All the existing Custom Resources remain unchanged when their underlying CRD is updated.

@czhujer

czhujer commented Feb 8, 2022

Wow! Thanks for "workaround" @gw0 :)

@lknite

lknite commented Mar 7, 2022

Let's get someone assigned to this one.

Installing via ArgoCD with the Bitnami kube-prometheus latest and greatest and running into this. Looks like @gw0's workaround will work well, and I'll set it up, but it is a workaround because something is broken. Using the latest and greatest ArgoCD and Helm chart, though on a 1.18 cluster (the cluster version doesn't seem to be the core issue here).

@dirien

dirien commented Mar 8, 2022

Argo CD 2.3.0 is released; I hope the workaround of @gw0 will still work. I'm waiting for the Helm release of Argo CD to test it out.

@reefland

reefland commented Mar 9, 2022

Not sure if this helps; I came across this while investigating a different issue, but using the --server-side flag on kubectl (server-side apply, GA in k8s 1.22) seems to get around the 'Too long: must have at most 262144 bytes' message.

This has the problem as expected:

$ kubectl apply --dry-run=server -f bundle.yaml 
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created (server dry run)
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created (server dry run)
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created (server dry run)
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created (server dry run)
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created (server dry run)
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created (server dry run)
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created (server dry run)
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created (server dry run)
clusterrole.rbac.authorization.k8s.io/prometheus-operator created (server dry run)
deployment.apps/prometheus-operator created (server dry run)
serviceaccount/prometheus-operator created (server dry run)
service/prometheus-operator created (server dry run)
The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

Adding --server-side, the message goes away:

$ kubectl apply --dry-run=server -f bundle.yaml --server-side
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied (server dry run)
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied (server dry run)
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied (server dry run)
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied (server dry run)
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied (server dry run)
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied (server dry run)
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied (server dry run)
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied (server dry run)
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator serverside-applied (server dry run)
clusterrole.rbac.authorization.k8s.io/prometheus-operator serverside-applied (server dry run)
deployment.apps/prometheus-operator serverside-applied (server dry run)
serviceaccount/prometheus-operator serverside-applied (server dry run)
service/prometheus-operator serverside-applied (server dry run)

Running kubectl apply without the --dry-run also seems to work fine:

$ kubectl apply -f bundle.yaml --server-side
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator serverside-applied
clusterrole.rbac.authorization.k8s.io/prometheus-operator serverside-applied
deployment.apps/prometheus-operator serverside-applied
serviceaccount/prometheus-operator serverside-applied
service/prometheus-operator serverside-applied

I don't understand the ramifications of using this method yet, but it gets around the issue in my testing.

@cyrus-mc

I ran into this, and while there are solutions specific to the tool you are using (e.g. ArgoCD), I wanted a more generic solution. In the end I just removed all the descriptions from the larger CRDs to get them under the required size. This does limit the output of kubectl explain, but to me that is a small price to pay.

Simple conversion using yq: yq eval 'del(.. | .description?)' CRD_FILE

This will change creationTimestamp: null to creationTimestamp: {}, so just manually fix that and everything should work.

bilalshaikh42 added a commit to biosimulations/deployment that referenced this issue Mar 25, 2022
@orblazer

Argo CD 2.3.0 is released; I hope the workaround of @gw0 will still work. I'm waiting for the Helm release of Argo CD to test it out.

Hello,
I'm on v2.3.3+07ac038 and the problem is still present 😭

@dirien

dirien commented Apr 11, 2022

Argo CD 2.3.0 is released; I hope the workaround of @gw0 will still work. I'm waiting for the Helm release of Argo CD to test it out.

Hello, I'm on v2.3.3+07ac038 and the problem is still present 😭

Did you try the separate ArgoCD application? You can check the details here: https://blog.ediri.io/kube-prometheus-stack-and-argocd-23-how-to-remove-a-workaround

@orblazer

Argo CD 2.3.0 is released; I hope the workaround of @gw0 will still work. I'm waiting for the Helm release of Argo CD to test it out.

Hello, I'm on v2.3.3+07ac038 and the problem is still present 😭

Did you try the separate ArgoCD application? You can check the details here: https://blog.ediri.io/kube-prometheus-stack-and-argocd-23-how-to-remove-a-workaround

I tried to make it work with the kustomized-helm plugin but couldn't, so I finally went with your solution.

@mateimicu

For Pulumi, we did the following:

  1. create the CRDs manually and, as @cyrus-mc mentioned, remove the description fields from crd-prometheuses.yaml
  2. create the Helm chart normally with two changes:
    a) depend on the CRDs (dependsOn)
    b) skip the CRD rendering (skipCRDRendering)

A sample

import * as fs from "fs";
import * as path from "path";
import * as yaml from "js-yaml";      // assumed YAML parser for this snippet
import * as k8s from "@pulumi/kubernetes";

// Load each CRD manifest and register it as a Pulumi CustomResource.
const crds: k8s.apiextensions.CustomResource[] = [];
fs.readdirSync("prometheus_crd").forEach((file) => {
  const content: any = yaml.load(
    fs.readFileSync(`prometheus_crd/${file}`).toString()
  );
  if (path.extname(file) == ".yaml") {
    crds.push(
      new k8s.apiextensions.CustomResource(
        content.metadata.name,
        content as k8s.apiextensions.CustomResourceArgs
      )
    );
  }
});
new k8s.helm.v3.Chart(
  "helm-prom",
  {
    repo: "prometheus-community",
    chart: "kube-prometheus-stack",
    version: "35.2.0",
    skipCRDRendering: true,
  },
  {
    dependsOn: crds,
  }
);

@MPV

MPV commented May 17, 2022

Would the community be willing to maintain a separate helm chart in this repo just for the CRDs?

Because while the workaround below works, it would be nice to be able to let something like Renovate bump the chart versions of these dependencies together.

prometheus-operator/prometheus-operator#4439 (comment) 👇

If you use ArgoCD to install kube-prometheus-stack normally, it fails because ArgoCD uses kubectl apply in the background, which causes this "CRD too long" issue. If you manually recreate the CRDs, it occasionally works, but sooner or later ArgoCD self-healing will cause issues again.

If you use ArgoCD to install kube-prometheus-stack with Replace=true, it fails because it uses kubectl replace and PVC resources complain that they cannot be mutated.

The best option is probably to split the installation of kube-prometheus-stack into two ArgoCD apps:

  • an ArgoCD app just for installing the CRDs with Replace=true directly from Git with correct tag, snippet:
    repoURL: https://github.com/prometheus-community/helm-charts.git
    path: charts/kube-prometheus-stack/crds/
    targetRevision: kube-prometheus-stack-31.0.0
    directory:
      recurse: true
    syncOptions:
    - Replace=true
  • an ArgoCD app that installs the Helm chart with skipCrds=true (new feature in ArgoCD 2.3.0), snippet:
    repoURL: https://prometheus-community.github.io/helm-charts
    chart: kube-prometheus-stack
    targetRevision: 31.0.0
    helm:
      releaseName: ...
      skipCrds: true
      values: |
        ...
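
Putting the first of those two apps together, a rough sketch of the CRDs-only Application manifest (the application name, project, and destination are placeholders, not from the quoted comment):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack-crds   # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/prometheus-community/helm-charts.git
    path: charts/kube-prometheus-stack/crds/
    targetRevision: kube-prometheus-stack-31.0.0
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring              # placeholder
  syncPolicy:
    syncOptions:
      - Replace=true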

@samox73

samox73 commented May 20, 2022

For the Pulumi we did the following:

1. create the CRD's manually and as @cyrus-mc mentioned remove the `description` from `crd-prometheuses.yaml`

2. create the helm normally with two changes:
   a) depend on the crds `dependsOn`
   b) skip the CRD creation  `skipCRDRendering`

A sample

const crds: k8s.apiextensions.CustomResource[] = [];
fs.readdirSync("prometheus_crd").forEach((file) => {
  const content: any = yaml.load(
    fs.readFileSync(`prometheus_crd/${file}`).toString()
  );
  if (path.extname(file) == ".yaml") {
    crds.push(
      new k8s.apiextensions.CustomResource(
        content.metadata.name,
        content as k8s.apiextensions.CustomResourceArgs
      )
    );
  }
});
new k8s.helm.v3.Chart(
  "helm-prom",
  {
    repo: "prometheus-community",
    chart: "kube-prometheus-stack",
    version: "35.2.0",
    skipCRDRendering: true,
  },
  {
    dependsOn: crds,
  }
);

Thanks so much for this workaround, it worked like a charm!

@brsolomon-deloitte

Posting this solution for those using ArgoCD. It requires some manual clicking around, so it goes against GitOps, but it will resolve the Sync Failed status.

  1. Sync kube-prometheus-stack app
  2. Wait for it to churn around for a while and display Sync Failed
  3. On the 'Sync Failed' icon, find the resources that are failing. This should be a CRD, namely prometheuses.monitoring.coreos.com. Find this resource in the UI. (It may show Synced; proceed with the next step anyway.)
  4. Check 'Replace' and click Sync
  5. Status of the app should now be 'Sync OK'

@MPV

MPV commented May 23, 2022

Would the community be willing to maintain a separate helm chart in this repo just for the CRDs?

Because while the workaround below works, it would be nice to be able to let something like Renovate bump the chart versions of these dependencies together.

prometheus-operator/prometheus-operator#4439 (comment) 👇

Another alternative could be to have a values setting for deploying only the CRDs (i.e. not including any subcharts or files from the templates dir)?

@amatsumara

For Pulumi, I was able to update the CRDs with enableReplaceCRD: https://www.pulumi.com/registry/packages/kubernetes/api-docs/provider/#enablereplacecrd_nodejs
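
For context, a minimal sketch of what that can look like with an explicit provider (the resource names are placeholders; the enableReplaceCRD flag is taken from the linked provider docs, so check it against your provider version):

import * as k8s from "@pulumi/kubernetes";

// Provider that replaces CRDs on update instead of patching them,
// avoiding the oversized last-applied-configuration annotation.
const provider = new k8s.Provider("k8s-replace-crd", {
  enableReplaceCRD: true,
});

// Use this provider for the chart so its CRDs are replaced on upgrade.
new k8s.helm.v3.Chart(
  "helm-prom",
  {
    repo: "prometheus-community",
    chart: "kube-prometheus-stack",
    version: "35.2.0",
  },
  { provider }
);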

@tschifftner

An improved version of the workflow mentioned by @cyrus-mc:

function cleanupManifest {
    $(which yq) --inplace 'del(.. | .description?)' "$1"
    $(which yq) --inplace '.metadata.creationTimestamp = null' "$1"
}

cleanupManifest crd-alertmanagerconfigs.yaml
cleanupManifest crd-alertmanagers.yaml
...

@alexmeise

For people applying it manually or via Kustomize, instead of:

k apply -f 0prometheusCustomResourceDefinition.yaml

resulting in:

The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

try:

k create -f 0prometheusCustomResourceDefinition.yaml

resulting in:

customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created

@monotek
Member

monotek commented Jun 16, 2022

The solution was already posted here:

#1500 (comment)

export PROMETHEUS_VERSION="v0.58.0"
kubectl apply \
-f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${PROMETHEUS_VERSION}/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml --force-conflicts=true --server-side

It's also described like this in the readme of the chart:
https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md#from-35x-to-36x

Closing & locking, to reduce noise.

@monotek closed this as completed Jun 16, 2022
@prometheus-community locked and limited conversation to collaborators Jun 16, 2022