[kube-prometheus-stack] Can't upgrade CRD from 0.50.0 to 0.52.0: metadata.annotations: Too long: must have at most 262144 bytes #1500
Comments
We have the same issue |
Probably it should be changed in the README to
|
Yes, just tried; it works by replacing the resources. |
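Presumably the README change being suggested is to use kubectl replace instead of kubectl apply for the oversized CRDs. A minimal sketch, assuming the same CRD URL used later in this thread:
kubectl replace -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
Unlike apply, replace does not write the last-applied-configuration annotation, so it stays under the size limit.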
Replacing the CRD is a workaround, but if you try to apply the same CRD again, you get the same error:
metadata.annotations: Too long: must have at most 262144 bytes
This is because the metadata is too long. |
Same issue during installation via Pulumi Helm or via kubectl apply -f <output from helm template>. |
Got the same issue while deploying kube-prometheus-stack via Helm through ArgoCD.
It works fine when applying via Helm from the CLI. |
The reason is the "last applied configuration" annotation (kubectl.kubernetes.io/last-applied-configuration). If you replace or install, there is no previous configuration, so it works; but if you re-apply, the mentioned annotation gets too big. |
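A quick way to confirm this is to measure the annotation on the live object; a sketch, assuming jq is installed and the default CRD name:
kubectl get crd prometheuses.monitoring.coreos.com -o json \
  | jq -r '.metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"] | length'
# approximate size in bytes; all annotations together must stay under 262144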
However, the mentioned workarounds are not applicable to Pulumi deployments. |
Still marking the ArgoCD App with |
For ArgoCD you can add the annotation |
I can confirm that it works this way. |
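The annotation in question is presumably Argo CD's documented sync option argocd.argoproj.io/sync-options: Replace=true, which makes Argo CD replace the resource instead of applying it. A sketch of patching it onto a rendered CRD manifest with yq v4 (the file name is an assumption):
yq --inplace '.metadata.annotations."argocd.argoproj.io/sync-options" = "Replace=true"' crd-prometheuses.yaml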
@irizzant How did you achieve this? I can set it manually once before sync in the live manifest, and then it works, but the next time it fails again. Is it not possible to add this annotation "permanently"? |
I use Helm + Kustomize (https://github.com/argoproj/argocd-example-apps/blob/master/plugins/kustomized-helm/README.md) to render the manifest and then apply the annotation with Kustomize using |
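A sketch of that Helm + Kustomize flow, assuming the rendered chart output is saved as all.yaml (file names are assumptions; commonAnnotations stamps the sync option onto every rendered resource):
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml # output of helm template
commonAnnotations:
  argocd.argoproj.io/sync-options: Replace=true
EOF
kubectl kustomize .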
@ebcFlagman I've added this annotation to the CRD template in the Helm chart, and the problem doesn't exist anymore.
|
In case someone did the same thing I did (delete the CRD): the only way I managed to re-create it was by issuing the command below:
kubectl apply \
  -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml \
  --force-conflicts=true \
  --server-side |
The issue with using
Edit |
If you delete a CRD, you'll delete all such resources, i.e. delete all Prometheus deployments. What about:
url=https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl create -f $url || kubectl replace -f $url |
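Generalizing that to all of the chart's CRDs might look like the loop below; the file list is an assumption based on the prometheus-operator v0.52.0 example directory:
base=https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd
for name in alertmanagerconfigs alertmanagers podmonitors probes prometheuses prometheusrules servicemonitors thanosrulers; do
  url=$base/monitoring.coreos.com_${name}.yaml
  kubectl create -f "$url" || kubectl replace -f "$url"
done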
See my proposed fix in #1510. |
I think replace is not a good option. The stack is used in a lot of places, installed in many different ways (kubectl, Helm, GitOps tools, and so on). Adding an exceptional step to the installation logic will cause a lot of friction in different environments. So IMO this needs to be fixed in a way so that |
@utkuozdemir The only alternative I see is shrinking the CRD itself. This needs to be done upstream. And it looks like upstream is happy with
What about merging #1510 short-term, and handling the shrink long-term (with tests ensuring it doesn't fail again)? |
@sathieu I agree, it makes sense to handle it on the chart level until upstream fixes it. |
url=https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl create -f $url || kubectl replace -f $url
The
Which is simply equivalent to:
On my Kubernetes 1.19.3 cluster, I successfully updated all the CRDs by using the commands suggested by the kube-prometheus-stack README (from-19x-to-20x), by simply replacing them. All the existing Custom Resources remain unchanged when their underlying CRD is updated. |
Wow! Thanks for "workaround" @gw0 :) |
Let's get someone assigned to this one. I'm installing using ArgoCD with the latest & greatest Bitnami kube-prometheus and running into this. Looks like @gw0's workaround will work well, and I'll set this up, but it is a workaround because something is broken. Using the latest and greatest ArgoCD & Helm chart, though on a 1.18 cluster (the cluster version doesn't seem to be the core issue here). |
2.3.0 of Argo CD is released; I hope that the workaround of @gw0 will still work. I'm waiting for the Helm release of Argo CD to test it out. |
Not sure if this helps; I came across this while investigating a different issue, but using the
This has the problem as expected:
Adding the
Run the
I don't understand the ramifications of using this method yet, but it gets around the issue in my testing: |
I ran into this, and while there are solutions specific to the tool you are using (e.g. ArgoCD), I wanted a more generic solution. In the end I just removed all the
Simple conversion using
This will change |
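The conversion referred to above is presumably the yq expression that reappears later in this thread; a sketch, with an assumed file name:
# strip every description field, which accounts for most of the CRD's size
yq --inplace 'del(.. | .description?)' crd-prometheuses.yaml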
Hello, |
Did you try the separate ArgoCD application? You can check the details here -> https://blog.ediri.io/kube-prometheus-stack-and-argocd-23-how-to-remove-a-workaround |
I have tried to make it with |
For Pulumi we did the following. A sample (imports added here for completeness; the YAML parser is assumed to be js-yaml):
import * as fs from "fs";
import * as path from "path";
import * as yaml from "js-yaml"; // assumption: js-yaml is the parser in use
import * as k8s from "@pulumi/kubernetes";

// Register every CRD manifest in prometheus_crd/ as its own Pulumi resource.
const crds: k8s.apiextensions.CustomResource[] = [];
fs.readdirSync("prometheus_crd").forEach((file) => {
  if (path.extname(file) === ".yaml") {
    const content: any = yaml.load(
      fs.readFileSync(`prometheus_crd/${file}`).toString()
    );
    crds.push(
      new k8s.apiextensions.CustomResource(
        content.metadata.name,
        content as k8s.apiextensions.CustomResourceArgs
      )
    );
  }
});

// Deploy the chart without its bundled CRDs, once the CRDs above exist.
new k8s.helm.v3.Chart(
  "helm-prom",
  {
    repo: "prometheus-community",
    chart: "kube-prometheus-stack",
    version: "35.2.0",
    skipCRDRendering: true,
  },
  {
    dependsOn: crds,
  }
); |
Would the community be willing to maintain a separate Helm chart in this repo just for the CRDs? While the workaround below works, it would be nice to be able to let something like Renovate bump the chart version of these dependencies together. prometheus-operator/prometheus-operator#4439 (comment) 👇
|
thanks so much for this workaround, worked like a charm! |
Posting this solution for those using ArgoCD. It requires some manual clicking around, so it goes against GitOps, but it will solve the SyncFailed status.
|
Another alternative could be a values setting for deploying only the CRDs (i.e. not including any subcharts or files from the templates dir)? |
For Pulumi I was able to update CRDs with |
Improved workflow mentioned by @cyrus-mc:
function cleanupManifest {
  yq --inplace 'del(.. | .description?)' "$1"
  yq --inplace '.metadata.creationTimestamp = null' "$1"
}

cleanupManifest crd-alertmanagerconfigs.yaml
cleanupManifest crd-alertmanagers.yaml
... |
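The trailing per-file calls can be collapsed into a loop, assuming all manifests follow the crd-*.yaml naming above:
for f in crd-*.yaml; do
  cleanupManifest "$f"
done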
For people applying it manually or via Kustomize: instead of:
resulting in:
try:
resulting in:
|
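The commands themselves were lost from this capture; judging from the rest of the thread, the contrast was presumably client-side versus server-side apply, roughly (the crds/ path is an assumption):
kubectl apply -f crds/               # fails: metadata.annotations: Too long
kubectl apply --server-side -f crds/ # works: no last-applied annotation is written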
The solution was already posted here:
It's also described like this in the README of the chart. Closing & locking to reduce noise. |
Describe the bug
When applying the new 0.52.0 CRD, I get this result:
metadata.annotations: Too long: must have at most 262144 bytes
What's your helm version?
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
What's your kubectl version?
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:21:44Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:16:25Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Which chart?
kube-prometheus-stack
What's the chart version?
20.0.1
What happened?
When applying the new 0.52.0 CRD, I get this result:
metadata.annotations: Too long: must have at most 262144 bytes
So I can't upgrade the CRD prometheuses.monitoring.coreos.com.
What you expected to happen?
I expected to upgrade the CRD without errors.
How to reproduce it?
Starting with a 19.2.3 kube-prometheus-stack instance and the 0.50.0 CRDs installed, upgrade the chart to 20.0.1, then apply the new 0.52.0 CRD. (Yes, it was a mistake on my part to upgrade the chart before upgrading the CRDs, but I don't think that's the problem here, because I read the commits and the chart seems to be backward-compatible.)
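A condensed reconstruction of those steps (the release name and CRD URL are assumptions; the versions are from this report):
helm upgrade prometheus prometheus-community/kube-prometheus-stack --version 20.0.1
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.52.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
# fails with: metadata.annotations: Too long: must have at most 262144 bytes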
Enter the changed values of values.yaml?
NONE
Enter the command that you execute and failing/misfunctioning.
Anything else we need to know?
No response