NOTE: In addition to Terraform debugging, please set HELM_DEBUG=1 to enable debugging info from helm.
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.kubecost.helm_release.release to include
│ new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/helm" produced an invalid new value for
│ .manifest: was
...
OMITTED, too long to paste here
...
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
Panic Output
Steps to Reproduce
I have seen this with the Kubecost OCI chart, but it might be reproducible with others:
1. terraform apply
2. If the resource already exists, it shows a perpetual diff on every plan, and then crashes on apply with the error above.
References
This got released in 2.13 and is probably related:
EDIT: Never mind, this label is part of my specific chart's template; it simply uses .Release.Revision, and helm renders the templates with the revision incremented when computing diffs.
I can confirm this is the case on 2.14.0 with Terraform 1.9.2, though I'm seeing a diff on the helm-revision label. It definitely seems to be affecting OCI charts; specifically, my resource definition was:
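A minimal sketch of what such an OCI-backed helm_release typically looks like (the registry URL, chart name, and version are placeholders, not the commenter's actual values):

```hcl
resource "helm_release" "release" {
  name      = "kubecost"
  namespace = "kubecost"

  # OCI charts are used by pointing `repository` at an oci:// registry.
  # Registry URL, chart name, and version here are placeholders.
  repository = "oci://registry.example.com/charts"
  chart      = "cost-analyzer"
  version    = "2.3.0"
}
```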
Looking through the Terraform state, it seems that for OCI charts the stored manifest includes the helm-revision label on each resource, whereas non-OCI charts do not seem to store it, which I assume leads to the state desync.
Actually, I just tested this with the helm CLI, and it seems the issue stems from there: in a dry-run upgrade of the OCI chart, all resources have the helm-revision label (incremented by one from the live value), but the non-OCI chart does not show the helm-revision label at all. So this may actually be a core helm issue.
I've noticed this too in other TrueCharts helm charts. They all seem to use this pattern of embedding .Release.Revision as a pod label, making none of them compatible with manifest. I've also come across this kind of pattern in other charts as well; it seems to be a not terribly uncommon practice, or perhaps one growing in popularity.
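For context, the "manifest" being referred to is the provider-level experiment that stores the rendered chart manifest in state so plans can show resource-level diffs; a minimal provider block enabling it, with a placeholder kubeconfig, would look roughly like this:

```hcl
provider "helm" {
  kubernetes {
    # Placeholder kubeconfig; any working cluster connection will do.
    config_path = "~/.kube/config"
  }

  # The rendered-manifest diffs discussed in this issue only appear when the
  # manifest experiment is enabled: the provider dry-run-renders the chart at
  # plan time and stores the resulting manifest in state so that plans can
  # show resource-level differences.
  experiments {
    manifest = true
  }
}
```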
The kubernetes_manifest resource can have similar problems when it comes to things like labels being injected onto resources post-creation, and it has a workaround with computed fields.
Perhaps the same tack could be taken here, where a yq-style path can be used to denote fields that will always be different, so they can be ignored for purposes of computing differences.
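For reference, a minimal sketch of the existing kubernetes_manifest workaround mentioned above, using a stand-in ConfigMap; computed_fields is the real attribute, while any equivalent on helm_release would be a new addition:

```hcl
resource "kubernetes_manifest" "example" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    data = {
      foo = "bar"
    }
  }

  # Field paths listed here are treated as computed, so values injected or
  # mutated outside Terraform (labels added by controllers, for example) do
  # not produce a perpetual diff.
  computed_fields = [
    "metadata.labels",
    "metadata.annotations",
  ]
}
```

A hypothetical helm_release counterpart accepting yq-style manifest paths could similarly let users exclude fields like the helm-revision label from the stored-manifest diff.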