Resources are sometimes manipulated with the wrong API group #6220
Comments
@pjestin-sym Thanks for raising this issue! I'm not sure what may be causing this. Would you be able to share either your operator or an operator that reliably reproduces this issue?
Hello @everettraven, thanks for your reply and happy new year! I was able to reproduce this issue on a fresh Helm operator with its chart. Here is the repo with instructions: https://github.com/pjestin-sym/api-group-issue-helm-op The issue seems to be directly linked to the number of resources in the chart, and possibly also to the number of CRs in the cluster (and hence the number of Helm installations). I hope you can have a look.
Note that the value of
This seems to point towards conflicts between the operator workers.
I was playing around with operator-sdk versions and realized that the problem is also present in version 1.16.0, but to a much lesser extent. Comparing versions 1.16.0 and 1.17.0 with otherwise identical settings, version 1.17.0 produces roughly 20 times as many errors, and switching back to 1.16.0 doubles the reconciliation rate. By locally building the Docker image for helm-operator, I was able to determine that this worsening of the problem was introduced by this PR: #5505 That PR updates many dependencies, so one of those dependency bumps must be responsible for the regression. My intuition points towards this change in controller-runtime, although I was not able to test that hypothesis: kubernetes-sigs/controller-runtime#1695
@pjestin-sym thanks for your analysis! I apologize for the delay in investigating this further; I just haven't had the time to take a deeper look. I am planning to carve out some time over the next couple of days to dive into this and some other open issues. I appreciate your patience!
So I spent some time digging and was able to track the error down to the line that reports it: operator-sdk/internal/helm/release/manager.go Line 262 in a5d933b
Looking at the surrounding context, the problem seems to have something to do with the "helper" that is configured and used for retrieving resources during release reconciliation: operator-sdk/internal/helm/release/manager.go Lines 254 to 255 in a5d933b
This helper comes from https://pkg.go.dev/k8s.io/cli-runtime/pkg/resource and for some reason occasionally seems to muck up the GVK when processing requests. I'm not too familiar with the internals of the Helm controller and am not really sure where this processing of the GVK is going wrong such that it causes this error (I tried digging into this a bit, but didn't see anything that would mess up the GVK). @varshaprasad96 Since you have more knowledge of the Helm controller itself, do you happen to have some ideas as to why this helper may be setting the wrong GVK when trying to get a resource?
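For readers following along, the code around those lines has roughly the following shape (a paraphrased sketch, not the actual operator-sdk source; the function name getExisting is made up). The point is that the request path used by helper.Get comes entirely from the RESTMapping attached to the resource.Info, so a mapping carrying the wrong API group produces a GET against a resource that does not exist.

```go
// Sketch only (paraphrased, not the exact operator-sdk code around manager.go:254-262).
// helper.Get derives its request path from expected.Mapping, so a wrong group in the
// mapping means the GET targets a nonexistent resource.
package sketch

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/cli-runtime/pkg/resource"
)

func getExisting(expected *resource.Info) (runtime.Object, error) {
	// Bind a Helper to the REST client and RESTMapping attached to this Info.
	helper := resource.NewHelper(expected.Client, expected.Mapping)

	existing, err := helper.Get(expected.Namespace, expected.Name)
	if apierrors.IsNotFound(err) {
		// Not found: release reconciliation would create the resource at this point.
		return nil, nil
	}
	return existing, err
}
```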
Hi @varshaprasad96 @everettraven, any news on this topic? We have reverted to version 1.16.0 for now, but this issue prevents us from using newer versions.
@everettraven I have spent all morning on this as well, and also narrowed it down to that same call.
Digging a tiny bit deeper, I found that the kube client is a Helm one, not a client-go one as I was originally thinking; it is defined here: https://pkg.go.dev/helm.sh/helm/v3/pkg/kube#Interface
The Build() function that is used returns https://pkg.go.dev/helm.sh/helm/v3/pkg/kube#ResourceList
resource.NewHelper takes in a RESTMapping that comes from the resource.Info retrieved by the Build() function, and then sets the resource based on that: https://github.com/kubernetes/cli-runtime/blob/bfd3c43351c9870acafbfd30a6ed6f1a52b25bad/pkg/resource/helper.go#L64
It seems like maybe this could be a Helm issue? That said, I'm not super familiar with this low level of the Helm operator interactions, so I think I am going to bring this back up during our community issue triage meeting and see if anyone else has additional insight.
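As a possible way to narrow this down further, here is a minimal debugging sketch (hypothetical; dumpMappings is not an existing function) that prints the GVK of each rendered object next to the GVK and REST resource of the mapping that Build() attached to it. This would show whether the wrong group is already present in the ResourceList before any GET is issued, or only appears later.

```go
// Hypothetical debugging helper: compare the GVK on each rendered object with the
// GVK/resource in the RESTMapping that helm's Build() attached to it. A mismatch
// here would mean the wrong group is picked up before the GET ever happens.
package sketch

import (
	"fmt"

	"k8s.io/cli-runtime/pkg/resource"
)

// dumpMappings accepts helm's ResourceList, which is a []*resource.Info.
func dumpMappings(infos []*resource.Info) {
	for _, info := range infos {
		objGVK := info.Object.GetObjectKind().GroupVersionKind()
		mapGVK := info.Mapping.GroupVersionKind
		fmt.Printf("name=%s object=%s mapping=%s restResource=%s\n",
			info.Name, objGVK, mapGVK, info.Mapping.Resource)
	}
}
```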
So, I was poking around at this by adding log statements to the Helm operator to try to figure out whether the mapping was wrong, and I can't get it to exhibit this behavior on master. Is it possible this got fixed by a dependency bump or something?
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue.
Bug Report
I have a Helm operator that installs releases in multiple namespaces in my K8s cluster. It works mostly fine; however, sometimes, seemingly at random, a release fails, and I can see that the operator logged the error below.
It seems that the operator is trying to get the correct resource, but from the wrong API group. I don't know how this can happen, but the operator appears to sometimes confuse API groups between resources.
In the example below, the Helm chart that is getting installed has only 2 resources:
- a Deployment in API group apps
- a ConfigMap in API group "" (the core group)

Sometimes, at random, the operator will try to manipulate either a Deployment in API group "" or a ConfigMap in API group apps. This fails the release, as Helm tries to manipulate resources that do not exist (see the illustration below). When the release is retried, it might fail again (a different resource might be the problem) or it might succeed. Eventually, all resources are properly reconciled, but the impact is that reconciliation takes significantly more time.
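To make the failure concrete, here is a small illustration (assumed namespace and release names, not operator code) of how the API group in the mapping determines the request path, and why a Deployment addressed through the core group cannot be found.

```go
// Illustration only: the API group decides whether a request goes to /apis/<group>/...
// or to the core /api/... tree, so a swapped group points at a resource that does not exist.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// path builds the namespaced REST path for a GroupVersionResource.
func path(gvr schema.GroupVersionResource, namespace, name string) string {
	if gvr.Group == "" { // core group
		return fmt.Sprintf("/api/%s/namespaces/%s/%s/%s", gvr.Version, namespace, gvr.Resource, name)
	}
	return fmt.Sprintf("/apis/%s/%s/namespaces/%s/%s/%s", gvr.Group, gvr.Version, namespace, gvr.Resource, name)
}

func main() {
	correct := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	wrong := schema.GroupVersionResource{Group: "", Version: "v1", Resource: "deployments"}

	fmt.Println("expected:", path(correct, "demo", "my-release")) // /apis/apps/v1/...
	fmt.Println("observed:", path(wrong, "demo", "my-release"))   // /api/v1/... -> no such resource
}
```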
What did you do?
What did you expect to see?
The Helm releases are reconciled successfully with no errors.
What did you see instead? Under which circumstances?
The following errors appear:
Environment
Operator type:
/language helm
Kubernetes cluster type:
Google Kubernetes Engine
$ operator-sdk version
"v1.26.0", commit: "cbeec475e4612e19f1047ff7014342afe93f60d2", kubernetes version: "1.25.0", go version: "go1.19.3", GOOS: "linux", GOARCH: "amd64"
Docker image: quay.io/operator-framework/helm-operator:v1.26.0
(Note that this also happens with operator-sdk 1.19.1.)
$ kubectl version
Possible Solution