Helm upgrade is not detecting AWX kind updates #1239
I guess that since there are no Helm templates for the awx ConfigMap and the awx Deployment (they are created by the operator at reconciliation time, not by the chart), Helm cannot detect changes to them. The AWX resource gets updated, but no change takes effect until the awx deployment is deleted.
I can't speak to Helm, but as long as the AWX spec got changed and the syntax for the added extra_settings is correct, the change should also land in the ConfigMap. However, it will take ~1-2 minutes to be reflected in the awx ConfigMap, because once you change the AWX spec, the reconciliation loop gets triggered and runs. Correction: when testing this out locally, I find that the ConfigMap is updated properly, but the /etc/tower/settings.py file is not updated without cycling the deployment pods. To test this, I added the following change in my AWX spec:
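The spec snippet itself was not captured in this transcript; below is a hypothetical sketch of an equivalent change, assuming the AWX resource is named `awx` in the `dev-ops-playground` namespace used elsewhere in this issue:

```shell
# Hypothetical sketch (not the commenter's original snippet): add an
# extra_settings entry to the AWX spec with a merge patch. Resource and
# namespace names are taken from elsewhere in this issue.
kubectl patch awx awx -n dev-ops-playground --type=merge \
  -p '{"spec":{"extra_settings":[{"setting":"AUTH_LDAP_BIND_DN","value":"\"testldap\""}]}}'
```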
Useful command for checking if the change was reflected in the ConfigMap:
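The command itself did not survive the transcript; based on the reproduction steps later in this issue, it was presumably along these lines:

```shell
# Assumes the default ConfigMap name "awx-awx-configmap" shown later in
# this issue; grep for the setting that was changed.
kubectl get configmap awx-awx-configmap -n dev-ops-playground -o yaml | grep "AUTH_LDAP_BIND_DN"
```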
Useful command for checking if the change was reflected in the container:
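This command was also lost in transcription; a sketch, assuming a web container named `awx-web` in an `awx` deployment (adjust names to your install):

```shell
# Checks whether the running pod's settings file picked up the change.
# Deployment and container names are assumptions, not from the thread.
kubectl exec -n dev-ops-playground deploy/awx -c awx-web -- \
  grep "AUTH_LDAP_BIND_DN" /etc/tower/settings.py
```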
It seems that the task is not getting marked as changed. However, when a new key/value pair is added to a ConfigMap, the task is marked as changed.
I wasn't able to reproduce with a playbook outside the reconciliation loop. Here is what I tried: I ran it both inside the container and on my dev machine, which rules out different kubernetes.core versions. Maybe there is some nuance to how the installer role is run by the operator/reconciliation loop that I am missing...
If I create a playbook and run the installer role directly, I see that the task is marked as changed. However, I don't see this when it is run as part of the reconciliation loop... cc @TheRealHaoLiu any ideas here?
Thanks for your feedback! Focusing on Helm: Helm upgrades based on the templates built into the chart. For instance, running helm upgrade with the --debug option, we can see what Helm is trying to upgrade:

root@dmorillas-dev:~/dmorillas/awx-iac-gitops# helm upgrade awx -n dev-ops-playground helm-chart-awx-operator/ -f helm-chart-awx-operator/values.yaml --debug

It does patch the changes into the AWX resource, but not into the ConfigMap (which is what the LDAP change would need). Besides that, Helm is not able to upgrade some of the values.yaml configuration; for instance, I tested the image and image_version parameters. I'd expect some action to be taken once the AWX resource is updated (redeploying the pod, updating the specific ConfigMap without a redeploy, etc.).
Does anyone have any ideas, or is the same thing happening to you?
@danielmorillas this should be fixed by a commit which was merged yesterday. It will be in the next release, which I believe is Monday. Please open another issue linking to this one if it is not fixed by that patch.
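To verify the fix once the release lands, one could repeat the upgrade and re-check the ConfigMap; a sketch reusing the names from this issue:

```shell
# Sketch: pull the new chart version, upgrade, and confirm the new LDAP
# value reached the operator-managed ConfigMap. Paths and names are taken
# from this issue and may differ in your setup.
helm repo update
helm upgrade awx awx-operator/awx-operator -n dev-ops-playground \
  -f ../helm-chart-awx-operator/values.yaml --debug
kubectl get configmap awx-awx-configmap -n dev-ops-playground -o yaml | grep "testldap2"
```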
Please confirm the following
Bug Summary
I have a custom deployment with Helm and FluxCD in which no updates from values.yaml were being detected.
So I manually tested the official Helm chart for versions 1.1.3 and 1.1.4 and hit the same issue.
I install the chart with a specific values.yaml and everything is fine; but once I upgrade the chart with a new change (e.g. an LDAP Manager DN update), the change is not fully applied.
The AWX resource is modified, but the ConfigMap keeps the value from the first installation.
AWX Operator version
1.1.4
AWX version
latest
Kubernetes platform
kubernetes
Kubernetes/Platform version
1.23
kind v0.11.0 go1.16.4 linux/amd64
Modifications
no
Steps to reproduce
# After helm upgrade ...
kubectl get configmap -n dev-ops-playground -o yaml awx-awx-configmap | grep "testldap"
AUTH_LDAP_BIND_DN = "testldap"
...
kubectl get awx -n dev-ops-playground -o yaml awx | grep "testldap2"
value: '"testldap2"'
Expected results
The ConfigMap should be updated, but after executing the helm upgrade command, no update to the ConfigMap is applied.
Actual results
helm upgrade awx awx-operator/awx-operator -n dev-ops-playground -f ../helm-chart-awx-operator/values.yaml --debug
upgrade.go:142: [debug] preparing upgrade for awx
upgrade.go:150: [debug] performing update for awx
upgrade.go:322: [debug] creating upgraded release for awx
client.go:218: [debug] checking 12 resources for changes
client.go:501: [debug] Looks like there are no changes for ServiceAccount "awx-operator-controller-manager"
client.go:501: [debug] Looks like there are no changes for ConfigMap "awx-operator-awx-manager-config"
...
--
Source: awx-operator/templates/awx-deploy.yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
name: awx
...
spec:
...
value: '"testldap2"'
...
Release "awx" has been upgraded. Happy Helming!
Additional information
The only way I found to apply the new value is to delete the awx deployment; once it is recreated, the new value is applied.
The same error occurs on an OpenShift cluster!
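Instead of deleting the deployment outright, a rolling restart achieves the same pod cycling; a sketch assuming the deployment is named awx, as in this thread:

```shell
# Sketch of the workaround: recreate the pods without deleting the
# Deployment object, so settings.py is re-rendered on startup.
kubectl rollout restart deployment awx -n dev-ops-playground
kubectl rollout status deployment awx -n dev-ops-playground
```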
Operator Logs
No response