[BUG] akv2k8s-ca configmap disappears after some hours never to come back #147
Comments
Thanks for the report. I thought I was doing something wrong, since exactly the same behavior happens in our environment.
We are hitting this in one of our QA AKS clusters today during a deploy. Five out of our six clusters are OK, but the last one fails with this, and we are unable to deploy any new resources there. It is running AKS v1.19.1; the others are using a combination of AKS v1.17.13 and v1.18.10. We are using: …
Some more info on this: tainting our …
Any updates? This has been happening in my AKS cluster.
What I have done temporarily, until this gets fixed, is to export the ca configmap.
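In case it helps anyone hitting the same thing, a minimal sketch of that kind of workaround is below. The configmap name akv2k8s-ca and the namespaces akv2k8s and ccs-decisio come from this thread, but the exact steps are my assumption, not the commenter's verbatim fix.

# Export the CA configmap from a namespace where it still exists
# (using the akv2k8s namespace as the source here is an assumption).
kubectl get configmap akv2k8s-ca -n akv2k8s -o yaml > akv2k8s-ca.yaml

# Edit the exported manifest: remove metadata.namespace, resourceVersion, uid and
# creationTimestamp, then re-apply it into the affected namespace when it disappears.
kubectl apply -n ccs-decisio -f akv2k8s-ca.yaml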
Thanks! I'll try your fix, hoping it works for me as well!
The whole CA configmap sync mechanism is removed in 1.2-beta (soon to be released) and solved much more elegantly. This should probably be gone in 1.2, but I'll wait for confirmation before closing.
As mentioned, this is fixed in 1.2 and we have confirmed. Closing.
Recently upgraded to v1.20.05 too. What am I facing?
Can you please help us overcome this issue? Impact: all previously working pods are in the "CreateConfigError" state. Any leads would be much appreciated.
Hello, I am using chart version 1.1.28 with no values file, so I am on the default values. I will continue with the update of the remaining two clusters from 1.18.10 to 1.19.11. What versions should I use in order not to have this issue?
@torresdal just to confirm: is this fixed in app_version 1.2 or chart_version 1.2?
Just upgraded AKS nodes to 1.20.x and all my pods are pending with the configmap "akv2k8s-ca" not found error.
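For anyone wondering what the upgrade path looks like in practice, a rough sketch of moving to the 2.x chart (which ships the 1.2+ app where the CA sync is removed) is below; the repo URL, chart name, and release name are assumptions based on the public akv2k8s docs, not confirmed in this thread.

# Add/refresh the akv2k8s chart repo (URL and repo alias are assumptions; check the project docs).
helm repo add spv-charts https://charts.spvapi.no
helm repo update

# Upgrade (or install) the combined 2.x chart into the akv2k8s namespace.
helm upgrade --install akv2k8s spv-charts/akv2k8s --namespace akv2k8s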
Controller, version: spvest/azure-keyvault-controller:1.1.0
Env-Injector (webhook), version: spvest/azure-keyvault-webhook:1.1.10
Currently I'm on Azure AKS and the nodes are on K8s version v1.19.0.
The akv2k8s-ca configmap gets installed into enabled namespaces the first time (helm install) just fine, but after some random amount of time it disappears. Logging the env-injector shows:
Attempts to restart the akv2k8s containers sometimes make it recreate the CA cert, but most of the time it doesn't. Running
k delete po -n akv2k8s $(k get po --no-headers -n akv2k8s | awk '{print $1}')
to restart everything works, and the CA injector logs look fine:
As you can see, it says
Successfully synced CA Bundle to new namespace 'ccs-decisio'
which tells me it synced it. Alas, it's not coming back. It seems the env-injector gets itself into a state where it deletes the configmap and never recreates it. Restarting the pods fixes it some of the time; a complete delete and re-install of the helm chart will fix it.
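A quick way to see whether a cluster is in that state is to compare the injection-enabled namespaces against the namespaces that still have the synced configmap. A minimal sketch is below; the azure-key-vault-env-injection=enabled label is my assumption for the 1.x injector and is not quoted in this issue, so adjust it to however your namespaces are labelled.

# Namespaces opted in to env injection (label is an assumption for the 1.x injector).
kubectl get ns -l azure-key-vault-env-injection=enabled
# Namespaces that still have the synced CA configmap; any enabled namespace missing
# from this list has hit the bug.
kubectl get configmap --all-namespaces | grep akv2k8s-ca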
Here is me live deleting the injection pod and watching the cm in my namespace:
It's there, and then it's gone.
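For anyone wanting to reproduce that observation, a sketch of the same watch is below; the namespace ccs-decisio comes from the sync log above, while the env-injector pod name is a placeholder you would need to look up in your own install.

# Watch the synced CA configmap in an injection-enabled namespace (namespace from this thread).
kubectl get configmap akv2k8s-ca -n ccs-decisio -w
# In a second shell, delete the env-injector pod (replace the placeholder with your pod name)
# and watch the configmap above get deleted without being re-created.
kubectl delete pod -n akv2k8s <env-injector-pod-name>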