During upgrade, Kubespray does not update /etc/kubernetes/manifests #4736
Comments
I can upgrade to 1.14.1 from 1.12.5 without issue, as all manifest files are properly generated. I do not see you using the upgrade playbook; not sure if that makes a difference.
Yes, I had the chance to test this today. The upgrade-cluster.yml playbook worked as expected.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Hi, I just ran into a very similar issue while upgrading from 1.14.3 to 1.14.6. Kubelet was upgraded fine, but the kube-apiserver, kube-controller-manager and kube-scheduler manifests were not touched. After manually editing these files, everything seems to run fine. Should Kubespray itself touch/change these files, or is that something that is handled by kubeadm?
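For context: in a kubeadm-based deployment, the static Pod manifests under /etc/kubernetes/manifests are owned by kubeadm, and Kubespray's upgrade path drives kubeadm to regenerate them. A rough sketch of the equivalent manual steps on a master node (the target version below is only illustrative):

# Show which control-plane upgrades kubeadm considers available
sudo kubeadm upgrade plan

# Regenerate the static Pod manifests in /etc/kubernetes/manifests for the target version
sudo kubeadm upgrade apply v1.14.6

This is not a substitute for running the playbook; it only illustrates which tool actually rewrites the manifest files.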
@juliohm1978, did you figure it out? I ran into the same issue.
We never found a good solution to this. We got it fixed by running upgrade-cluster.yml instead. I suppose cluster.yml does not touch the kubeadm-managed manifests.
I agree with @juliohm1978: upgrade-cluster.yml works as expected and as documented in the docs, while cluster.yml seems to just update the kubelet version and not touch the resources managed by kubeadm (like the manifests). I think this is an open issue and should be fixed either in the cluster.yml playbook or in the docs.
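For anyone landing here from a search, a minimal sketch of the invocation discussed above, assuming the same inventory file used elsewhere in this issue and an example kube_version value:

# Node-by-node upgrade that also regenerates the kubeadm-managed manifests
ansible-playbook upgrade-cluster.yml -b -i inventory.ini -e kube_version=v1.14.1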
I've just tried an upgrade from 1.12.5 (deployed via Kubespray 2.8) to 1.13.6 using Kubespray 2.10. For me this leaves old manifest files in place for kube-proxy and […], regardless of whether I use cluster.yml or upgrade-cluster.yml.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
I am seeing the exact same thing. I am trying to upgrade from v1.15.4 to v1.15.6, and the manifests in /etc/kubernetes/manifests are not updated.
Edit: using the latest Kubespray release, v2.12.0.
Edit 2: never mind. Using upgrade-cluster.yml does the job. I was confused because the playbook updates ALL the kubelets before it starts with the masters.
Same for us: upgrade-cluster.yml works, cluster.yml does not. For us, upgrade-cluster.yml sometimes times out, though, so it leaves us with the problem of proper upgrading at all :( cluster.yml always runs through.
Update: found why it is blocking. We have template single-node clusters for testing, giving the same host two roles (master + node), which results in a cordoned node that can't start the upgraded coredns. It's a setup issue. The upgrade issue itself with cluster.yml persists.
Another update: the upgrade_cluster_setup variable is only set in upgrade-cluster.yml. Please refer to #4139; specifically, add -e upgrade_cluster_setup=true to cluster.yml runs.
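Putting that workaround together, a sketch of the unattended upgrade via cluster.yml (the version value is only an example):

# Without upgrade_cluster_setup=true, cluster.yml leaves the kubeadm-managed manifests untouched
ansible-playbook cluster.yml -b -i inventory.ini -e kube_version=v1.15.6 -e upgrade_cluster_setup=true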
…pgrade using cluster.yml playbook. Relates to kubernetes-sigs#4736 and kubernetes-sigs#4139
Opened #5609 to add info to the upgrade docs as that was actually the first place I went looking and found no hint. |
…pgrade using cluster.yml playbook. Relates to kubernetes-sigs#4736 and kubernetes-sigs#4139 (kubernetes-sigs#5609)
Environment:
Cloud provider or hardware configuration: Bare metal / VMs
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Linux 4.15.0-48-generic x86_64
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Version of Ansible (ansible --version): 2.7.9
Kubespray version (commit) (git rev-parse --short HEAD): https://github.com/kubernetes-sigs/kubespray/tree/v2.10.0
Network plugin used: Calico
Copy of your inventory file:
inventory.ini.txt
Command used to invoke ansible:
ansible-playbook cluster.yml -b -i inventory.ini
Output of ansible run:
ansible-output.txt
Anything else we need to know:
While there are no errors during the ansible playbook execution, Kubespray did not update the static Pod manifests on the master nodes.
I only noticed when upgrading from Kubernetes v1.12.x to v1.14.x and my Pods were failing to start because the kubelet was running the new v1.14.x and the apiserver was still at v1.12.x.
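A quick way to confirm the mismatch described above, assuming the default kubeadm manifest paths, is to compare the image tags in the static Pod manifests on a master node with the kubelet version:

# Image tags still carrying the old version mean the manifests were not regenerated
sudo grep 'image:' /etc/kubernetes/manifests/kube-apiserver.yaml
sudo grep 'image:' /etc/kubernetes/manifests/kube-controller-manager.yaml
sudo grep 'image:' /etc/kubernetes/manifests/kube-scheduler.yaml

# Compare against the upgraded kubelet
kubelet --version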
I tried using Kubespray v2.9.0, but got the same results. I can temporarily move the /etc/kubernetes/manifests directory before running the playbook, but Kubespray won't recreate the manifests and the installation fails.
Is there anything I can run after installation to update these Pods? I resorted to manually editing the manifests to bump up the image version in these components, but that's not a long-term solution.
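For completeness, a sketch of the manual stop-gap described above (version tags are examples only; the kubelet watches the manifests directory and restarts the static Pods when the files change):

# On each master node, bump the image tag in the kubeadm-managed manifests
sudo sed -i 's/v1.12.5/v1.14.1/g' /etc/kubernetes/manifests/kube-apiserver.yaml
sudo sed -i 's/v1.12.5/v1.14.1/g' /etc/kubernetes/manifests/kube-controller-manager.yaml
sudo sed -i 's/v1.12.5/v1.14.1/g' /etc/kubernetes/manifests/kube-scheduler.yaml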