
During upgrade, Kubespray does not update /etc/kubernetes/manifests #4736

Closed
juliohm1978 opened this issue May 9, 2019 · 13 comments · Fixed by #5609
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@juliohm1978
Contributor

juliohm1978 commented May 9, 2019

Environment:

  • Cloud provider or hardware configuration: Barebone/VMs

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):

Linux 4.15.0-48-generic x86_64
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

  • Version of Ansible (ansible --version): 2.7.9

Kubespray version (commit) (git rev-parse --short HEAD):

https://github.com/kubernetes-sigs/kubespray/tree/v2.10.0

Network plugin used: Calico

Copy of your inventory file:

inventory.ini.txt

Command used to invoke ansible:

ansible-playbook cluster.yml -b -i inventory.ini

Output of ansible run:

ansible-output.txt

Anything else we need to know:
While there are no errors during the ansible playbook execution, Kubespray did not update the static Pod manifests in the master nodes:

infrasw@k8s-master01-lab-20190425:~$ ls -l /etc/kubernetes/manifests
total 12
-rw------- 1 root root 3494 May  9 02:11 kube-apiserver.yaml
-rw------- 1 root root 3181 May  9 02:12 kube-controller-manager.yaml
-rw------- 1 root root 1065 May  9 02:12 kube-scheduler.yaml

I only noticed when upgrading from Kubernetes v1.12.x to v1.14.x and my Pods were failing to start because the kubelet was running the new v1.14.x and the apiserver was still at v1.12.x.

I tried using Kubespray v2.9.0, but got the same results. I can temporarily move the /etc/kubernetes/manifests directory before running the playbook, but Kubespray won't recreate the manifests and the installation fails.

Is there anything I can run after installation to update these pods? I resorted to manually editing the manifests to bump up the image version in these components, but that's not a long term solution.
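For reference, the manual workaround was just bumping the image tag in each static Pod manifest under /etc/kubernetes/manifests; the kubelet watches that directory and recreates the Pod when a file changes. A minimal sketch for kube-apiserver.yaml, where the image repository and tag are illustrative placeholders:

# excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (repository/tag are placeholders)
spec:
  containers:
  - name: kube-apiserver
    image: gcr.io/google-containers/kube-apiserver:v1.14.1   # was a v1.12.x tag before the edit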

@juliohm1978 juliohm1978 added the kind/bug label May 9, 2019
@chenww

chenww commented May 10, 2019

I can upgrade from 1.12.5 to 1.14.1 without issue; all the manifest files are properly generated. I don't see you using the upgrade playbook, though. Not sure if that makes a difference.

@juliohm1978
Contributor Author

Yes, I had the chance to test this today. The upgrade-cluster.yml playbook works. Shouldn't cluster.yml work in the same way?
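For reference, the working invocation mirrors the cluster.yml command above, just with the upgrade playbook; a minimal sketch, where the kube_version value is illustrative and can be omitted if it is already set in your group_vars:

ansible-playbook upgrade-cluster.yml -b -i inventory.ini -e kube_version=v1.14.1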

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Aug 8, 2019
@jklare

jklare commented Sep 4, 2019

Hi, I just ran into a very similar issue while upgrading from 1.14.3 to 1.14.6. The kubelet was upgraded fine, but the kube-apiserver, -controller-manager and -scheduler manifests were not touched. After manually editing these files, everything seems to run fine. Should Kubespray itself touch/change these files, or is that something that is handled by kubeadm?
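(For context: with a kubeadm-based deployment, the control-plane static Pod manifests are normally regenerated by kubeadm during an upgrade, and as far as I understand Kubespray's upgrade path drives that step for you. A minimal sketch of the equivalent manual step on a master node, with the target version as a placeholder:

kubeadm upgrade plan
kubeadm upgrade apply v1.14.6
)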

@zhangsimingshannonai

Did you figure it out, @juliohm1978? I ran into the same issue.
I only want to update a parameter of the kube-apiserver on my running cluster, so I only need to modify something in /etc/kubernetes/manifests/kube-apiserver.yaml, but executing cluster.yml didn't work.

@juliohm1978
Contributor Author

juliohm1978 commented Sep 6, 2019

@zhangsimingshannonai

We never found a good solution to this. We got it fixed by running upgrade-cluster.yml, which I guess is the supported way of upgrading the cluster to a newer version.

I suppose cluster.yml is meant to be used to create a new cluster, but I'd expect all manifests to be updated just the same. That just doesn't happen.

@jklare

jklare commented Sep 9, 2019

I agree with @juliohm1978: upgrade-cluster.yml works as expected and as documented in the docs, while cluster.yml seems to only update the kubelet version and not touch the resources managed by kubeadm (like the manifests). I think this is an open issue and should be fixed either in the cluster.yml playbook or in the docs.

@RichardStevenson

RichardStevenson commented Oct 7, 2019

I've just tried an upgrade from 1.12.5 (deployed via ks 2.8) to 1.13.6 using ks 2.10. For me this leaves old manifest files in place for kube-proxy, regardless of whether I use cluster.yml or upgrade-cluster.yml.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 6, 2019
@jklare

jklare commented Nov 7, 2019

/remove-lifecycle rotten

@mhulscher

mhulscher commented Dec 16, 2019

I am seeing the exact same thing. I am trying to upgrade from v1.15.4 to v1.15.6 and the manifests in /etc/kubernetes/manifests are not updated. It's not just the versions. Changes to other options like admission controllers are also ignored.

Edit: using the latest kubespray release, v2.12.0

Edit2: never mind. Using upgrade-cluster.yml does the job. I was confused because the playbook updates ALL the kubelets before it starts with the masters.

@Flowkap
Contributor

Flowkap commented Feb 6, 2020

Same for us: upgrade-cluster.yml works, cluster.yml does not.
Still, https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md states that cluster.yml should also work for upgrading (just unsafe), but it also leaves us with old manifests.

For us, upgrade-cluster.yml sometimes times out, though, which leaves us with the problem of how to upgrade properly at all :(

cluster.yml always runs through.


Update: found out why it's blocking. We have templated single-node clusters for testing that give the same host two roles (master + node), resulting in a cordoned node which can't start the upgraded coredns. It's a setup issue. The upgrade issue itself with cluster.yml persists.


Another update:

The upgrade_cluster_setup variable is only set in upgrade-cluster.yml. Please refer to #4139.

Specifically, add -e upgrade_cluster_setup=true to cluster.yml runs.
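In other words, a sketch of the cluster.yml run with that flag, reusing the inventory from the original report:

ansible-playbook cluster.yml -b -i inventory.ini -e upgrade_cluster_setup=true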

@Flowkap
Contributor

Flowkap commented Feb 7, 2020

Opened #5609 to add this info to the upgrade docs, as that was actually the first place I went looking and found no hint.

k8s-ci-robot pushed a commit that referenced this issue Feb 17, 2020
…pgrade using cluster.yml playbook. Relates to #4736 and #4139 (#5609)
LuckySB pushed a commit to southbridgeio/kubespray that referenced this issue Mar 7, 2020