how to renew the certificate when apiserver cert expired? #581
Comments
Duplicate of #206. |
@zalmanzhao did you manage to solve this issue? I created a kubeadm cluster a while back and its certificates have now expired. The cluster is working, but it lives its own life until it self-destroys or until things get fixed 😅 Unfortunately, I could not find a solution for my situation in #206 and am wondering how to get out of it. The only relevant material I could dig out was a blog post called ‘How to change expired certificates in kubernetes cluster’, which looked promising at first glance. However, it did not fit in the end because my master machine did not have everything that post relies on. @errordeveloper could you please recommend something? I'd love to fix things without resetting the cluster. |
@kachkaev Did you have any luck renewing the certs without resetting kubeadm?
|
Unfortunately, I gave up in the end. The solution was to create a new cluster, restore all the payload on it, switch DNS records and finally delete the original cluster 😭 At least there was no downtime because I was lucky enough to have healthy pods on the old k8s during the transition. |
Thanks @kachkaev for responding. I will nonetheless give it another try. |
If you are using a version of kubeadm prior to 1.8, where I understand certificate rotation #206 was put into place (as a beta feature), or your certs already expired, then you will need to manually update your certs (or recreate your cluster, which it appears some (not just @kachkaev) end up resorting to). You will need to SSH into your master node. If you are using kubeadm >= 1.8, skip to step 2.
There is an important note here: if you are on AWS, you will need to explicitly pass the appropriate flag for your environment. Please be certain to replace the values passed in with your own (a rough sketch of the commands follows).
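For orientation, a rough sketch of what this manual regeneration typically looks like on kubeadm in the 1.8–1.12 range; the <MASTER_IP> placeholder and backup paths are illustrative, and exact sub-commands and flags vary by version, so treat this as an outline rather than a recipe:

# back up the existing PKI material before touching anything
mkdir -p ~/k8s-pki-backup && cp -r /etc/kubernetes/pki ~/k8s-pki-backup

# remove the expired serving/client certs so kubeadm will regenerate them
cd /etc/kubernetes/pki
rm apiserver.crt apiserver.key apiserver-kubelet-client.crt apiserver-kubelet-client.key front-proxy-client.crt front-proxy-client.key

# regenerate certs (check kubeadm --help on your version for the exact flags)
kubeadm alpha phase certs apiserver --apiserver-advertise-address <MASTER_IP>
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client

# move the old kubeconfigs aside and regenerate them against the new certs
mkdir -p ~/k8s-conf-backup && mv /etc/kubernetes/*.conf ~/k8s-conf-backup/
kubeadm alpha phase kubeconfig all --apiserver-advertise-address <MASTER_IP>
cp /etc/kubernetes/admin.conf ~/.kube/config

# restart kubelet (and docker if needed) so the control plane picks up the new certs
systemctl restart kubelet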
If you do not have a valid token, you can create one with:
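A minimal example, assuming it is run as root on the master:

kubeadm token create
kubeadm token list    # confirm the new token and its TTL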
The token should look something like 6dihyb.d09sbgae8ph2atjw
Hopefully this gets you where you need to be @davidcomeyne. |
Thanks a bunch @danroliver ! |
@danroliver Thanks! Just tried it on an old single-node cluster, so did steps up to 7. It worked. |
@danroliver Worked for me. Thank you. |
Did not work for me, had to set up a new cluster. But glad it helped others! |
Thank you @danroliver, it works for me. |
Thanks @danroliver for putting together the steps. I had to make small additions to them. My cluster is running v1.9.3 and it is in a private datacenter, off the Internet. On the master
On the minions
I had to move
|
Thanks @danroliver! On my single-node cluster it was enough to follow steps 1-6 (no reboot) and then send a |
Thanks a lot @danroliver! On our single-master/multi-workers cluster, doing steps 1 to 7 was enough; we did not have to reconnect every worker node to the master (which was the most painful part). |
Thanks for this great step-by-step, @danroliver! I'm wondering how this process might be applied to a multi-master cluster (bare metal, currently running 1.11.1), and preferably without downtime. My certs are not yet expired, but I am trying to learn how to regenerate/renew them before that happens. |
@kcronin |
@danroliver: Thank you very much, it's working. It's not necessary to reboot the servers; systemctl restart kubelet is enough. |
I had to deal with this also on a 1.13 cluster; in my case the certificates were about to expire, so the situation was slightly different. Since the certs had not expired, the cluster already had workloads which I wanted to keep working. So at a high level I had to
Let's create a new token for nodes re-joining the cluster (after the kubelet restart)
Now for each worker - one at a time
ssh to worker node
Back to master and uncordon the worker
After all workers have been updated - remove the token - it will expire in 24h anyway, but let's get rid of it
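Pulling the outline above together, a plausible sketch of the commands involved on a ~1.13 cluster; <worker>, <master-ip>, <token> and <hash> are placeholders, and the drain flags differ slightly between versions:

# on the master: create a fresh bootstrap token and print the matching join command
kubeadm token create --print-join-command

# for each worker, one at a time, from the master:
kubectl drain <worker> --ignore-daemonsets --delete-local-data
kubectl delete node <worker>

# on the worker itself (over ssh): reset and re-join with the new token
kubeadm reset
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# back on the master: bring the worker back into scheduling
kubectl uncordon <worker>

# once every worker has been rotated, remove the token (it would expire in 24h anyway)
kubeadm token delete <token>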
|
@pmcgrath Thanks for posting those steps. I managed to follow them and renew my certificates, and get a working cluster. |
This is what I need only for 1.14.2 .. any hints on how to
I know this issue is closed, but I have the same problem on 1.14.2. The guide gives no errors, yet I cannot connect to the cluster or reissue the token (I get an auth failure). |
A k8s cluster created using kubeadm. I had to refer to 4 different sources to renew the certificates, regenerate the configuration files and bring the simple 3-node cluster back. @danroliver gave very good and structured instructions, very close to the guide below from IBM.
Problem with step 3 and step 5
Step 3 should NOT have the phase in the command:
$ sudo kubeadm alpha certs renew apiserver
$ sudo kubeadm alpha certs renew apiserver-kubelet-client
$ sudo kubeadm alpha certs renew front-proxy-client
Step 5 should be using the below:
# kubeadm init phase kubeconfig all
I0705 12:42:24.056152 32618 version.go:240] remote version is much newer: v1.15.0; falling back to: stable-1.14
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file |
Note about tokens in K8s |
Hi @kcronin, how did you solve this with a multi-master config? I don't know how to proceed with --apiserver-advertise-address as I have 3 IPs and not only one. Thanks |
@pmcgrath In case I have 3 masters, should I repeat the steps on each master, or how should I handle that case? |
@SuleimanWA, you can copy |
@anapsix
Which token are you referring to in this case? Is it the one generated by kubeadm, and how can I delete the token? -----UPDATE----- |
On higher versions, use: kubeadm alpha certs renew all |
When the first master node's kubelet is down (systemctl stop kubelet), the other master nodes can't contact the CA on the first master node. This results in the following message from kubectl get nodes until the kubelet on the original master node is brought back online: Is there a way to have the CA role transfer to other master nodes while the kubelet on the original CA node is down? |
Hi, I have done this task, but not on version 1.13. May I ask a few things if you have done this already? Do I still need to create a token and run join on the worker nodes? If possible, can you share the steps you performed? |
@pmcgrath thanks a bunch for your comment, I used the instructions to update certificates on my Kubernetes 1.13 cluster. |
simplest way to update your k8s certs
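A minimal sketch of what that usually amounts to on kubeadm 1.15–1.18 (single control-plane node assumed; on 1.19+ the commands drop the alpha prefix):

kubeadm alpha certs check-expiration   # see which certs are close to expiring
kubeadm alpha certs renew all          # renew everything signed by the cluster CA
systemctl restart kubelet              # restart so the control plane picks up the new certs
cp /etc/kubernetes/admin.conf ~/.kube/config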
|
you might also want to symlink the cert / key files if the kubelet client cert is enabled (it is by default):
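A minimal check, assuming default kubeadm/kubelet paths; kubelet-client-current.pem is a symlink that kubelet's client certificate rotation keeps pointing at the newest cert, and /etc/kubernetes/kubelet.conf should reference it rather than embed an expiring certificate:

ls -l /var/lib/kubelet/pki/kubelet-client-current.pem   # symlink to the latest rotated client cert
grep client- /etc/kubernetes/kubelet.conf               # should point at kubelet-client-current.pem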
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/ |
For k8s 1.15 ~ 1.18, this may be helpful: https://zhuanlan.zhihu.com/p/382605009 |
Hello, after following this ^ everything is OK (kubectl get nodes shows both nodes Ready), BUT a lot of pods (in kube-system and in all other namespaces) are stuck in the ContainerCreating state.
The thing is that now I think nobody can talk to kube-apiserver:
All the pods that are stuck in ContainerCreating show this in their description:
I have a cluster of 2 nodes:
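One quick way to verify which certificate the API server is actually serving (the master address is a placeholder, default port 6443 assumed):

echo | openssl s_client -connect <master-ip>:6443 2>/dev/null | openssl x509 -noout -dates -subject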
|
that seems like a CNI plugin problem. |
This cluster was made with kubespray; I cannot delete calico and add it again. Also, I don't think this is a problem with the CNI. Why does kube-apiserver log |
For anyone who stumbles upon this in the future and is running a newer version of Kubernetes (>1.17), this is probably the simplest way to renew your certs. The following renews all certs, restarts the kubelet, takes a backup of the old admin config and applies the new admin config:
kubeadm certs renew all
systemctl restart kubelet
cp /root/.kube/config /root/.kube/.old-$(date --iso)-config
cp /etc/kubernetes/admin.conf /root/.kube/config |
@danroliver, |
For this case, you may still need to SSH into each of the 3 master nodes and update the certificates by running the commands there, because each master node has its own API server. |
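A hedged sketch of that per-master pass (assuming kubeadm 1.15+; on 1.19+ drop the alpha prefix):

# run on each control-plane node, one node at a time
kubeadm alpha certs renew all
systemctl restart kubelet   # or restart the static control-plane pods so they reload the certs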
Thank you @titaneric, understood, I need to recreate/renew the certificates on each master node separately. What about Step 4 and Step 5?
Step 4 - moving the old files below: /etc/kubernetes/admin.conf
Step 5 - generating admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf using the command below
Because I can see only the cksum value of admin.conf is the same across all master nodes but |
Hi @kruserr |
Can't say, have not seen that before. |
Is this a request for help?
If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.
If no, delete this section and continue on.
What keywords did you search in kubeadm issues before filing this one?
If you have found any duplicates, you should instead reply there and close this page.
If you have not found any duplicates, delete this section and continue on.
Is this a BUG REPORT or FEATURE REQUEST?
Choose one: BUG REPORT or FEATURE REQUEST
Versions
kubeadm version (use kubeadm version): 1.7.5
Environment:
kubectl version: 1.7.5
uname -a:
What happened?
What you expected to happen?
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?