Agent status becomes NotReady after reboot #4113
Comments
Is your system for some reason deleting the contents of `/etc/cni/net.d`?
From my personal experience, there is no file in `/etc/cni/net.d`.
Are you initially starting it with a different working directory or configuration than is used when you restart it? This isn't an issue I've seen anyone run into before, so I'm wondering what about your environment or configuration is unique.
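As a hedged sketch of how one might check that (assuming a systemd-managed install; unit and config paths vary between setups):

```sh
# Show the unit definition, including WorkingDirectory and the flags
# the service is actually started with.
systemctl cat k3s

# Show the effective config file, if one is used. This is the default
# location on most installs; yours may differ.
cat /etc/rancher/k3s/config.yaml
```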
I encountered the same problem after starting k3s by running `install.sh` without errors.
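For context, `install.sh` here presumably refers to the standard k3s install script served from get.k3s.io; a typical invocation from the k3s docs looks like this (server URL and token are placeholders):

```sh
# Install a server node.
curl -sfL https://get.k3s.io | sh -

# Install an agent node, pointing it at the server.
curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
```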
Environmental Info:
K3s Version:
v1.21.4+k3s1 (3e250fd)
go version go1.16.6
Node(s) CPU architecture, OS, and Version:
amd64
Linux gamma 5.4.0-81-generic #91-Ubuntu
Cluster Configuration:
3 nodes: 1 server (master), 2 agents
Describe the bug:
After a reboot, the agent node does not come back to Ready status; it stays NotReady.
Steps To Reproduce:
Reboot an agent node. (There is also another way to reproduce the issue.)
Expected behavior:
The shut-down agent should rejoin the cluster automatically, and its status should be Ready.
Actual behavior:
The shut-down agent rejoins the cluster automatically, but its status stays NotReady.
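Illustrative only (node names are hypothetical): on the server, `kubectl get nodes` would show the rebooted agent stuck in NotReady:

```
$ kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   Ready      control-plane,master   10d   v1.21.4+k3s1
agent1   Ready      <none>                 10d   v1.21.4+k3s1
agent2   NotReady   <none>                 10d   v1.21.4+k3s1
```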
Additional context / logs:
Logs:
When I describe the node or run `systemctl status k3s`, I can see a lot of repeated error logs.
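A hedged sketch of commands for collecting those logs (the node name is a placeholder; on agent machines the journald unit may be `k3s-agent` rather than `k3s`, depending on how the service was installed):

```sh
# Conditions and events for the NotReady node, as seen from the server.
kubectl describe node <agent-node>

# Recent service state and logs on the affected machine.
systemctl status k3s
journalctl -u k3s --no-pager -e
```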
Solutions that did not work:
Restarting `k3s-node` on the agent does not work. Restarting `k3s` on the master does not work.
Solution:
I found one solution to the problem. The folder `/etc/cni/net.d` is empty, so I copy `/var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist` to `/etc/cni/net.d/` and run `systemctl restart k3s-node`. It works and the agent comes back to status Ready.
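For reference, a minimal sketch of that workaround as shell commands, assuming the reporter's service name `k3s-node` (on stock installs the agent unit is often named `k3s-agent` instead):

```sh
# Confirm the CNI config directory is empty after the reboot.
sudo ls /etc/cni/net.d

# Restore the flannel CNI config from k3s's own data directory.
sudo cp /var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist /etc/cni/net.d/

# Restart the agent service; the node should return to Ready.
sudo systemctl restart k3s-node
```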
Similar issues
kubeadm-issue-1031