
Agent status become Not_Ready after reboot #4113

Closed
1 task
ivyxjc opened this issue Sep 29, 2021 · 5 comments
Comments


ivyxjc commented Sep 29, 2021

Environmental Info:
K3s Version:
v1.21.4+k3s1 (3e250fd)
go version go1.16.6

Node(s) CPU architecture, OS, and Version:
amd64
Linux gamma 5.4.0-81-generic #91-Ubuntu

Cluster Configuration:

3 nodes: 1 master, 2 agents

Describe the bug:

Steps To Reproduce:

  • Installed K3s (using Ansible via k3s-ansible, no extra custom args)
  • Shut down one agent
  • Restarted the agent

There is another way to reproduce the issue:

  • Installed K3s (using Ansible via k3s-ansible, no extra custom args)
  • Restarted the k3s service on the master
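The two reproduction paths above can be sketched as shell commands. This is a hedged sketch: the service names (k3s on the master, k3s-node on the agent) follow this report and may be k3s-agent on newer installs; each restart only runs if the corresponding service is active on the current node.

```shell
# Reproduction sketch; only restarts services that actually exist here.
AGENT_SVC=k3s-node    # agent service name from this report (may be k3s-agent)
SERVER_SVC=k3s        # server/master service name

# Path 1: on the agent node, restart (or reboot) the agent
if systemctl is-active --quiet "$AGENT_SVC" 2>/dev/null; then
    sudo systemctl restart "$AGENT_SVC"
fi

# Path 2: on the master, restart the k3s service
if systemctl is-active --quiet "$SERVER_SVC" 2>/dev/null; then
    sudo systemctl restart "$SERVER_SVC"
fi

# Then watch node status from the master; per this issue, the agent
# should return to Ready but instead stays NotReady:
#   kubectl get nodes -w
```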

Expected behavior:

The shut-down agent should rejoin the cluster automatically, and its status should be Ready.

Actual behavior:

The restarted agent rejoins the cluster automatically, but its status stays Not Ready.

Additional context / logs:

Logs

Describing the node (or running systemctl status k3s), I can see many log lines like

"NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

Attempted fixes that did not work

  1. Restarting the k3s-node service on the agent does not work.
  2. Restarting the k3s service on the master does not work.

Solution

I found one way to solve the problem. The directory /etc/cni/net.d is empty, so I copied /var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist to /etc/cni/net.d/ and ran systemctl restart k3s-node. It works, and the agent comes back to the Ready status.
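The workaround above can be scripted. A minimal sketch, assuming the flannel conflist path from this report and the k3s-node service name (k3s-agent on newer installs); it is a no-op if the source file is missing:

```shell
# Copy the generated CNI config into the directory the runtime is
# reading from, then restart the agent service.
SRC=/var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist
DST=/etc/cni/net.d

if [ -f "$SRC" ]; then
    sudo mkdir -p "$DST"
    sudo cp "$SRC" "$DST/"
    sudo systemctl restart k3s-node   # k3s-agent on newer installs
fi

# Verify afterwards from the master; the node should report Ready:
#   kubectl get nodes
```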

Similar issues

kubeadm-issue-1031

Backporting

  • Needs backporting to older releases
@brandond
Member

Is your system for some reason deleting the /etc/cni directory when the node is restarted? K3s does not expect things in /etc/ to be deleted out from under it when restarting.

@ivyxjc
Author

ivyxjc commented Sep 30, 2021

Is your system for some reason deleting the /etc/cni directory when the node is restarted? K3s does not expect things in /etc/ to be deleted out from under it when restarting.

In my experience, there are no files in /etc/cni/net.d; all config files are placed under /var/lib/rancher/k3s/agent/etc/cni/net.d/ automatically after installing k3s, and it works correctly. But it looks like k3s refers to the directory /etc/cni/net.d rather than /var/lib/rancher/k3s/agent/etc/cni/net.d/ when I restart the agent.
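One way to check which of the two directories the runtime is actually reading is to inspect the embedded containerd config on the agent and compare the contents of both candidate paths. A hedged sketch; the config.toml path assumes a default k3s install with the bundled containerd:

```shell
# Show the CNI conf_dir the embedded containerd was configured with,
# then list both candidate directories from this thread.
CFG=/var/lib/rancher/k3s/agent/etc/containerd/config.toml

if [ -f "$CFG" ]; then
    grep -n 'conf_dir' "$CFG"
fi

for d in /var/lib/rancher/k3s/agent/etc/cni/net.d /etc/cni/net.d; do
    echo "== $d =="
    ls -l "$d" 2>/dev/null || echo "(missing)"
done
```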

@brandond
Member

brandond commented Sep 30, 2021

Are you initially starting it with a different working directory or configuration than is used when you restart it? This isn't an issue I've seen anyone run into before so I'm wondering what about your environment or configuration is unique.

@stale

stale bot commented Mar 29, 2022

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label Mar 29, 2022
@stale stale bot closed this as completed Apr 12, 2022
@lidh15

lidh15 commented Aug 16, 2022

I encountered the same problem after starting k3s by running install.sh without errors.
I checked /etc, and /etc/cni was not there. I copied the one from under /var/lib/rancher/k3s/agent/etc and restarted k3s, but it didn't work.
