CrashLoopBackOff / Error on Raspberry Pi 4 #3389
Comments
Just seeing that they're crashlooping doesn't give us much to work with; can you provide more detail? For the not-ready node, what do the k3s logs show?
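For reference, on a standard systemd-based install the relevant logs can usually be pulled as follows (a minimal sketch; the unit names k3s / k3s-agent and the pod name are assumptions based on the default install):

```sh
# On the server node (assuming the systemd unit is named "k3s")
sudo journalctl -u k3s --no-pager --since "1 hour ago"

# On the worker node (assuming the agent unit is named "k3s-agent")
sudo journalctl -u k3s-agent --no-pager --since "1 hour ago"

# Logs from a crashlooping pod, including the previous (crashed) container
kubectl logs -n kube-system <pod-name> --previous
```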
After rebooting, I was able to reproduce it again.
Logs from master node:
Logs from worker node:
It looks like pods on the worker are unable to communicate with services hosted on the server. I see a lot of IPv6 addresses in both the pod logs and the agent logs, but the flannel CNI does not support IPv6 at the moment. Can you try disabling IPv6 on both nodes, and then rebooting?
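If it helps, one way this is commonly done on Debian-based systems such as Raspberry Pi OS is via sysctl; this is just a sketch of that workaround, and the file name below is an arbitrary choice:

```sh
# Disable IPv6 on all interfaces, persisted across reboots (assumed approach)
cat <<'EOF' | sudo tee /etc/sysctl.d/99-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF

# Apply the settings now, then reboot so k3s and flannel come up without IPv6 addresses
sudo sysctl --system
sudo reboot
```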
This is not a solution though, but a workaround: disabling IPv6, which some users may actually need. It looks like flannel's contributors are working on it right now: flannel-io/flannel#1448
This issue is used to track the CrashLoopBackOff error on Raspberry Pi 4, as opposed to CentOS.
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
1 server, 1 agent (both are Raspberry Pi 4)
Describe the bug:
Opening another issue, as suggested in #1019 (comment), to keep track of the pod CrashLoopBackOff.
Steps To Reproduce:
Expected behavior:
Expected both nodes to be ready
Actual behavior:
The worker node was never ready.
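For context, a sketch of how node readiness and the crashlooping pods can be checked from the server node (standard kubectl commands; the node name is a placeholder):

```sh
# Check whether both nodes report Ready
kubectl get nodes -o wide

# Inspect why the worker is NotReady (see the Conditions section)
kubectl describe node <worker-node-name>

# List crashlooping pods across all namespaces
kubectl get pods -A | grep -i crashloop
```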
Additional context / logs:
If I execute the kill-all script and reboot the server with k3s server start, it starts working again, but this is inconvenient.
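For completeness, a sketch of that workaround, assuming the kill-all script is at its default install location and the server is managed by systemd (the reporter started the server binary by hand instead):

```sh
# Stop all k3s processes and clean up containers and network interfaces
sudo /usr/local/bin/k3s-killall.sh

# Reboot the node
sudo reboot

# After the reboot, bring the server back up via systemd
sudo systemctl restart k3s
```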