K3s fails to start after running k3s certificate rotate-ca
#11014
Validated on master using commit 054cec8 | version v1.31

Environment Details:
- Node(s) CPU architecture, OS, and Version:
- Cluster Configuration:
- Files:
- Steps:
- Reproduction of the Issue:
  - Observations:
- Validation of the Issue:
  - Observations:
@pascaliske please open a new issue or discussion and fill out the issue template. Without knowing more about your cluster configuration, and whether or not you have any etcd/database snapshots to recover from, I can't recommend any specific steps.
Environmental Info:
K3s Version:
v1.31.1+k3s1
Node(s) CPU architecture, OS, and Version:
n/a
Cluster Configuration:
n/a
Describe the bug:
After generating updated CA certs and updating the datastore with the
k3s certificate rotate-ca
command, K3s fails to restart with the following error:

Oct 08 18:31:31 server-0 k3s[9638]: time="2024-10-08T18:31:31Z" level=fatal msg="/var/lib/rancher/k3s/server/cred/ipsec.psk, /var/lib/rancher/k3s/server/cred/passwd newer than datastore and could cause a cluster outage. Remove the file(s) from disk and restart to be recreated from datastore."
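For reference, the rotation flow that triggers the failure can be sketched as follows (the certificate path is a placeholder; see the k3s CA rotation docs for how the updated certs are generated):

```shell
# Load the updated CA certs into the datastore
# (the --path value is a placeholder, not from this issue):
k3s certificate rotate-ca --path=/path/to/updated/ca-certs

# Restarting the service then exits fatally with the
# "newer than datastore" error quoted above:
systemctl restart k3s
```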
If the token has not been manually specified in the config file and the files are removed, K3s will start once successfully, but subsequent restarts will fail because the token in the passwd file will have been regenerated and will no longer match the bootstrap data:
Oct 08 23:02:20 systemd-node-1 k3s[6631]: time="2024-10-08T23:02:20Z" level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encrypted with different token"
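As a workaround sketch (not validated against this issue), explicitly pinning the existing token in the server config before rotating should prevent a freshly generated token from diverging from the bootstrap data; the token value below is a placeholder:

```yaml
# /etc/rancher/k3s/config.yaml
token: "<existing-cluster-token>"
```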
Steps To Reproduce:
Expected behavior:
CA certs rotate successfully without causing problems
Actual behavior:
K3s fails to start after the CA rotation, with the errors shown above.
Additional context / logs:
Regression introduced by #10710
This caused e2e tests to fail, but apparently we didn't check e2e results during last month's release cycle.