Bootstrap of HA cluster fails when agent bootstrap token used as server token #7313
Here is the full log for server a:
And server b:
Maybe another race condition, given that both servers logged it?
I think #7215 just isn't working, and now, instead of the servers coming up and overwriting the certs, it's just stuck in this condition. Was the locking mechanism tested with 'old tokens'? I assume 'old tokens' refers to 'short' tokens? https://docs.k3s.io/cli/token#short
Doesn't even bootstrap with a single server:
Seems this is a regression. I went back to v1.26.3+k3s1, which works (well, as long as I only run one initial server).
It looks like for some reason you're using a kubeadm-style bootstrap token for the server token: an id and secret joined by a period. Don't do that; you are likely confusing the token parser. We do support kubeadm-style bootstrap tokens, but not for servers, as covered in the docs. Just use a random alphanumeric string of arbitrary length for the server token.
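To make the distinction concrete, here is a small sketch; the token values shown are made up for illustration, and the commands assume a typical Linux host with openssl installed:

```bash
# kubeadm-style bootstrap token: a short id and a secret joined by a period.
# K3s accepts this format for agents only, not for servers.
#   abcdef.0123456789abcdef        <- made-up example, agent join only

# For the server token, any random alphanumeric string works, e.g.:
openssl rand -hex 16
# then pass the result via --token on the CLI or the `token:` field in config.yaml
```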
@brandond hrmm I got this from the docs here: https://docs.k3s.io/cli/token#k3s-token-generate
I guess these are not bootstrap tokens but join tokens? Some clarification here https://docs.k3s.io/datastore/ha would be great!
See the docs at https://docs.k3s.io/cli/token#token-types which describes the three token types, and says:
The format for
I'll leave this issue open to track having a better error message when a bootstrap-format token is used as the server token.
## Environment Details
Infrastructure

Node(s) CPU architecture, OS, and version:

Cluster Configuration:

Config.yaml:

Existing behavior generates a lot of logs with bad certificate and TLS handshake errors when using a malformed token.
Results: The cluster node rightfully fails to start up with a malformed token, but the reason isn't obvious.

Confirmed that a better error surfaces to the user with the malformed token.

Validation Steps
Results: Expect to observe the following output to stdout from `journalctl -u k3s`:
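A minimal way to pull the relevant startup messages when verifying this (the exact wording of the error depends on the build under test):

```bash
# Show the k3s unit logs and filter for token-related messages.
sudo journalctl -u k3s --no-pager | grep -i token
```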
Environmental Info:
K3s Version: v1.26.4-rc1+k3s1
Node(s) CPU architecture, OS, and Version:
Linux ip-10-10-190-56 5.10.0-21-cloud-arm64 #1 SMP Debian 5.10.162-1 (2023-01-21) aarch64 GNU/Linux (AWS Graviton)
Cluster Configuration:
2 servers, 1 agent
Describe the bug:
When starting the servers using a MySQL (MariaDB) datastore endpoint with an empty database, bootstrapping fails with:
Steps To Reproduce:
...where k3s_token is generated by this TF snippet:
and datastore_endpoint is:
"mysql://${aws_db_instance.k3s.username}:${random_password.k3s_password.result}@tcp(${aws_db_instance.k3s.endpoint})/${aws_db_instance.k3s.db_name}"
Expected behavior:
Bootstrap working
Actual behavior:
Bootstrap not working
Additional context / logs:
Here is the DB content. It looks like something created the bootstrap key with a NULL value. It would probably make sense to make this field non-null to catch the error earlier:
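For anyone reproducing this, one way to inspect that row directly is to query the kine table; this sketch assumes kine's default schema (a table named `kine` with `name` and `value` columns) and uses placeholder connection details:

```bash
# Inspect the bootstrap row that k3s wrote to the datastore.
# Assumes kine's default schema; host, user, and database name are placeholders.
mysql -h k3s-db.example.internal -u k3s -p -D k3s \
  -e "SELECT name, LENGTH(value) AS value_len FROM kine WHERE name LIKE '/bootstrap/%';"
```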