Describe the bug
Task 'Join to Kubernetes cluster' may fail when an Ansible vault already exists.
How to reproduce
Repro steps:
1. Create cluster (build directory: offline-rhel-1)
2. Run kubeadm reset on the master, or restore the VM with the master
3. Run the kubernetes_master playbook: ansible-playbook -i /workspaces/epiphany/core/src/epicli/clusters/build/offline-rhel-1/inventory --vault-password-file /home/vscode/.epicli/vault.cfg /workspaces/epiphany/core/src/epicli/clusters/build/offline-rhel-1/ansible/kubernetes_master.yml
4. Run the kubernetes_node playbook (a cross-check is sketched below)
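Before step 4, one way to cross-check the values the node will use (not part of the original repro) is to print a fresh join command on the master and compare its --discovery-token-ca-cert-hash with the hash templated into /etc/kubeadm/kubeadm-join-node.yml on the node:

kubeadm token create --print-join-command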
Analysis:
[root@vm2 ~]# cat /etc/kubeadm/kubeadm-join-node.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.10.11:6443
    token: ev00qa.ow2vxl8aroii1vb6
    caCertHashes:
      - sha256:503e9331e74bbdf9f270f986c3ca97e46dc493380ec13d5630d6688daf4d0918
nodeRegistration:
  kubeletExtraArgs:
    enable-controller-attach-detach: "false"
    node-labels: node-type=epiphany
[root@vm2 ~]# kubeadm join --token=ev00qa.ow2vxl8aroii1vb6 --discovery-token-ca-cert-hash sha256:503e9331e74bbdf9f270f986c3ca97e46dc493380ec13d5630d6688daf4d0918 --v=9 192.168.10.11:6443
[...]
I0417 20:04:40.095847 12359 token.go:191] [discovery] Failed to connect to API Server "192.168.10.11:6443": cluster CA found in cluster-info configmap is invalid: none of the public keys "sha256:f461f70f03163054e12db5b991c7bc25a97ac0060ce304c9211b85b274a0d689" are pinned
Hash of /etc/kubernetes/pki/ca.crt after kubeadm init:
f461f70f03163054e12db5b991c7bc25a97ac0060ce304c9211b85b274a0d689
Hash of /etc/kubernetes/pki/ca.crt after running tasks from copy-kubernetes-pki.yml:
503e9331e74bbdf9f270f986c3ca97e46dc493380ec13d5630d6688daf4d0918
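For reference, the pinned value is the SHA-256 digest of the CA's DER-encoded public key (not of the certificate file as a whole); it can be recomputed on the master with the standard openssl pipeline from the kubeadm documentation:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'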
vscode@580e9522a24d:/workspaces/epiphany/core/src/epicli$ date
Tue Apr 21 09:31:09 UTC 2020
vscode@580e9522a24d:/workspaces/epiphany/core/src/epicli$ date -r /workspaces/epiphany/core/src/epicli/clusters/build/offline-rhel-1/vault/kubernetes-pki.yml
Tue Apr 7 17:15:09 UTC 2020
This suggests the root cause: the vault (created on Apr 7) still holds the PKI of the original cluster, so copy-kubernetes-pki.yml restores the old CA (503e93…) over the one freshly generated by kubeadm init (f461f7…), while the cluster-info ConfigMap still advertises the init-time CA; the hash pinned in the join configuration therefore no longer matches and discovery fails.
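To confirm which CA the API server actually advertises to joining nodes (the one the error message refers to), the cluster-info ConfigMap can be inspected and its public key hashed on the master; this check is not part of the original report:

kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' \
  | grep certificate-authority-data | awk '{print $2}' | base64 -d \
  | openssl x509 -pubkey -noout \
  | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'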
Expected behavior
Task 'Join to Kubernetes cluster' should join the cluster without any problems when an Ansible vault already exists.
Environment
All
Additional context
#2669
DoD checklist