[BUG] Kubectl client outside of HA/multi-master Epiphany cluster fails to connect to server with invalid certificate #1520
Example error message is: …
Thank you for reporting the issue, @ks4225! I've checked that the kubeconfig handling does indeed differ between non-HA and HA clusters. I believe two things need to be done to fix the problem:
All this should be done during the … As a temporary workaround, some kind of TCP proxy can be used, for example:

```
$ ssh -L 3446:localhost:3446 [email protected] -N
$ kubectl --kubeconfig admin.conf get nodes,pods -A
NAME        STATUS   ROLES    AGE     VERSION
node/x1a1   Ready    master   58m     v1.18.6
node/x1a2   Ready    master   10m     v1.18.6
node/x1a3   Ready    master   9m12s   v1.18.6
node/x1b1   Ready    <none>   56m     v1.18.6

NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            pod/coredns-74c98659f4-5c6tj                 1/1     Running   0          57m
kube-system            pod/coredns-74c98659f4-hc7fw                 1/1     Running   0          57m
kube-system            pod/etcd-x1a1                                1/1     Running   0          58m
kube-system            pod/etcd-x1a2                                1/1     Running   0          10m
kube-system            pod/etcd-x1a3                                1/1     Running   0          9m1s
kube-system            pod/kube-apiserver-x1a1                      1/1     Running   1          58m
kube-system            pod/kube-apiserver-x1a2                      1/1     Running   0          10m
kube-system            pod/kube-apiserver-x1a3                      1/1     Running   0          9m1s
kube-system            pod/kube-controller-manager-x1a1             1/1     Running   2          58m
kube-system            pod/kube-controller-manager-x1a2             1/1     Running   0          10m
kube-system            pod/kube-controller-manager-x1a3             1/1     Running   0          9m1s
kube-system            pod/kube-flannel-ds-amd64-5cmmr              1/1     Running   0          9m12s
kube-system            pod/kube-flannel-ds-amd64-9wk8s              1/1     Running   0          58m
kube-system            pod/kube-flannel-ds-amd64-btbmt              1/1     Running   1          10m
kube-system            pod/kube-flannel-ds-amd64-j7s4c              1/1     Running   0          56m
kube-system            pod/kube-proxy-5zvck                         1/1     Running   1          56m
kube-system            pod/kube-proxy-nfgld                         1/1     Running   1          58m
kube-system            pod/kube-proxy-q5rnd                         1/1     Running   0          9m12s
kube-system            pod/kube-proxy-ww4tf                         1/1     Running   0          10m
kube-system            pod/kube-scheduler-x1a1                      1/1     Running   2          58m
kube-system            pod/kube-scheduler-x1a2                      1/1     Running   0          10m
kube-system            pod/kube-scheduler-x1a3                      1/1     Running   0          9m1s
kubernetes-dashboard   pod/dashboard-metrics-scraper-667d84869b-tv8d2   1/1   Running   0        57m
kubernetes-dashboard   pod/kubernetes-dashboard-78fbf9d49c-qs7nr    1/1     Running   0          57m
```

It's not very convenient, though :(
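A certificate error like the one reported here usually means the address the client connects to is not covered by the apiserver certificate's subject alternative names (SANs). One way to see which names a certificate covers is to dump its `subjectAltName` extension. A minimal, self-contained sketch using a throwaway self-signed certificate with hypothetical names (requires OpenSSL 1.1.1+ for `-addext`/`-ext`); this is an illustration, not part of the original thread:

```shell
# Create a throwaway self-signed cert with a known SAN list (hypothetical names).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:localhost,DNS:kubernetes,IP:127.0.0.1" 2>/dev/null

# Print the SAN extension: the client's target address must appear here.
san=$(openssl x509 -in "$tmpdir/cert.pem" -noout -ext subjectAltName)
echo "$san"
```

Against a live cluster, you would instead fetch the certificate actually served on the apiserver/HAProxy port with `openssl s_client -connect <address>:<port>` and inspect that.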
Hello @ks4225,
Thank you for the update, @tolikt. We have actually been using …
@przemyslavic @atsikham why is it back in the pipeline? Could you leave a comment?
I did some testing by following the instructions posted here to reproduce the issue. I deployed an HA cluster with public IP addresses on Azure, then logged into one machine (other than a master/node), copied admin.conf from one of the masters, and replaced …
Reported an issue: [BUG] Duplicated SANs for K8s apiserver certificate #1587
The fix has been tested. Now there should be no issues with running …
**Describe the bug**

On an HA / multi-master cluster, issuing `kubectl` commands from a machine outside the cluster (e.g. a CI agent) will sometimes fail with a certificate error. The thought is that the HAProxy on the k8s master machines ends up routing the `kubectl` traffic in a way that mismatches the config on the external machine.

**To Reproduce**

Steps to reproduce the behavior:

1. … (`localhost` needs to be replaced in the kube config)
2. Issue `kubectl` commands from the external machine, which will fail periodically (depending on how traffic is routed)
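The "replace `localhost`" step above can be sketched as follows. The sample kubeconfig fragment and the master address `10.0.0.5` are hypothetical; port 3446 is the HAProxy port used elsewhere in this thread, and GNU `sed` is assumed for `-i`:

```shell
# Hypothetical kubeconfig fragment as copied off a master (server points at localhost).
cat > admin.conf <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://localhost:3446
  name: kubernetes
EOF

# Point the client at a reachable master address instead of localhost.
sed -i 's|server: https://localhost:3446|server: https://10.0.0.5:3446|' admin.conf

# The server line now targets the master.
grep 'server:' admin.conf
```

Note that this only fixes the address the client dials; the apiserver certificate must still list that address among its SANs, which is the crux of this issue.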
**Expected behavior**

It should be possible to issue `kubectl` commands from the external machine that work consistently.

**Config files**
Key aspects of the config are:
**OS (please complete the following information):**

**Cloud Environment (please complete the following information):**
**Additional context**

Add any other context about the problem here.
cc @jsmith085 @sunshine69