etcd, rejected connection, addrConn.createTransport failed #8512
Comments
👍 Seeing this as well: `certificate specifies an incompatible key usage", ServerName "`. Is this metrics/health related? It comes every couple of minutes, like a health check or metrics scrape that is using http and not https.
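One way to check whether the certificate involved actually carries the key usages the handshake expects is to inspect it with openssl. This is only a diagnostic sketch; the certificate path below is illustrative, not a path from this thread:

```shell
# Print the Extended Key Usage of a certificate (path is illustrative).
# A cert that is presented both as a server cert and as a client cert
# should list both "TLS Web Server Authentication" and
# "TLS Web Client Authentication" here.
openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -noout -text \
  | grep -A1 "Extended Key Usage"
```

If only one of the two usages is listed, a peer validating the cert for the other role will reject it with an "incompatible key usage" error.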
We don't see this with etcd 3.3.13 and k8s 1.17.4.
This is related to, and maybe the fix is the same as, openshift/kubecsr@aad75d3#diff-654ba8df3be94fe94c9404a7727d1fa2.

I was able to exec into the etcd-main pod and reproduce the issue:

```
./etcdctl --endpoints 127.0.0.1:4001 \
  --cacert=/rootfs/mnt/master-vol-0963c543ee9f70a6d/pki/etcd-cluster-token-etcd/clients/ca.crt \
  --cert=/rootfs/mnt/master-vol-0963c543ee9f70a6d/pki/etcd-cluster-token-etcd/clients/server.crt \
  --key=/rootfs/mnt/master-vol-0963c543ee9f70a6d/pki/etcd-cluster-token-etcd/clients/server.key \
  endpoint status
{"level":"warn","ts":"2020-03-13T15:54:44.144Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///127.0.0.1:4001","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection error: desc = \"transport: authentication handshake failed: remote error: tls: bad certificate\""}
```

Adding the client usage to the server cert should fix the issue. I'm not sure which PR in etcd introduced this warning.
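The suggested fix above, adding client usage to the server cert, can be sketched with openssl. This is a minimal illustration only, assuming OpenSSL 1.1.1 or newer for `-addext`; the file names and subject are made up, and a real cluster cert must of course be signed by the etcd CA and carry the proper SANs rather than be self-signed:

```shell
# Issue a self-signed demo cert carrying BOTH serverAuth and clientAuth
# EKUs, so the same cert can be presented as a server certificate and
# as a client certificate (names below are illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/CN=etcd-demo" \
  -addext "extendedKeyUsage = serverAuth, clientAuth"

# Verify that both usages are present in the issued cert.
openssl x509 -in server.crt -noout -ext extendedKeyUsage
```

With only `serverAuth` present, using the same cert as a client cert (as the etcdctl invocation above does) makes the server reject it during the handshake.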
With etcd 3.3.13, this issue is still there at startup:

```
WARNING: 2020/03/13 02:56:47 Failed to dial 0.0.0.0:4001: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.
```

We only see the warning once, at startup, however.
I'm seeing this as well on a new cluster spin-up of Kubernetes 1.16. @linecolumn, did you find a fix? Is that why you closed this? It would be nice to have something to hold us over until a stable version of Kops 1.17.4 is released.