Actual Behavior
Images can be pulled fine using Moby, both from our internally hosted registry and externally.
When starting Kubernetes v1.25.3, we see TLS handshake errors when connecting to the API server.
Steps to Reproduce
Download Rancher Desktop
Set up proxies in init.d before starting Docker
Launch Kubernetes via the Rancher Desktop UI
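For reference, the proxy setup in the second step can be sketched as a small script run before the Docker daemon starts. This is illustrative only: the helper name, target file, and proxy URL are placeholders, not the reporter's actual values.

```shell
#!/bin/sh
# Sketch: append proxy environment variables to a Docker env file so the
# daemon picks them up on start. All values below are placeholders.
write_docker_proxy_env() {
    # $1: target env file (e.g. /etc/default/docker), $2: proxy URL
    cat >> "$1" <<EOF
export HTTP_PROXY="$2"
export HTTPS_PROXY="$2"
export NO_PROXY="localhost,127.0.0.1"
EOF
}

# Example (commented out; the real file requires root):
# write_docker_proxy_env /etc/default/docker "http://proxy.example.com:3128"
```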
Result
background.log
2023-01-13T13:40:39.854Z: Kubernetes was unable to start: Error: Client network socket disconnected before secure TLS connection was established
at connResetException (node:internal/errors:691:14)
at TLSSocket.onConnectEnd (node:_tls_wrap:1585:19)
at TLSSocket.emit (node:events:402:35)
at endReadableNT (node:internal/streams/readable:1343:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
code: 'ECONNRESET',
path: null,
host: '172.28.91.117',
port: '6443',
localAddress: undefined
}
k8s.log
2023-01-13T13:33:11.006Z: Updating release version cache with 122 items in cache
2023-01-13T13:33:14.264Z: Found old version v1.26.0+k3s2, stopping.
2023-01-13T13:33:14.266Z: Got 122 versions.
2023-01-13T13:33:17.086Z: Ensuring images available for K3s 1.25.3
2023-01-13T13:33:23.097Z: Cache at C:\Users\foo\AppData\Local\rancher-desktop\cache\k3s is valid.
2023-01-13T13:33:57.966Z: Waiting for K3s server to be ready on port 6443...
2023-01-13T13:34:21.536Z: Error: Client network socket disconnected before secure TLS connection was established
2023-01-13T13:34:22.479Z: Updating kubeconfig C:\Users\foo\.kube\config...
2023-01-13T13:39:36.540Z: Waited more than 300 secs for kubernetes to fully start up. Giving up.
2023-01-13T13:40:39.754Z: Error priming kuberlr: Error: C:\Users\foo\AppData\Local\Programs\Rancher Desktop\resources\resources\win32\bin\kubectl.exe exited with code 1
2023-01-13T13:40:39.754Z: Output from kuberlr:
ex.stdout: [
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
],
ex.stderr: [I0113 14:39:41.919631 31820 versioner.go:56] Remote kubernetes server unreachable
Unable to connect to the server: EOF
]
2023-01-13T13:40:39.754Z: Failed to match a kuberlr network access issue.
k3s.log
time="2023-01-13T13:34:06Z" level=info msg="Connecting to proxy" url="wss://172.28.91.117:6443/v1-k3s/connect"
time="2023-01-13T13:34:06Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1673592565: notBefore=2023-01-13 06:49:25 +0000 UTC notAfter=2024-01-13 13:34:06 +0000 UTC"
time="2023-01-13T13:34:06Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="x509: certificate is valid for 10.43.0.1, 127.0.0.1, 172.28.93.150, 172.28.94.92, ::1, not 172.28.91.117"
time="2023-01-13T13:34:06Z" level=error msg="Remotedialer proxy error" error="x509: certificate is valid for 10.43.0.1, 127.0.0.1, 172.28.93.150, 172.28.94.92, ::1, not 172.28.91.117"
time="2023-01-13T13:34:06Z" level=info msg="Updating TLS secret for kube-system/k3s-serving (count: 12): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.28.91.117:172.28.91.117 listener.cattle.io/cn-172.28.93.150:172.28.93.150 listener.cattle.io/cn-172.28.94.92:172.28.94.92 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-parmd2233232:parmd2233232 listener.cattle.io/fingerprint:SHA1=627979D9F0A1695DFED6ECB123756C11AC366C3B]"
time="2023-01-13T13:34:06Z" level=info msg="Active TLS secret kube-system/k3s-serving (ver=2091) (count 12): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.28.91.117:172.28.91.117 listener.cattle.io/cn-172.28.93.150:172.28.93.150 listener.cattle.io/cn-172.28.94.92:172.28.94.92 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-parmd2233232:parmd2233232 listener.cattle.io/fingerprint:SHA1=627979D9F0A1695DFED6ECB123756C11AC366C3B]"
time="2023-01-13T13:34:19Z" level=info msg="Connecting to proxy" url="wss://172.28.94.92:6443/v1-k3s/connect"
I0113 13:34:20.324416 907 trace.go:205] Trace[1481556806]: "Proxy via http_connect protocol over tcp" address:10.42.0.17:10250 (13-Jan-2023 13:34:17.208) (total time: 3115ms):
Trace[1481556806]: [3.115638962s] [3.115638962s] END
I0113 13:34:20.324419 907 trace.go:205] Trace[319025449]: "Proxy via http_connect protocol over tcp" address:10.42.0.17:10250 (13-Jan-2023 13:34:17.208) (total time: 3115ms):
Trace[319025449]: [3.115682931s] [3.115682931s] END
I0113 13:34:20.324417 907 trace.go:205] Trace[1161951928]: "Proxy via http_connect protocol over tcp" address:10.42.0.17:10250 (13-Jan-2023 13:34:17.208) (total time: 3115ms):
Trace[1161951928]: [3.115553385s] [3.115553385s] END
I0113 13:34:20.324428 907 trace.go:205] Trace[1878271579]: "Proxy via http_connect protocol over tcp" address:10.42.0.17:10250 (13-Jan-2023 13:34:17.208) (total time: 3115ms):
Trace[1878271579]: [3.115653726s] [3.115653726s] END
I0113 13:34:20.324448 907 trace.go:205] Trace[551097359]: "Proxy via http_connect protocol over tcp" address:10.42.0.17:10250 (13-Jan-2023 13:34:17.208) (total time: 3115ms):
Trace[551097359]: [3.115746432s] [3.115746432s] END
E0113 13:34:20.326393 907 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.42.0.17:10250/apis/metrics.k8s.io/v1beta1: Get "https://10.42.0.17:10250/apis/metrics.k8s.io/v1beta1": proxy error from 127.0.0.1:6443 while dialing 10.42.0.17:10250, code 503: 503 Service Unavailable
W0113 13:34:21.330618 907 handler_proxy.go:105] no RequestInfo found in the context
W0113 13:34:21.330620 907 handler_proxy.go:105] no RequestInfo found in the context
E0113 13:34:21.332308 907 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0113 13:34:21.333826 907 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0113 13:34:21.332957 907 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0113 13:34:21.335966 907 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0113 13:34:21.635902 907 lease.go:250] Resetting endpoints for master service "kubernetes" to [172.28.91.117]
time="2023-01-13T13:34:21Z" level=info msg="Stopped tunnel to 172.28.94.92:6443"
time="2023-01-13T13:34:21Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="dial tcp 172.28.94.92:6443: operation was canceled"
time="2023-01-13T13:34:21Z" level=error msg="Remotedialer proxy error" error="dial tcp 172.28.94.92:6443: operation was canceled"
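The x509 errors above say the serving certificate covers 10.43.0.1, 127.0.0.1, 172.28.93.150, 172.28.94.92, and ::1, but not 172.28.91.117. One way to see which names a certificate actually covers is to inspect its Subject Alternative Names with openssl; the snippet below is a hedged sketch that demonstrates the check on a throwaway self-signed certificate (the live-server variant using `openssl s_client` is shown in the comment):

```shell
#!/bin/sh
# Against a live API server, the SANs could be inspected with something like:
#   openssl s_client -connect 172.28.91.117:6443 </dev/null 2>/dev/null \
#     | openssl x509 -noout -ext subjectAltName
# Offline demonstration: generate a throwaway cert with SANs similar to the
# ones in the log, then print its Subject Alternative Name extension.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" \
    -subj "/CN=k3s" \
    -addext "subjectAltName=IP:10.43.0.1,IP:127.0.0.1,DNS:localhost" 2>/dev/null
openssl x509 -in "$tmpdir/cert.pem" -noout -ext subjectAltName
```

If the client's address is missing from that list, the TLS handshake fails exactly as in the log above.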
@rumstead, this looks like a duplicate of other issues you have commented on, for example #3428.
However, did you try configuring proxy settings for k3s as described on this page? You can use provisioning scripts to add proxy settings to /etc/conf.d/k3s.
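A provisioning script of that shape might look like the following. This is a sketch only: the helper name, proxy URL, and NO_PROXY list are illustrative placeholders, not the reporter's actual configuration.

```shell
#!/bin/sh
# Sketch of a provisioning step that adds proxy settings for k3s by
# appending environment exports to its conf file. Placeholder values only.
add_k3s_proxy() {
    # $1: k3s conf file (normally /etc/conf.d/k3s)
    # $2: proxy URL, $3: comma-separated no-proxy list
    cat >> "$1" <<EOF
export HTTP_PROXY="$2"
export HTTPS_PROXY="$2"
export NO_PROXY="$3"
EOF
}

# Typical invocation (commented out; needs root inside the distro):
# add_k3s_proxy /etc/conf.d/k3s "http://proxy.example.com:3128" \
#     "localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12"
```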
Yes, I was talking in Slack and it was suggested that I open a ticket. I did set up proxies via a provisioning script; the steps I took are listed in the Windows User Only section.
The issue is with Zscaler's transparent proxy. It doesn't respect settings like "no_proxy" and proxies requests out over the 192.x.x.x IP. Going to close this issue.
Attached log files:
background.log
cri-dockerd.log
docker.log
k3s.log
k8s.log
wsl-helper.log
wsl.log
Expected Behavior
Kubernetes is able to start up
Additional Information
The IP differs from the one in the logs because the openssl output was captured after a couple of restarts of Rancher Desktop.
Rancher Desktop Version
1.7.0
Rancher Desktop K8s Version
v1.25.3
Which container engine are you using?
moby (docker cli)
What operating system are you using?
Windows
Operating System / Build Version
Windows 10
What CPU architecture are you using?
x64
Linux only: what package format did you use to install Rancher Desktop?
None
Windows User Only
Zscaler
Setting proxies via a provisioning script for the container runtime as described here.