[BUG] kubectl won't connect #423

Open

Ka0o0 opened this issue Dec 7, 2020 · 7 comments
Assignees: iwilltry42
Labels: help wanted · not a bug (Luckily this is not a bug with k3d after all ¯\_(ツ)_/¯) · priority/low
Milestone: Backlog

Comments


Ka0o0 commented Dec 7, 2020

Hi,
I was trying to create a new cluster and connect to it via kubectl. I'm running on Fedora, and I first had to figure out that I needed to specify --api-port 127.0.0.1:6443, because Fedora would not allow any connection to 0.0.0.0 (edit: found #339, but the issue still exists).
I'm still not sure what I'm doing wrong. Any suggestions?

Edit:
Here is the output of docker logs for the server container: server_log.txt

What did you do

  • How was the cluster created?

    • k3d cluster create -a 1 --api-port 127.0.0.1:6443
  • What did you do afterwards?

    • k3d kubeconfig merge k3s-default --switch-context --overwrite
    • kubectl get pods -A

Here, kubectl get pods -A will time out with the following error:

[screenshot: kubectl get pods -A failing with a connection timeout]

What did you expect to happen

See the output of kubectl.

Screenshots or terminal output

[screenshots: terminal output of the commands above]

➜  ~ sudo netstat -lntp | grep 6443

tcp        0      0 127.0.0.1:6443          0.0.0.0:*               LISTEN      22793/docker-proxy

Which OS & Architecture

  • Fedora 33, AMD64
  • Running behind a proxy
➜  ~ cat .kube/config   

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTURjek16VXlNelF3SGhjTk1qQXhNakEzTVRBd01ETTBXaGNOTXpBeE1qQTFNVEF3TURNMApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTURjek16VXlNelF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSVTFiWlI3Uk8zSzhObGQyWHk0UGNGK2hBcEw1L3Z3Q2dENjlGZlVkOG8KbS9VNkwrdXhJNERKblNhSW1nN05EMjdNcmllWE5LSlIrT3k5L2N0T1dwdS9vMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXhuMlplOFFobUJBU0VUQ3ZBSlJaCm9Ramx5TVF3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnR3VGclh3WE5WaEJaTWN2SHBNbFgreTdLWDBEMUdwcnkKa3hMUXVqZENsVUFDSVFDcm5FazVUdnh3OUhMYlVlaWZZN0xuUCt6b0RuNS9MdDQwalRuUzM4enlLZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://127.0.0.1:6443
  name: k3d-k3s-default
contexts:
- context:
    cluster: k3d-k3s-default
    user: admin@k3d-k3s-default
  name: k3d-k3s-default
current-context: k3d-k3s-default
kind: Config
preferences: {}
users:
- name: admin@k3d-k3s-default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJVDJFV0k2N1d3U3N3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOakEzTXpNMU1qTTBNQjRYRFRJd01USXdOekV3TURBek5Gb1hEVEl4TVRJdwpOekV3TURBek5Gb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJQZitwUUJrZUFJaXJQUi8KbDk3T0pjb200S0VsVkVGb2VnNzU1TXVwMlR1ZG1aem5lNjN5Vk5DUWZHUWFXaWV6TDNZUHF3RUhTKzJ0TmxIbgp4c2JrdDBtalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUlZ4blJNYXlUQWV0WDFCc0hJYTlkWHdaZmhXREFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlCcjBTOVBoOSsyVUVOc3BQY296SURUNFgrOGFuMWlRREtPY2diZWtLMVBkd0loQUxUdmFhbFMwTlYvOE9XdgplbDlpVlVKWlFPbmd3WFdNVXJvcEhXVXJMSzNuCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUyTURjek16VXlNelF3SGhjTk1qQXhNakEzTVRBd01ETTBXaGNOTXpBeE1qQTFNVEF3TURNMApXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUyTURjek16VXlNelF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUV1BUVmNCWUVPaktqWGpHbTFjK2VIOWZuRDRnYWptZnFYcW03UjJXbDAKdUp4M1dPS2lPNVhma0xUaG1XVWp1WEFMUGIvRmVwUy92aGcyUEFhSVZ2bmZvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVZjWjBUR3Nrd0hyVjlRYkJ5R3ZYClY4R1g0Vmd3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU9xTjJwK3BqNjhoK1NtdmxISjZUaGFYTEs2R2hNVHQKZkF4TDNOMWNSZ2JPQWlBMDRzOTdsOFhOL04vNFpOMGZ5Skl5UHdBRW82aW03cm1nL1FkZytRK3VoZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUxhelArM2IreGlJQ3VSYUx3em9yMG1LR2trTW1XL2toU3RKcldRb1ZYU09vQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFOS82bEFHUjRBaUtzOUgrWDNzNGx5aWJnb1NWVVFXaDZEdm5reTZuWk81MlpuT2Q3cmZKVQowSkI4WkJwYUo3TXZkZytyQVFkTDdhMDJVZWZHeHVTM1NRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=

Which version of k3d

k3d version v3.4.0
k3s version v1.19.4-k3s1 (default)

Which version of docker

➜  ~ docker version

Client:
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.15.1
 Git commit:        4484c46
 Built:             Fri Oct  2 19:31:30 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.15.1
  Git commit:       4484c46
  Built:            Fri Oct  2 00:00:00 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.1
  GitCommit:        
 runc:
  Version:          1.0.0-rc92+dev
  GitCommit:        c9a9ce0286785bef3f3c3c87cd1232e535a03e15
 docker-init:
  Version:          0.18.0
  GitCommit:        
➜  ~ kubectl version --client=true

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Ka0o0 added the bug label on Dec 7, 2020

Ka0o0 commented Dec 7, 2020

After looking into the logs, I saw that k3s is not able to pull the images. It seems that the server container is not using the proxy settings from my Docker configuration.

➜  ~ docker exec -it ad4b0e18c8e6 sh

/ # echo $http_proxy

I'll now try to get my proxy settings into the server container. There are two possibilities: either specify the env variables via k3d, or find out why my Docker configuration is being ignored. Maybe k3d runs docker commands as another user.

Edit:
As suspected, the HTTP_PROXY env variables from my ~/.docker/config.json are not applied:

➜ ~ docker inspect ad4b0e18c8e6

[
    {
        "Id": "ad4b0e18c8e62cd134145ba9f25020de92bfd3fa97a13cdb722db99717fe9603",
        "Created": "2020-12-07T11:14:36.634491397Z",
        "Path": "/bin/k3s",
        "Args": [
            "server",
            "--tls-san",
            "0.0.0.0"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 5184,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2020-12-07T11:14:37.103113786Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:636f2028a1fb0e79ac2167ad0aae4336150fb85965d91fe6a5357a73985bf80e",
        "ResolvConfPath": "/var/lib/docker/containers/ad4b0e18c8e62cd134145ba9f25020de92bfd3fa97a13cdb722db99717fe9603/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/ad4b0e18c8e62cd134145ba9f25020de92bfd3fa97a13cdb722db99717fe9603/hostname",
        "HostsPath": "/var/lib/docker/containers/ad4b0e18c8e62cd134145ba9f25020de92bfd3fa97a13cdb722db99717fe9603/hosts",
        "LogPath": "",
        "Name": "/k3d-k3s-default-server-0",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "system_u:object_r:container_file_t:s0:c33,c361",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "k3d-k3s-default-images:/k3d/images"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "journald",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "unless-stopped",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Capabilities": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "label=disable"
            ],
            "Tmpfs": {
                "/run": "",
                "/var/run": ""
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": [
                {
                    "Name": "nofile",
                    "Hard": 1024,
                    "Soft": 1024
                }
            ],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": null,
            "ReadonlyPaths": null,
            "Init": true
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/3a93484582e2cd95fbcca027202ddcc5002b9e7e91630f341fafc10e16b65727-init/diff:/var/lib/docker/overlay2/75cbdf764eccff8bc74cec9a90e0b45cd1108bb89665a2e977d608af8b4e38e5/diff:/var/lib/docker/overlay2/8185b36beb558f0ae6f30631c6453e3dfff2a215c4d501de473e964feeb3f7dc/diff:/var/lib/docker/overlay2/113313e87976727e896e3d26a42a83257156d93f85291e438f9ae371d6608a63/diff",
                "MergedDir": "/var/lib/docker/overlay2/3a93484582e2cd95fbcca027202ddcc5002b9e7e91630f341fafc10e16b65727/merged",
                "UpperDir": "/var/lib/docker/overlay2/3a93484582e2cd95fbcca027202ddcc5002b9e7e91630f341fafc10e16b65727/diff",
                "WorkDir": "/var/lib/docker/overlay2/3a93484582e2cd95fbcca027202ddcc5002b9e7e91630f341fafc10e16b65727/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "k3d-k3s-default-images",
                "Source": "/var/lib/docker/volumes/k3d-k3s-default-images/_data",
                "Destination": "/k3d/images",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "3bd595b799ab8335d498deca0b52a81791150dd0b83a49b45963729094abc57e",
                "Source": "/var/lib/docker/volumes/3bd595b799ab8335d498deca0b52a81791150dd0b83a49b45963729094abc57e/_data",
                "Destination": "/var/lib/cni",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "e3ba91e7e27cd203814f993b6e3c9e6f2c3e1f6828322f778530215134cb7891",
                "Source": "/var/lib/docker/volumes/e3ba91e7e27cd203814f993b6e3c9e6f2c3e1f6828322f778530215134cb7891/_data",
                "Destination": "/var/lib/kubelet",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "067888a1a186930f43db2ad97a17c6f37b392650e368390bf49e012ede9ea6a9",
                "Source": "/var/lib/docker/volumes/067888a1a186930f43db2ad97a17c6f37b392650e368390bf49e012ede9ea6a9/_data",
                "Destination": "/var/lib/rancher/k3s",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "bac2cb797d072cb3d6ab0b4056640a82b322cd925619bc6b58e78d64e4977972",
                "Source": "/var/lib/docker/volumes/bac2cb797d072cb3d6ab0b4056640a82b322cd925619bc6b58e78d64e4977972/_data",
                "Destination": "/var/log",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "k3d-k3s-default-server-0",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "K3S_TOKEN=iQVQuhThRfglbyMlVaKx",
                "K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin/aux"
            ],
            "Cmd": [
                "server",
                "--tls-san",
                "0.0.0.0"
            ],
            "Image": "docker.io/rancher/k3s:v1.19.4-k3s1",
            "Volumes": {
                "/var/lib/cni": {},
                "/var/lib/kubelet": {},
                "/var/lib/rancher/k3s": {},
                "/var/log": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/bin/k3s"
            ],
            "OnBuild": null,
            "Labels": {
                "app": "k3d",
                "k3d.cluster": "k3s-default",
                "k3d.cluster.imageVolume": "k3d-k3s-default-images",
                "k3d.cluster.network": "245c252189899a3776faeaf98c7abdc72ddafd0d6abeb3ec2c54c83d97c7efca",
                "k3d.cluster.network.external": "false",
                "k3d.cluster.token": "iQVQuhThRfglbyMlVaKx",
                "k3d.cluster.url": "https://k3d-k3s-default-server-0:6443",
                "k3d.role": "server",
                "k3d.server.api.host": "0.0.0.0",
                "k3d.server.api.hostIP": "0.0.0.0",
                "k3d.server.api.port": "37977",
                "org.label-schema.build-date": "2020-11-18T22:13:15Z",
                "org.label-schema.schema-version": "1.0",
                "org.label-schema.vcs-ref": "2532c10faad43e2b6e728fdcc01662dc13d37764",
                "org.label-schema.vcs-url": "https://github.com/rancher/k3s.git"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "7313443afb40ee5c209d8572aaefd0e5c68da7cee7dfcc71532d37f1a0fdff1a",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/7313443afb40",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "k3d-k3s-default": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "ad4b0e18c8e6"
                    ],
                    "NetworkID": "245c252189899a3776faeaf98c7abdc72ddafd0d6abeb3ec2c54c83d97c7efca",
                    "EndpointID": "66bb8d71375482ac9e7d3c858f0d9a540a25e426426c0050e80b37a7380f2afb",
                    "Gateway": "172.19.0.1",
                    "IPAddress": "172.19.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:13:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]
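
Note that the Env array above indeed contains no proxy variables. For context: Docker's client-side proxy settings live under the proxies key of ~/.docker/config.json and, as far as I know, are injected into containers by the docker CLI itself, so a tool like k3d that talks to the Docker Engine API directly never picks them up. A minimal example of that section, with a placeholder proxy URL:

{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.internal:3128",
      "httpsProxy": "http://proxy.example.internal:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}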


Ka0o0 commented Dec 7, 2020

Okay, after setting the proxy settings via k3d cluster create -e HTTP_PROXY=xxx, I can see in my proxy's logs that images are being pulled. But I was still not able to connect. Another thing I noticed was the following log lines from the service LB:

2020/12/07 12:13:06 [error] 16#16: *23 connect() failed (113: Host is unreachable) while connecting to upstream, client: 172.22.0.1, server: 0.0.0.0:6443, upstream: "172.22.0.2:6443", bytes from/to client:0/0, bytes from/to upstream:0/0
2020/12/07 12:13:07 [error] 16#16: *25 connect() failed (113: Host is unreachable) while connecting to upstream, client: 172.22.0.1, server: 0.0.0.0:6443, upstream: "172.22.0.2:6443", bytes from/to client:0/0, bytes from/to upstream:0/0

I first thought that k3s probably didn't start, but then I logged into the server container with docker exec and used kubectl there, and it worked:

/ # kubectl get pods -A

NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-7b4f8b595-996f6           1/1     Running     0          8m14s
kube-system   local-path-provisioner-7ff9579c6-5bx6f   1/1     Running     0          8m14s
kube-system   coredns-66c464876b-vzctl                 1/1     Running     0          8m14s
kube-system   helm-install-traefik-kl6dj               0/1     Completed   0          8m14s
kube-system   svclb-traefik-bgq9d                      2/2     Running     0          7m6s
kube-system   traefik-5dd496474-2pwg2                  1/1     Running     0          7m6s

/ # kubectl get nodes -A

NAME                       STATUS   ROLES    AGE     VERSION
k3d-k3s-default-server-0   Ready    master   8m28s   v1.19.4+k3s1

So why is my service LB not able to connect to k3s? Running netstat yields:

/ # netstat -tln

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:31205           0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:31975           0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.11:40203        0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:6444          0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:10010         0.0.0.0:*               LISTEN      
tcp        0      0 :::10250                :::*                    LISTEN      
tcp        0      0 :::6443                 :::*                    LISTEN      

So I logged into my LB container and installed ncat. I had already run a ping test towards my k3s container, which worked; now I tried to establish a TCP connection using ncat, and this is where I ran into a problem: ncat -v 172.22.0.2 6443 yields host not reachable. So I guess that I need to fix this problem.
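
For reference, the check boiled down to something like this (a sketch; the container name k3d-k3s-default-serverlb is the k3d default, and nmap-ncat is the Alpine package providing ncat, assuming the proxy image is Alpine-based):

# from the host: open a shell in the load balancer container
docker exec -it k3d-k3s-default-serverlb sh

# inside the container: install ncat and probe the server's API port
apk add --no-cache nmap-ncat
ncat -v 172.22.0.2 6443   # "Host is unreachable" points at the host firewall, not at k3s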


Ka0o0 commented Dec 7, 2020

So the final solution to this problem is to not forget to add the new bridge interface that k3d creates to the trusted zone of your firewall. On Fedora, running firewall-cmd --permanent --zone=trusted --change-interface=br-266f7145dcdc && firewall-cmd --reload solved my problem.
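
If the interface name is not at hand, it can be derived from the Docker network ID; a sketch, assuming the default network name k3d-k3s-default and Docker's br-<first 12 chars of the network ID> naming convention:

# look up the bridge interface backing the k3d network and trust it
NET_ID=$(docker network inspect k3d-k3s-default --format '{{.Id}}')
sudo firewall-cmd --permanent --zone=trusted --change-interface="br-${NET_ID:0:12}"
sudo firewall-cmd --reload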

I'm not sure if you consider this working as intended, but I think it would be nice if I had gotten a reminder to configure the firewall.

iwilltry42 self-assigned this on Jan 6, 2021
iwilltry42 (Member) commented

Hi @Ka0o0, thanks for opening this issue and doing all the research!
Replying to the two things you mentioned so far:

  1. HTTP_PROXY settings from the docker config: nothing is done on the k3d side that would prevent these from being applied.
  2. Firewall setting: this seems to be a Fedora thing (at least I never faced this issue before), but what would you expect from k3d here? The only thing it could do would be to log some message telling you to add the interface to your firewall, right? That would be invalid for most other users though, so we'd need logic to figure out your system, the firewall you use, and whether it integrates with docker automatically. Any suggestion? 🤔


Ka0o0 commented Jan 25, 2021

As I just found out, it's actually not specific to Fedora: all Linux distributions using nftables have this problem. It is also described here: https://fedoraproject.org/wiki/Changes/firewalld_default_to_nftables#Scope
I think it's not a problem with k3d but rather a missing feature in moby (moby/moby#26824). Ideally, moby would integrate with nftables and automatically add the correct rules when creating new interfaces.

Now in terms of usability: the thing is that most of the time I don't need to care about the firewalld settings, since I rarely use the bridge functionality. But k3d does create bridges under the hood, so maybe it could check whether nftables is active and, if so, print a warning or a reminder to add the new bridge as a trusted interface?
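
A rough sketch of what such a check could look like (shell pseudo-logic only; a real implementation in k3d would be Go code, and the config path and key below are the firewalld defaults):

# warn if firewalld is active with the nftables backend
if command -v firewall-cmd >/dev/null 2>&1 && firewall-cmd --state >/dev/null 2>&1; then
  if grep -q '^FirewallBackend=nftables' /etc/firewalld/firewalld.conf 2>/dev/null; then
    echo "WARNING: firewalld uses the nftables backend;"
    echo "you may need to add the k3d bridge interface to the trusted zone."
  fi
fi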

iwilltry42 (Member) commented

Thanks for your investigations here @Ka0o0!
This is odd. If you can come up with a simple way to check this, so that we don't need too many LoC, I'll happily integrate a check + warning 👍
How do you go about checking whether nftables could introduce a problem here?

iwilltry42 added the help wanted, not a bug, and priority/low labels and removed the bug label on Feb 5, 2021
iwilltry42 added this to the Backlog milestone on Feb 5, 2021

mindkeep commented May 5, 2021

I found it!

The thing that tipped this into being functional was adding 0.0.0.0 to my NO_PROXY env variable. Currently I have:
export NO_PROXY="127.0.0.1,::1,localhost,.cluster.local,10.0.0.0/8,192.168.0.0/16,0.0.0.0" # plus some internal things

For reference:

% k3d version
k3d version v4.4.3
k3s version v1.20.6-k3s1 (default)

As I understand things, the 10.0.0.0/8 and 0.0.0.0 entries are the ones most likely needed for k3d. I also pass in a k3s_registry.yaml that looks something like:

% cat k3s_registry.yaml
---
mirrors:
  docker.io:
    endpoint:
    - https://docker.repo.internal
    - https://sf-artifactory.internal:9001
configs:
  "sf-artifactory.internal:9001":
    insecure_skip_verify: true

(Obviously the above should be adjusted for your endpoints).

To create the cluster, I used:
% k3d cluster create --registry-config k3s_registry.yaml --servers 3 --agents 3 test

I didn't need to pass -e HTTP_PROXY -e NO_PROXY (though maybe these might be useful later?). The root issue was that kubectl was hitting the proxy server for 0.0.0.0 when it shouldn't have been.
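
For anyone wanting to verify the same thing, a quick sketch (this assumes the API port was pinned to 127.0.0.1:6443 as in the original report; both curl and kubectl honor NO_PROXY):

# confirm the API host is on the proxy exemption list
echo "$NO_PROXY" | tr ',' '\n' | grep -x -e '0.0.0.0' -e '127.0.0.1'

# a direct request should reach the API server (an HTTP 200/401 is fine, a proxy timeout is not)
curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1:6443/version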
