
LoadBalancer assigned Docker bridge IP, inaccessible externally #162

Closed
zanbaldwin opened this issue Mar 5, 2019 · 5 comments

zanbaldwin commented Mar 5, 2019

Describe the bug

Following the docker-compose.yaml installation method for K3s, the Traefik LoadBalancer is assigned an IP address from the Docker bridge network as its EXTERNAL-IP, but the ports are never bound from containerd to the host machine, making it inaccessible from the outside world.

It's possibly related to #72, but I don't know enough to be sure.

Reproducible Steps

  • Bring up server and node using Docker Compose (see docker-compose.yaml file below).
  • Fix DNS resolution bug by replacing CoreDNS proxy value 1.1.1.1 with 8.8.8.8:
    • kubectl -n kube-system get configmap coredns -o json | sed -e 's/1.1.1.1/8.8.8.8/g' | kubectl -n kube-system replace -f -
  • Wait for IP address to be assigned to the Traefik LoadBalancer
  • kubectl apply -f whoami.yaml
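The sed pipeline in the second step is just a plain text substitution on the ConfigMap JSON before it is re-applied with kubectl replace; in isolation, the substitution behaves like this (illustration only, run on a stand-in string rather than the real ConfigMap):

```shell
# Stand-in for the CoreDNS "proxy . 1.1.1.1" directive; the real command
# streams the whole ConfigMap JSON through the same substitution.
echo 'proxy . 1.1.1.1' | sed -e 's/1.1.1.1/8.8.8.8/g'
# prints: proxy . 8.8.8.8
```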

Expected behavior

The LoadBalancer should be assigned an IP address from a network interface of the host, rather than from a bridge network, so that services can be accessed through ports 80/443 on the host.
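A quick way to see why this matters: the assigned EXTERNAL-IP 172.20.0.3 falls inside the 172.20.0.0/16 subnet of the br-78dd6863f70e Docker bridge shown in the ip a output below, not inside the host's ens3 network 10.249.106.0/24, so it is only routable from the host itself. A minimal sketch of that check in shell arithmetic:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

# True when the address's top prefix-length bits match the network's.
in_net() {  # usage: in_net <address> <network> <prefixlen>
    local s=$(( 32 - $3 ))
    [ $(( $(ip_to_int "$1") >> s )) -eq $(( $(ip_to_int "$2") >> s )) ]
}

in_net 172.20.0.3 172.20.0.0 16 && echo "inside the Docker bridge subnet"
in_net 172.20.0.3 10.249.106.0 24 || echo "outside the host's ens3 network"
```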

Additional context

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.6 LTS
Release:        16.04
Codename:       xenial

docker version

Client:
 Version:           18.09.3
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        774a1f4
 Built:             Thu Feb 28 06:40:58 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.3
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       774a1f4
  Built:            Thu Feb 28 05:59:55 2019
  OS/Arch:          linux/amd64
  Experimental:     false

cat docker-compose.yaml

version: '3.7'

x-logging:
    &default-logging
    driver: "json-file"
    options:
        max-size: "5m"
        max-file: "1"

services:
    server:
        image: 'rancher/k3s:${K3S_VERSION:-v0.1.0}'
        restart: 'unless-stopped'
        environment:
            K3S_CLUSTER_SECRET: '${K3S_CLUSTER_SECRET:-SuperSecretPassword}'
            K3S_KUBECONFIG_OUTPUT: '/output/config'
            K3S_KUBECONFIG_MODE: '0666'
        volumes:
            -   type: 'bind'
                source: '/var/lib/rancher/k3s'
                target: '/var/lib/rancher/k3s'
            -   type: 'bind'
                source: '${HOME}/.kube'
                target: '/output'
        ports:
            -   '6443:6443'
        command: [ 'server', '--disable-agent' ]
        logging: *default-logging
    node:
        image: 'rancher/k3s:${K3S_VERSION:-v0.1.0}'
        restart: 'unless-stopped'
        depends_on: [ 'server' ]
        environment:
            K3S_CLUSTER_SECRET: '${K3S_CLUSTER_SECRET:-SuperSecretPassword}'
            K3S_URL: 'https://server:6443'
        privileged: true
        tmpfs:
            -   '/run'
            -   '/var/run'
        command: [ 'agent' ]
        logging: *default-logging

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:0e:c7:58 brd ff:ff:ff:ff:ff:ff
    inet 10.249.106.10/24 brd 10.249.106.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe0e:c758/64 scope link 
       valid_lft forever preferred_lft forever
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:2a:f1:65 brd ff:ff:ff:ff:ff:ff
    inet 172.31.1.229/16 brd 172.31.255.255 scope global ens4
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe2a:f165/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:c4:97:3b:fa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c4ff:fe97:3bfa/64 scope link 
       valid_lft forever preferred_lft forever
43: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default 
    link/ether 02:8d:91:b4:fa:72 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.0/32 brd 10.42.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::8d:91ff:feb4:fa72/64 scope link 
       valid_lft forever preferred_lft forever
...
165: br-78dd6863f70e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:e7:b7:7b:57 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.1/16 brd 172.20.255.255 scope global br-78dd6863f70e
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e7ff:feb7:7b57/64 scope link 
       valid_lft forever preferred_lft forever
...

cat whoami.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
    name: whoami-deployment
spec:
    replicas: 1
    selector:
        matchLabels:
            app: whoami
    template:
        metadata:
            labels:
                app: whoami
        spec:
            containers:
                - name: whoami-container
                  image: containous/whoami
---
apiVersion: v1
kind: Service
metadata:
    name: whoami-service
spec:
    ports:
        -   name: http
            targetPort: 80
            port: 80
    selector:
        app: whoami
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    name: whoami-ingress
    annotations:
        kubernetes.io/ingress.class: traefik
spec:
    rules:
        -   host: whoami.example.com
            http:
                paths:
                    -   path: /
                        backend:
                            serviceName: whoami-service
                            servicePort: http

kubectl -n kube-system get all | grep -v Terminating

NAMESPACE     NAME                                        READY   STATUS      RESTARTS   AGE
default       pod/whoareyou-deployment-85759b8dc6-g5b6c   1/1     Running     0          7m44s
default       pod/whoareyou-deployment-85759b8dc6-rf6k6   1/1     Running     0          7m44s
kube-system   pod/coredns-7748f7f6df-tbqk2                1/1     Running     0          14m
kube-system   pod/helm-install-traefik-qfxtd              0/1     Completed   5          14m
kube-system   pod/svclb-traefik-5ccc7696bf-pbbkk          2/2     Running     0          11m
kube-system   pod/traefik-6876857645-2fsg2                1/1     Running     0          11m

NAMESPACE     NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default       service/kubernetes          ClusterIP      10.43.0.1       <none>        443/TCP                      14m
default       service/whoareyou-service   ClusterIP      10.43.239.115   <none>        80/TCP                       7m44s
kube-system   service/kube-dns            ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       14m
kube-system   service/traefik             LoadBalancer   10.43.175.200   172.20.0.3    80:31730/TCP,443:30754/TCP   11m

NAMESPACE     NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/whoareyou-deployment   2/2     2            2           7m44s
kube-system   deployment.apps/coredns                1/1     1            1           14m
kube-system   deployment.apps/svclb-traefik          1/1     1            1           11m
kube-system   deployment.apps/traefik                1/1     1            1           11m

NAMESPACE     NAME                                              DESIRED   CURRENT   READY   AGE
default       replicaset.apps/whoareyou-deployment-85759b8dc6   2         2         2       7m44s
kube-system   replicaset.apps/coredns-7748f7f6df                1         1         1       14m
kube-system   replicaset.apps/svclb-traefik-5ccc7696bf          1         1         1       11m
kube-system   replicaset.apps/traefik-6876857645                1         1         1       11m

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           3m14s      14m

curl -X GET -H "Host: whoami.example.com" "http://172.20.0.3"

Hostname: whoami-deployment-85759b8dc6-rf6k6
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.7
IP: fe80::483a:bfff:fea7:8572
GET / HTTP/1.1
Host: whoami.example.com
User-Agent: curl/7.47.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.42.0.5
X-Forwarded-Host: whoareyou.zchem.uk
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-6876857645-2fsg2
X-Real-Ip: 10.42.0.5

Making the request curl -X GET -H "Host: whoami.example.com" "http://${EXTERNAL_IP}" never succeeds in connecting.
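When debugging this kind of failure, it helps to distinguish "the TCP connection never completes" from an HTTP-level problem. A minimal reachability probe (a sketch using bash's /dev/tcp redirection; EXTERNAL_IP is a placeholder for whichever address you are testing, and the probe should be run from the machine you expect to connect from):

```shell
EXTERNAL_IP=172.20.0.3   # placeholder: substitute the address under test
for port in 80 443; do
    # /dev/tcp/<host>/<port> opens a TCP connection; timeout bounds the wait.
    if timeout 2 bash -c "cat < /dev/null > /dev/tcp/${EXTERNAL_IP}/${port}" 2>/dev/null; then
        echo "port ${port}: reachable"
    else
        echo "port ${port}: not reachable"
    fi
done
```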


ibuildthecloud commented Mar 5, 2019

curl -X GET -H "Host: whoami.example.com" "http://${EXTERNAL_IP}", where EXTERNAL_IP is 172.20.0.3, should work if your host is Linux rather than macOS (no clue about Windows). One issue you could run into with docker-compose is that if the node container is rebuilt, kubectl get node will show multiple nodes. The hostname (and therefore the node name) is random, so you get a new node per container; Kubernetes still thinks the container is running on the old node, and the EXTERNAL-IP is therefore not actually correct. To fix it, delete the old nodes with kubectl delete node X. To avoid it, you can set hostname: node-foo in the node definition. That makes the hostname stable, but it doesn't help with scaling (you'd end up with X nodes sharing the same name). Maybe there is a docker-compose syntax for a generated-but-fixed hostname that I'm not aware of.

If you are running on macOS, or you just want the port to be locally accessible, then you have to create port bindings on the node service to map each individual port you want exposed, like:

services:
  node:
    ports:
    - 1234:80

That will map localhost:1234 to port 80, which is the ingress load balancer (Traefik). So now curl -X GET -H "Host: whoami.example.com" "http://localhost:1234" should work.

Below is what I tested on my laptop (running Ubuntu 18.04 and Docker 18.09.1), and it worked:

version: '3'
services:
  server:
    image: rancher/k3s:v0.1.0
    command: server --disable-agent
    environment:
    - K3S_CLUSTER_SECRET=somethingtotallyrandom
    - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
    - K3S_KUBECONFIG_MODE=666
    volumes:
    - k3s-server:/var/lib/rancher/k3s
    # This is just so that we get the kubeconfig file out
    - .:/output
    ports:
    - 6443:6443

  node:
    image: rancher/k3s:v0.1.0
    hostname: node1
    tmpfs:
    - /run
    - /var/run
    ports:
    - 1234:80
    privileged: true
    environment:
    - K3S_URL=https://server:6443
    - K3S_CLUSTER_SECRET=somethingtotallyrandom

volumes:
  k3s-server: {}

docker-compose up -d


NAMESPACE     NAME                                 READY     STATUS      RESTARTS   AGE
kube-system   pod/coredns-7748f7f6df-4bk2f         1/1       Running     2          1h
kube-system   pod/helm-install-traefik-8sfxl       0/1       Completed   0          1h
kube-system   pod/svclb-traefik-55b78dfd4f-6xwvw   2/2       Running     6          1h
kube-system   pod/traefik-6876857645-zw97s         1/1       Running     2          1h

NAMESPACE     NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default       service/kubernetes   ClusterIP      10.43.0.1      <none>        443/TCP                      1h
kube-system   service/kube-dns     ClusterIP      10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP       1h
kube-system   service/traefik      LoadBalancer   10.43.221.27   172.20.0.3    80:31283/TCP,443:31654/TCP   1h

NAMESPACE     NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns         1         1         1            1           1h
kube-system   deployment.apps/svclb-traefik   1         1         1            1           1h
kube-system   deployment.apps/traefik         1         1         1            1           1h

NAMESPACE     NAME                                       DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/coredns-7748f7f6df         1         1         1         1h
kube-system   replicaset.apps/svclb-traefik-55b78dfd4f   1         1         1         1h
kube-system   replicaset.apps/traefik-6876857645         1         1         1         1h

NAMESPACE     NAME                             DESIRED   SUCCESSFUL   AGE
kube-system   job.batch/helm-install-traefik   1         1            1h
$ curl http://localhost:1234
404 page not found
$ curl http://172.20.0.3
404 page not found

Even though it says 404 page not found, it is working; I just don't have any Ingress defined right now.

@zanbaldwin (Author)

Sorry, I realise I didn't explain the last part well:

  • Worked: curl -X GET -H "Host: whoami.example.com" http://172.20.0.3
  • Didn't Work: curl http://whoami.example.com

Because I couldn't see the containers being run by K3s/containerd, I was mistaken about where they were running (I thought the privileged agent container was running them on the host).

Since the service/traefik manifest is created by default by K3s and exposes ports 80 and 443, perhaps it would be a good idea to add the following to the example docker-compose.yml:

services:
  node:
    ...
    ports:
    - 80:80
    - 443:443

Sysadmin/DevOps is not my speciality, but I'm finding K3s is an amazing learning tool 🙂

@ibuildthecloud (Contributor)

I don't have a great fix for this yet. This is part of a bigger Kubernetes ingress problem: ingress routes based on hostname, so you need to set up host entries. Personally, what I do for development is run ngrok. ngrok (if you haven't heard of it) gives you a public URL like http://10657ace.ngrok.io. So just run

ngrok http ${EXTERNAL_IP}:80

and then put whatever hostname they give you as the host in the Ingress.

A final approach would be to not put a hostname in your Ingress definitions at all. This is bad for multi-tenancy, but for development it should just route all the traffic and ignore whatever hostname you use.

@zanbaldwin (Author)

Closing issue. I think the problem I'm having is a misunderstanding of Kubernetes rather than anything K3s-specific.

goffinf commented Mar 31, 2019

I can confirm that, following the approach described by @ibuildthecloud, it is possible to reach a service deployed on k3s via an Ingress.

k3s version: 0.3.0
platform: Windows Subsystem for Linux (WSL)
k3s implementation: docker-compose
Service Exposed: nginx
Service (host) port: 8081
Service container (targetPort) port: 80

curl -i http://localhost:8081/
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 612
Content-Type: text/html
Date: Sun, 31 Mar 2019 20:36:11 GMT
Etag: "5c9a3176-264"
Last-Modified: Tue, 26 Mar 2019 14:04:38 GMT
Server: nginx/1.15.10
Vary: Accept-Encoding

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

docker-compose.yaml (note: only a SINGLE node; notice the port mapping 8081:80, host port to ingress-controller port)

version: '3'
services:
  server:
    image: rancher/k3s:v0.3.0
    command: server --disable-agent
    environment:
    - K3S_CLUSTER_SECRET=somethingtotallyrandom
    - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
    - K3S_KUBECONFIG_MODE=666
    volumes:
    - k3s-server:/var/lib/rancher/k3s
    # This is just so that we get the kubeconfig file out
    - .:/output
    ports:
    - 6443:6443

  node:
    image: rancher/k3s:v0.3.0
    hostname: node1
    tmpfs:
    - /run
    - /var/run
    privileged: true
    depends_on:
    - server
    ports:
    - 8081:80
    environment:
    - K3S_URL=https://server:6443
    - K3S_CLUSTER_SECRET=somethingtotallyrandom
    # Can also use K3S_TOKEN from /var/lib/rancher/k3s/server/node-token instead of K3S_CLUSTER_SECRET
    #- K3S_TOKEN=K13849a67fc385fd3c0fa6133a8649d9e717b0258b3b09c87ffc33dae362c12d8c0::node:2e373dca319a0525745fd8b3d8120d9c

volumes:
  k3s-server: {}

nginx-demo-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx-demo
        image: nginx:latest
        ports:
        - containerPort: 80

nginx-demo-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  labels:
    name: nginx-demo
spec:
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo

nginx-demo-ing

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: 8081

HTHs

Fraser.
