This repository has been archived by the owner on Oct 10, 2023. It is now read-only.

CAPD clusters utilize an alpine-based image for HA Proxy #210

Closed
2 of 9 tasks
Tracked by #1035
joshrosso opened this issue Jul 15, 2021 · 8 comments · Fixed by #244

@joshrosso

Bug description

VMware has restrictions against using alpine-based images. If users create a
cluster using our docker-based (CAPD) provider, an alpine-based image will be
used. A user of tanzu can spin up one of these clusters using the following
command.

CLUSTER_PLAN=dev tanzu management-cluster create -i docker

Once the management cluster is bootstrapped, the user will see an HA proxy
instance based on an image from kind. This proxy fronts the API server.

$ docker ps | grep -i alpine

CONTAINER ID   IMAGE                                                             COMMAND                  CREATED       STATUS       PORTS
62b84a6f87ae   kindest/haproxy:2.1.1-alpine                                      "/docker-entrypoint.…"   2 hours ago   Up 2 hours   40901/tcp, 0.0.0.0:409

This image is referenced as a constant in CAPD:

https://github.com/kubernetes-sigs/cluster-api/blob/bfc6f80add5c21b8dc2b704951f42bc14708ebc4/test/infrastructure/docker/third_party/forked/loadbalancer/const.go#L19-L20

It is built inside of the kind repository.

https://github.com/kubernetes-sigs/kind/tree/main/images/haproxy
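To confirm which tag a given checkout pins, you can grep that forked loadbalancer package. The `const.go` below is an illustrative stand-in carrying the tag observed above, not the verbatim upstream file; against a real clone, point `grep` at `test/infrastructure/docker/third_party/forked/loadbalancer/const.go` instead.

```shell
#!/bin/sh
# Stand-in for the forked loadbalancer const.go (illustrative content only;
# the real file lives at the cluster-api path linked above).
mkdir -p /tmp/loadbalancer
cat > /tmp/loadbalancer/const.go <<'EOF'
// Default haproxy image pinned by CAPD's loadbalancer (illustrative).
const defaultImage = "kindest/haproxy:2.1.1-alpine"
EOF

# Pull out the pinned image reference
grep -o 'kindest/haproxy:[^"]*' /tmp/loadbalancer/const.go
# prints: kindest/haproxy:2.1.1-alpine
```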

Affected product area (please put an X in all that apply)

  • APIs
  • Addons
  • CLI
  • Docs
  • Installation
  • Plugin
  • Security
  • Test and Release
  • User Experience

Version (include the SHA if the version is not obvious)

CLUSTER_PLAN=dev .local/share/tanzu-cli/tanzu-plugin-management-cluster version
v1.4.0-dev.0
@joshrosso joshrosso changed the title CAPD clusters utilize an alpine-based image for HA Proxy. CAPD clusters utilize an alpine-based image for HA Proxy Jul 15, 2021
@joshrosso
Author

Opened an issue in CAPD, requesting that this be made configurable: kubernetes-sigs/cluster-api#4950

@joshrosso
Author

I was going to open an issue in kind, but from how I'm reading this Slack thread, it seems kind would prefer that cluster-api build these images.

@dims
Contributor

dims commented Jul 18, 2021

@joshrosso fyi kubernetes-sigs/cluster-api#4964

@joshrosso
Author

joshrosso commented Jul 20, 2021

Hey all 👋 -- was #244 verified to fix the usage of Alpine? I'm finding Alpine is still used. Below are my steps; let me know if I am validating this incorrectly.

  1. Cleaned gomod cache

    go clean --modcache
  2. Removed artifacts and artifacts-admin to ensure no existing binaries were present.

    rm -rfv artifacts artifacts-admin
  3. Verified commit was present

    commit 92bf7ffbdcbd1c7955e1feafa0a2f9490826ac20
    Author: Davanum Srinivas <[email protected]>
    Date:   Tue Jul 20 10:24:39 2021 -0400
    
        Switch to non-alpine kindest/haproxy version (#244)
    
        diff with the prev version seems sane:
        https://github.com/kubernetes-sigs/cluster-api/compare/9fcfbce8e5c6...dfeb8d447bdc
    
        Alpine has several challenges, so there's a long running effort in
        upstream kubernetes to switch to distroless/debian based images.
    
        Kind recently moved to a non-alpine image, so let us please switch to
        the same as well.
        github.com/kubernetes-sigs/kind/pull/2373/commits/8f293e11855e6545789ed81dd3507fc6c8359ce8
    
        Signed-off-by: Davanum Srinivas <[email protected]>
  4. Built binaries

    make build-install-cli-local
  5. Moved them onto known-good Ubuntu VM

    scp artifacts/linux/amd64/cli/management-cluster/v1.4.0-pre-alpha-2/tanzu-management-cluster-linux_amd64  [email protected]:~/management-cluster
  6. Ubuntu VM node info

    docker info
    
    Client:
     Context:    default
     Debug Mode: false
     Plugins:
      app: Docker App (Docker Inc., v0.9.1-beta3)
      buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
      scan: Docker Scan (Docker Inc., v0.8.0)
    
    Server:
     Containers: 0
      Running: 0
      Paused: 0
      Stopped: 0
     Images: 0
     Server Version: 20.10.7
     Storage Driver: overlay2
      Backing Filesystem: extfs
      Supports d_type: true
      Native Overlay Diff: true
      userxattr: false
     Logging Driver: json-file
     Cgroup Driver: cgroupfs
     Cgroup Version: 1
     Plugins:
      Volume: local
      Network: bridge host ipvlan macvlan null overlay
      Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
     Swarm: inactive
     Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
     Default Runtime: runc
     Init Binary: docker-init
     containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
     runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
     init version: de40ad0
     Security Options:
      apparmor
      seccomp
       Profile: default
     Kernel Version: 5.4.0-77-generic
     Operating System: Ubuntu 20.04.2 LTS
     OSType: linux
     Architecture: x86_64
     CPUs: 4
     Total Memory: 5.806GiB
     Name: tce
     ID: JOVM:6HDT:HBKE:UXMJ:DSDN:Z4KW:G6CR:2AJN:GQLD:LKT2:A44Y:T5NC
     Docker Root Dir: /var/lib/docker
     Debug Mode: false
     Registry: https://index.docker.io/v1/
     Labels:
     Experimental: false
     Insecure Registries:
      127.0.0.0/8
     Live Restore Enabled: false
    
    WARNING: No swap limit support
  7. Ensured the BOM/config on the VM was cleared.

    tce@tce:~$ rm -rfv ~/.config/tanzu
  8. Created CAPD cluster

    tce@tce:~$ CLUSTER_PLAN=dev ~/management-cluster create -i docker
    
  9. Once CAPD is up and creates a cluster, note the alpine image is still used.

    tce@tce:~$ docker ps
    CONTAINER ID   IMAGE                                                             COMMAND                  CREATED         STATUS                  PORTS                                NAMES
    a901ef0eb49a   kindest/haproxy:2.1.1-alpine                                      "/docker-entrypoint.…"   1 second ago    Up Less than a second   43681/tcp, 0.0.0.0:43681->6443/tcp   tkg-mgmt-docker-20210720194607-lb
    c8f11459dfb9   projects-stg.registry.vmware.com/tkg/kind/node:v1.21.2_vmware.1   "/usr/local/bin/entr…"   3 minutes ago   Up 3 minutes            127.0.0.1:36771->6443/tcp            tkg-kind-c3rig0af2ej7pp92g2tg-control-plane
  10. Validate it's actually alpine

    tce@tce:~$ docker exec -it a901 /bin/sh
    
    / # cat /etc/os-release
    NAME="Alpine Linux"
    ID=alpine
    VERSION_ID=3.10.3
    PRETTY_NAME="Alpine Linux v3.10"
    HOME_URL="https://alpinelinux.org/"
    BUG_REPORT_URL="https://bugs.alpinelinux.org/"
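The manual check in step 10 can be scripted: read the container's `/etc/os-release` and inspect the `ID` field. A minimal sketch against a sample file (in a live cluster you would `docker exec` into the load balancer container and run the same grep):

```shell
#!/bin/sh
# Classify a filesystem as alpine-based from its os-release ID field.
is_alpine() {
  grep -q '^ID=alpine' "$1" && echo "alpine" || echo "not-alpine"
}

# Sample mirroring the output shown above
cat > /tmp/os-release.sample <<'EOF'
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.3
EOF

is_alpine /tmp/os-release.sample   # prints: alpine
```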

@joshrosso joshrosso reopened this Jul 20, 2021
@dims
Contributor

dims commented Jul 20, 2021

@joshrosso I bet you need kind from master as well (not just the reference mentioned in the PR).

@dims
Contributor

dims commented Jul 21, 2021

@joshrosso scratch what I said about kind above ... I believe all the CAPD stuff is still v0.3.19, so that's where the image is coming from.

[4812:4811 - 0:2003] 01:15:42 [dims@bigbox:/dev/pts/0 +1] ~/go/src/github.com/vmware-tanzu/tanzu-framework
$ rg v0.3.19
pkg/v1/providers/config.yaml
3:    url: providers/cluster-api/v0.3.19/core-components.yaml
21:    url: providers/bootstrap-kubeadm/v0.3.19/bootstrap-components.yaml
24:    url: providers/control-plane-kubeadm/v0.3.19/control-plane-components.yaml
27:    url: providers/infrastructure-docker/v0.3.19/infrastructure-components.yaml

pkg/v1/providers/infrastructure-docker/v0.3.19/cluster-template-definition-prod.yaml
5:    - path: providers/infrastructure-docker/v0.3.19/ytt

pkg/v1/providers/infrastructure-docker/v0.3.19/cluster-template-definition-dev.yaml
5:    - path: providers/infrastructure-docker/v0.3.19/ytt

pkg/v1/providers/infrastructure-docker/v0.3.19/infrastructure-components.yaml
916:        image: registry.tkg.vmware.run/cluster-api/capd-manager:v0.3.19_vmware.1

pkg/v1/providers/cluster-api/v0.3.19/core-components.yaml
4848:        image: registry.tkg.vmware.run/cluster-api/cluster-api-controller:v0.3.19_vmware.1
4905:        image: registry.tkg.vmware.run/cluster-api/cluster-api-controller:v0.3.19_vmware.1

pkg/v1/providers/control-plane-kubeadm/v0.3.19/control-plane-components.yaml
1502:        image: registry.tkg.vmware.run/cluster-api/kubeadm-control-plane-controller:v0.3.19_vmware.1
1546:        image: registry.tkg.vmware.run/cluster-api/kubeadm-control-plane-controller:v0.3.19_vmware.1

pkg/v1/providers/bootstrap-kubeadm/v0.3.19/bootstrap-components.yaml
3938:        image: registry.tkg.vmware.run/cluster-api/kubeadm-bootstrap-controller:v0.3.19_vmware.1
3983:        image: registry.tkg.vmware.run/cluster-api/kubeadm-bootstrap-controller:v0.3.19_vmware.1

pkg/v1/providers/tests/clustergen/bom/tkg-bom-v1.3.1-zlatest.yaml
50:    - version: v0.3.19+vmware.1
54:          tag: v0.3.19_vmware.1
57:          tag: v0.3.19_vmware.1
60:          tag: v0.3.19_vmware.1
63:          tag: v0.3.19_vmware.1

@randomvariable has a bump to v0.3.21 ( #223 ) BUT the non-alpine haproxy came after that kubernetes-sigs/cluster-api@v0.3.21...HEAD

So we will have to wait for v0.3.22 of CAPI
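The pinned provider version can also be extracted mechanically from the provider config. Sketch against a stand-in `config.yaml` fragment mirroring the rg hits above (in the real tree, point it at `pkg/v1/providers/config.yaml`):

```shell
#!/bin/sh
# Stand-in fragment of the provider config (mirrors the rg output above).
mkdir -p /tmp/providers
cat > /tmp/providers/config.yaml <<'EOF'
  - name: docker
    url: providers/infrastructure-docker/v0.3.19/infrastructure-components.yaml
EOF

# Extract the pinned CAPD provider version from the URL path
grep -o 'infrastructure-docker/v[0-9.]*' /tmp/providers/config.yaml | cut -d/ -f2
# prints: v0.3.19
```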

@randomvariable
Contributor

#223 will probably be modified to go to v0.3.22 when @vincepri hits the release button.

@joshrosso
Author

Today I validated a cluster creation based on tanzu-framework v0.1.0; it was using CAPD v0.3.23.

$ artifacts/standalone-cluster/v0.7.0-fake.1/tanzu-standalone-cluster-linux_amd64 create -i docker helloworld

Validating the pre-requisites...
Identity Provider not configured. Some authentication features won't work.

Setting up standalone cluster...
Validating configuration...
Using infrastructure provider docker:v0.3.23
Generating cluster configuration...
Setting up bootstrapper...

ha proxy container

ee72a3efc2dc   kindest/haproxy:v20210715-a6da3463                                "haproxy -sf 7 -W -d…"   51 minutes ago   Up 52 minutes   36353/tcp, 0.0.0.0:36353->6443/tcp     helloworld-lb

image inspection

crane export kindest/haproxy:v20210715-a6da3463 - | tar xv
$ cat ./etc/os-release
PRETTY_NAME="Distroless"
NAME="Debian GNU/Linux"
ID="debian"
VERSION_ID="10"
VERSION="Debian GNU/Linux 10 (buster)"
HOME_URL="https://github.com/GoogleContainerTools/distroless"
SUPPORT_URL="https://github.com/GoogleContainerTools/distroless/blob/master/README.md"
BUG_REPORT_URL="https://github.com/GoogleContainerTools/distroless/issues/new"
