CNI bin dir changes with K3s version #10869

Closed

brandond opened this issue Sep 9, 2024 · 3 comments

brandond commented Sep 9, 2024

The K3s CNI binaries are installed alongside the rest of the bundled userspace, and the managed containerd config is updated on restart to point at the current bin dir under /var/lib/rancher/k3s/data/XXX/bin. This makes it difficult to install custom CNI plugins, as the path used by containerd changes every time k3s is upgraded.
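
For reference, this is roughly what the relevant stanza of the managed containerd config looks like. The hash below is one node's example, and the exact section name varies with the containerd version:

root@systemd-node-1:~# grep -A2 '\.cni\]' /var/lib/rancher/k3s/agent/etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin"
  conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"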

This was an obstacle to our packaging of Multus with K3s.

It has also been raised on the Users Slack:

The thing is that Cilium installs itself in /var/lib/rancher/k3s/data/[long_id]/bin and, following a k3s upgrade, the cni gets broken as the cluster can't find the cilium-cni binary anymore, and I need to restart the cilium daemonset in order for the cluster to work again. This is why I was looking at changing the cni binary location. Otherwise, I may need to use a clusterPolicy with something like kyverno to check for a kubernetes upgrade and then restart the pods accordingly, which isn't ideal.
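
For what it's worth, Cilium's Helm chart does expose the plugin install directory as the cni.binPath value, so once a stable path exists it can be configured instead of the versioned one. A hedged sketch of a values fragment (the path shown assumes the stable directory this issue proposes):

# Illustrative values fragment for the cilium Helm chart
cni:
  binPath: /var/lib/rancher/k3s/data/cni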

Steps to validate issue:

  1. Install K3s: curl -sL get.k3s.io | INSTALL_K3S_VERSION=v1.31.0+k3s1 sh -s -
  2. Deploy multus with whereabouts using legacy "current" bin dir:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: multus
  namespace: kube-system
spec:
  repo: https://rke2-charts.rancher.io
  chart: rke2-multus
  targetNamespace: kube-system
  valuesContent: |-
    manifests:
      configMap: true
    config:
      fullnameOverride: multus
      cni_conf:
        confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
        binDir: /var/lib/rancher/k3s/data/current/bin
        kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig
    rke2-whereabouts:
      fullnameOverride: whereabouts
      enabled: true
      cniConf:
        confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
        binDir: /var/lib/rancher/k3s/data/current/bin

  3. Validate that multus and whereabouts are installed on the nodes:

root@systemd-node-1:~# /var/lib/rancher/k3s/data/current/bin/multus
meta-plugin that delegates to other CNI plugins
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0, 1.1.0

root@systemd-node-1:~# /var/lib/rancher/k3s/data/current/bin/whereabouts
whereabouts v0.8.0-8c381170 linux/amd64
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0, 1.1.0

  4. Upgrade K3s: curl -sL get.k3s.io | INSTALL_K3S_VERSION=v1.31.1+k3s1 sh -s -
  5. Note that the CNI binaries are missing after the upgrade; the "current" directory has changed (see the note after this list):

root@systemd-node-1:~# /var/lib/rancher/k3s/data/current/bin/multus
bash: /var/lib/rancher/k3s/data/current/bin/multus: No such file or directory

root@systemd-node-1:~# /var/lib/rancher/k3s/data/current/bin/whereabouts
bash: /var/lib/rancher/k3s/data/current/bin/whereabouts: No such file or directory
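
Note that the binaries are not deleted; they remain under the old hash directory. What breaks is the "current" symlink, which the upgrade repoints at the new release's directory:

# illustrative: a fresh hash directory is generated for each release
root@systemd-node-1:~# readlink /var/lib/rancher/k3s/data/current
/var/lib/rancher/k3s/data/<new-hash>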

Steps to validate fix:

  1. Install K3s: curl -sL get.k3s.io | INSTALL_K3S_COMMIT=xxx sh -s -
  2. Deploy multus with whereabouts using new fixed bin dir:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: multus
  namespace: kube-system
spec:
  repo: https://rke2-charts.rancher.io
  chart: rke2-multus
  targetNamespace: kube-system
  valuesContent: |-
    manifests:
      configMap: true
    config:
      fullnameOverride: multus
      cni_conf:
        confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
        binDir: /var/lib/rancher/k3s/data/cni
        kubeconfig: /var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig
    rke2-whereabouts:
      fullnameOverride: whereabouts
      enabled: true
      cniConf:
        confDir: /var/lib/rancher/k3s/agent/etc/cni/net.d
        binDir: /var/lib/rancher/k3s/data/cni

  3. Validate that multus and whereabouts are installed on the nodes:

root@systemd-node-1:~# /var/lib/rancher/k3s/data/cni/multus
meta-plugin that delegates to other CNI plugins
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0, 1.1.0

root@systemd-node-1:~# /var/lib/rancher/k3s/data/cni/whereabouts
whereabouts v0.8.0-8c381170 linux/amd64
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0, 1.1.0

  4. Upgrade K3s: curl -sL get.k3s.io | INSTALL_K3S_COMMIT=yyy sh -s -
  5. Note that the CNI binaries are still in place after the upgrade:

root@systemd-node-1:~# /var/lib/rancher/k3s/data/cni/multus
meta-plugin that delegates to other CNI plugins
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0, 1.1.0

root@systemd-node-1:~# /var/lib/rancher/k3s/data/cni/whereabouts
whereabouts v0.8.0-8c381170 linux/amd64
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0, 1.1.0

brandond commented Oct 18, 2024

This change has introduced a regression: K3s fails to start when upgrading from one commit (or release) that includes it to another, because the CNI symlinks fail to update.

root@systemd-node-1:/# curl -sL get.k3s.io | INSTALL_K3S_COMMIT=14eee80f699ad6921f847ed8366d174131266cfd sh -s -
[INFO]  Using commit 14eee80f699ad6921f847ed8366d174131266cfd as release
[INFO]  Downloading hash https://k3s-ci-builds.s3.amazonaws.com/k3s-14eee80f699ad6921f847ed8366d174131266cfd.sha256sum
[INFO]  Downloading binary https://k3s-ci-builds.s3.amazonaws.com/k3s-14eee80f699ad6921f847ed8366d174131266cfd
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

root@systemd-node-1:/# ls -la /var/lib/rancher/k3s/data/cni
total 40
drwxr-xr-x 2 root root 4096 Oct 18 19:29 .
drwxr-xr-x 4 root root 4096 Oct 18 19:29 ..
lrwxrwxrwx 1 root root   98 Oct 18 19:29 bandwidth -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 bridge -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 cni -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 firewall -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 flannel -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 host-local -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 loopback -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 portmap -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni

root@systemd-node-1:/# ls -la /var/lib/rancher/k3s/data/
total 20
drwxr-xr-x 4 root root 4096 Oct 18 19:29 .
drwxr-xr-x 5 root root 4096 Oct 18 19:29 ..
-rw------- 1 root root    0 Oct 18 19:29 .lock
drwxr-xr-x 4 root root 4096 Oct 18 19:29 5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de
drwxr-xr-x 2 root root 4096 Oct 18 19:29 cni
lrwxrwxrwx 1 root root   90 Oct 18 19:29 current -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de

root@systemd-node-1:/# curl -sL get.k3s.io | INSTALL_K3S_COMMIT=c0d661b334bc3cbe30d80e5aab6af4b92d3eb503 sh -s -
[INFO]  Using commit c0d661b334bc3cbe30d80e5aab6af4b92d3eb503 as release
[INFO]  Downloading hash https://k3s-ci-builds.s3.amazonaws.com/k3s-c0d661b334bc3cbe30d80e5aab6af4b92d3eb503.sha256sum
[INFO]  Downloading binary https://k3s-ci-builds.s3.amazonaws.com/k3s-c0d661b334bc3cbe30d80e5aab6af4b92d3eb503
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, already exists
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xeu k3s.service" for details.

root@systemd-node-1:/# ls -la /var/lib/rancher/k3s/data/cni
total 40
drwxr-xr-x 2 root root 4096 Oct 18 19:29 .
drwxr-xr-x 5 root root 4096 Oct 18 19:30 ..
lrwxrwxrwx 1 root root   98 Oct 18 19:29 bandwidth -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 bridge -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 cni -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 firewall -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 flannel -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 host-local -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 loopback -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni
lrwxrwxrwx 1 root root   98 Oct 18 19:29 portmap -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de/bin/cni

root@systemd-node-1:/# ls -la /var/lib/rancher/k3s/data/
total 24
drwxr-xr-x 5 root root 4096 Oct 18 19:30 .
drwxr-xr-x 5 root root 4096 Oct 18 19:29 ..
-rw------- 1 root root    0 Oct 18 19:29 .lock
drwxr-xr-x 4 root root 4096 Oct 18 19:29 5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de
drwxr-xr-x 2 root root 4096 Oct 18 19:29 cni
lrwxrwxrwx 1 root root   90 Oct 18 19:29 current -> /var/lib/rancher/k3s/data/5a64dad4a1e1d9aea67010a4c4a0b528094dfd9561881173dbfb49ad82b324de
drwxr-xr-x 4 root root 4096 Oct 18 19:30 f6fc28de4393d6fccf4ef4b424c68ace901813237a08201a993365ebfd492baa

The log shows:

Oct 18 19:30:25 systemd-node-1 systemd[1]: Starting Lightweight Kubernetes...
Oct 18 19:30:25 systemd-node-1 sh[2395]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Oct 18 19:30:25 systemd-node-1 k3s[2399]: time="2024-10-18T19:30:25Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Oct 18 19:30:25 systemd-node-1 k3s[2399]: time="2024-10-18T19:30:25Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/f6fc28de4393d6fccf4ef4b424c68ace901813237a08201a993365ebfd492baa"
Oct 18 19:30:28 systemd-node-1 k3s[2399]: time="2024-10-18T19:30:28Z" level=fatal msg="extracting data: symlink /var/lib/rancher/k3s/data/f6fc28de4393d6fccf4ef4b424c68ace901813237a08201a993365ebfd492baa/bin/cni /var/lib/rancher/k3s/data/cni/cni: file exists"
Oct 18 19:30:28 systemd-node-1 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Oct 18 19:30:28 systemd-node-1 systemd[1]: k3s.service: Failed with result 'exit-code'.
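
The fatal error is a plain symlink-creation failure: the links written by the previous release still exist, so creating them again returns "file exists". A minimal Go sketch of the idempotent refresh one would expect here (hypothetical, not the actual k3s code; refreshSymlink and the paths are illustrative):

package main

import (
	"errors"
	"os"
)

// refreshSymlink makes link point at target, replacing a stale link left
// behind by a previous release. A bare os.Symlink fails with "file exists"
// when the link is already present, which is the fatal error in the log.
func refreshSymlink(target, link string) error {
	if existing, err := os.Readlink(link); err == nil {
		if existing == target {
			return nil // already correct, nothing to do
		}
		// Stale link from a previous release: remove it first.
		if err := os.Remove(link); err != nil {
			return err
		}
	} else if !errors.Is(err, os.ErrNotExist) {
		return err
	}
	return os.Symlink(target, link)
}

func main() {
	// Paths are examples only.
	if err := refreshSymlink(
		"/var/lib/rancher/k3s/data/f6fc28de4393d6fccf4ef4b424c68ace901813237a08201a993365ebfd492baa/bin/cni",
		"/var/lib/rancher/k3s/data/cni/cni",
	); err != nil {
		panic(err)
	}
}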


brandond commented Oct 19, 2024

To test the use of multus+whereabouts, you can use the following manifest. It is written for a host on a 172.17.0.0/24 network and lets whereabouts assign addresses from the top half of that range (the bottom half is excluded). Adjust it for the actual network your host is on, and it will assign the pod an additional IP on your LAN via the master interface (eth0 in this example).

Note that whereabouts MUST be passed the correct paths to its configuration and kubeconfig files; by default it is hardcoded to look for them under /etc/cni/net.d/.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-conf
spec:
  config: '{
      "cniVersion": "1.0.0",
      "name": "whereabouts-conf",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "172.17.0.0/24",
        "exclude": [
          "172.17.0.0/25"
        ],
        "gateway": "172.17.0.1",
        "configuration_path": "/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.conf",
        "kubernetes": {
          "kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
        }
      }
    }'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot-deployment
  labels:
    app: netshoot-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netshoot-pod
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: whereabouts-conf
      labels:
        app: netshoot-pod
    spec:
      containers:
      - name: netshoot
        image: nicolaka/netshoot
        command:
          - sleep
          - "3600"
        imagePullPolicy: IfNotPresent
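
To exercise it, apply the manifest and inspect the secondary interface inside the pod. Multus attaches the extra network as net1 by default; the filename here is arbitrary:

kubectl apply -f whereabouts-test.yaml
kubectl exec deploy/netshoot-deployment -- ip addr show net1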


aganesh-suse commented

Closing based on #10928

github-project-automation bot moved this from To Test to Done Issue in K3s Development on Oct 23, 2024