
Ensure that no embedded controllers are using the admin RBAC #7212

Closed · brandond opened this issue Apr 4, 2023 · 1 comment
Labels: kind/enhancement, priority/important-soon


brandond commented Apr 4, 2023

K3s tracking issue for:

We're not going to be changing anything about the default admin kubeconfig, but we DO need to address this bit:

We can take a pass at auditing any controllers using the admin creds; there should not be any, but it seems some have perhaps slipped in.

In particular, it appears that the deploy controller is using the system:admin user account and system:masters group instead of a dedicated service account. We should ensure that the deploy controller uses a separate account for audit purposes. This may require additional RBAC to accommodate.

Specifically, it looks like we're using the admin kubeconfig for all the embedded controllers - of which the deploy controller is the most visible:

sc, err := NewContext(ctx, controlConfig.Runtime.KubeConfigAdmin)

We're also using it for the etcd snapshot CLI, although that may be more forgivable:

sc, err := server.NewContext(ctx, serverConfig.ControlConfig.Runtime.KubeConfigAdmin)

There is a bit of a chicken-and-egg problem with bootstrapping RBAC, as we need an admin account to create additional RBAC, but only the built-in Kubernetes RBAC is available when the cluster is first started. That will need to be worked through as part of resolving this issue.
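One way to sidestep the bootstrap problem (a sketch only, not necessarily the approach that lands): have the controllers authenticate with a dedicated client certificate whose Organization is still system:masters, so the built-in RBAC authorizes it on first boot, while a distinct Common Name yields a distinct username in audit logs. Roughly, signing against the cluster client CA (paths and the system:k3s-supervisor name here are illustrative):

# Illustrative only: a cert with a distinct CN but the built-in admin group
# requires no additional RBAC objects to exist at cluster startup.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout client-supervisor.key -out client-supervisor.csr \
  -subj "/O=system:masters/CN=system:k3s-supervisor"
openssl x509 -req -in client-supervisor.csr \
  -CA client-ca.crt -CAkey client-ca.key -CAcreateserial \
  -days 365 -out client-supervisor.crt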

Ref: SURE-6081


est-suse commented Jun 16, 2023

Validated on commit ID 4e1ba3a.

k3s version v1.27.3-rc1+k3s1 (fe9604ca)
go version go1.20.5
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
NAME               STATUS   ROLES                       AGE   VERSION
ip-172-31-23-207   Ready    control-plane,etcd,master   16h   v1.27.3-rc1+k3s1

There is a new supervisor.kubeconfig that was not present in the previous version:

admin.kubeconfig  api-server.kubeconfig  cloud-controller.kubeconfig  controller.kubeconfig  ipsec.psk	passwd	scheduler.kubeconfig  supervisor.kubeconfig
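To confirm which identity the new kubeconfig carries, query the API server with it (a sketch, assuming the standard cred directory; kubectl auth whoami is available on v1.27+):

# Should report Username: system:k3s-supervisor with groups
# system:masters and system:authenticated (matching the audit log below).
sudo k3s kubectl auth whoami \
  --kubeconfig /var/lib/rancher/k3s/server/cred/supervisor.kubeconfig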

Additional supervisor certs that were not present in the previous version:

client-admin.crt       client-auth-proxy.key  client-ca.nochain.crt  client-k3s-cloud-controller.crt  client-k3s-controller.key  client-kube-proxy.crt	client-scheduler.crt   client-supervisor.key  request-header-ca.crt  server-ca.key	    service.key			serving-kubelet.key
client-admin.key       client-ca.crt	      client-controller.crt  client-k3s-cloud-controller.key  client-kube-apiserver.crt  client-kube-proxy.key	client-scheduler.key   dynamic-cert.json      request-header-ca.key  server-ca.nochain.crt  serving-kube-apiserver.crt	temporary-certs
client-auth-proxy.crt  client-ca.key	      client-controller.key  client-k3s-controller.crt	      client-kube-apiserver.key  client-kubelet.key	client-supervisor.crt  etcd		      server-ca.crt	     service.current.key    serving-kube-apiserver.key
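The group membership comes from the certificate subject itself; inspecting the new supervisor cert should show the dedicated CN alongside the built-in admin group (a sketch, assuming the standard tls directory):

sudo openssl x509 -noout -subject \
  -in /var/lib/rancher/k3s/server/tls/client-supervisor.crt
# expected: subject=O = system:masters, CN = system:k3s-supervisor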

Apply the following manifest at /var/lib/rancher/k3s/server/manifests/web-helmchart.yaml:

sudo cat /var/lib/rancher/k3s/server/manifests/web-helmchart.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: apache
  namespace: kube-system
spec:
  repo: https://charts.bitnami.com/bitnami
  chart: apache
  targetNamespace: web
  valuesContent: |-
    service:
      type: ClusterIP
    ingress:
      enabled: true
      hostname: www.example.com
    metrics:
      enabled: true
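The pod status below was presumably gathered with something like the following (the chart deploys into the web namespace set by targetNamespace above):

sudo k3s kubectl get pods -n web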
NAME                      READY   STATUS    RESTARTS   AGE
apache-7b965dd895-rmp2q   2/2     Running   0          16h

Verify with generated audit logs:

Followed the steps provided here for creating the audit logs directory and starting the k3s service with the required args:
https://docs.k3s.io/security/hardening-guide#api-server-audit-configuration

sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs
sudo cat /var/lib/rancher/k3s/server/audit.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata 

Edit /etc/systemd/system/k3s.service with lines:

ExecStart=/usr/local/bin/k3s \
    server \
    '--kube-apiserver-arg=audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \
    '--kube-apiserver-arg=audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'

Then edit the k3s config file (/etc/rancher/k3s/config.yaml) by adding:

kube-apiserver-arg:
  - 'audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log'
  - 'audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'

Then reload systemd and restart k3s:

sudo systemctl daemon-reload
sudo systemctl restart k3s.service

Sample audit log line verifying that 'system:k3s-supervisor' was used by the helm-controller:

{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"d4dda0ed-c112-4d41-86bb-9eb040e1ad1b","stage":"ResponseComplete","requestURI":"/api/v1/nodes?allowWatchBookmarks=true\u0026resourceVersion=4070\u0026timeout=7m2s\u0026timeoutSeconds=422\u0026watch=true","verb":"watch","user":{"username":"system:k3s-supervisor","groups":["system:masters","system:authenticated"]},"sourceIPs":["127.0.0.1"],"userAgent":"k3s-supervisor@ip-172-31-23-207/v1.27.3-rc1+k3s1 (linux/amd64) k3s/fe9604ca","objectRef":{"resource":"nodes","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-06-15T22:52:17.491702Z","stageTimestamp":"2023-06-15T22:59:19.492623Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}

On the previous version, v1.27.2, the same grep for supervisor returned nothing.

github-project-automation bot moved this from To Test to Done Issue in K3s Development, Jun 16, 2023