
Add generic hook to customize component flags #290

Closed · ibuildthecloud opened this issue Mar 31, 2019 · 3 comments

Currently k3s sets a series of flags on the kube-apiserver, controller-manager, scheduler, kube-proxy, and kubelet. These flags are essentially hard-coded and not user-customizable.

I'd like a generic hook to be added so that on k3s server or k3s agent one can pass any k8s component flag and have it set on that component. This can be accomplished with a format like --component-flag=value, for example --kubelet-allow-privileged=false. The code would just look for all arguments starting with kubelet, apiserver, controller-manager, scheduler, or proxy (or maybe with a kube- prefix too), strip the prefix, and pass the flag to the right component.

Since we don't want to do fancy arg parsing, we would only accept the single-arg syntax and not the double-arg syntax. What I mean is that --kubelet-insecure-port=1234 would be accepted but not --kubelet-insecure-port 1234. Maybe we could accept the second form, but I'm afraid we might mess up parsing (or maybe it's actually really easy?).
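
Roughly, the splitting could look like this (a minimal Go sketch of the idea; the splitArgs helper and the prefix list are illustrative, not actual k3s code):

```go
package main

import (
	"fmt"
	"strings"
)

// Component prefixes the hook would recognize (illustrative list).
var prefixes = []string{"kubelet", "apiserver", "controller-manager", "scheduler", "proxy"}

// splitArgs sorts raw args like "--kubelet-allow-privileged=false" into
// per-component flag lists, stripping the component prefix. Only the
// single-arg --flag=value form is handled, so no look-ahead at the next
// argument is needed.
func splitArgs(args []string) map[string][]string {
	out := map[string][]string{}
	for _, arg := range args {
		trimmed := strings.TrimPrefix(arg, "--")
		for _, p := range prefixes {
			if strings.HasPrefix(trimmed, p+"-") {
				out[p] = append(out[p], "--"+strings.TrimPrefix(trimmed, p+"-"))
				break
			}
		}
	}
	return out
}

func main() {
	args := []string{"--kubelet-allow-privileged=false", "--apiserver-v=99"}
	fmt.Println(splitArgs(args))
	// map[apiserver:[--v=99] kubelet:[--allow-privileged=false]]
}
```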

Related #282

ibuildthecloud added this to the v0.4.0 milestone Mar 31, 2019
erikwilson added the kind/feature label Apr 2, 2019

uablrek commented Apr 3, 2019

Will there be a PR for this very soon?

I ask because I need to alter flags for testing with ipv6 (#284). I already build k3s, so a rebuild is no problem, but if the customizable component flags are far in the future I will probably do something temporary on my own just to be able to continue.


uablrek commented Apr 8, 2019

I have applied this patch and was able to start an ipv6-only cluster (#284).

dnoland1 (Contributor) commented

Verified I could add the --v=99 flag to kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet:

root@k3s-node1:~# k3s server --kube-apiserver-arg="v=99" --kube-scheduler-arg="v=99" --kube-controller-arg="v=99" --kubelet-arg="v=99"
INFO[2019-04-16T00:45:45.954361685Z] Starting k3s v0.4.0-rc3 (be24f83)
INFO[2019-04-16T00:45:45.956690557Z] Running kube-apiserver --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --bind-address=127.0.0.1 --requestheader-extra-headers-prefix=X-Remote-Extra- --service-account-issuer=k3s --requestheader-allowed-names=kubernetes-proxy --tls-cert-file=/var/lib/rancher/k3s/server/tls/localhost.crt --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --kubelet-client-key=/var/lib/rancher/k3s/server/tls/token-node.key --v=99 --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --tls-private-key-file=/var/lib/rancher/k3s/server/tls/localhost.key --requestheader-username-headers=X-Remote-User --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --requestheader-group-headers=X-Remote-Group --service-cluster-ip-range=10.43.0.0/16 --advertise-port=6445 --watch-cache=false --insecure-port=0 --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --allow-privileged=true --advertise-address=127.0.0.1 --secure-port=6444 --api-audiences=unknown --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/token-node.crt --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt
INFO[2019-04-16T00:45:46.116230297Z] Running kube-scheduler --leader-elect=false --v=99 --port=10251 --bind-address=127.0.0.1 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml
INFO[2019-04-16T00:45:46.119174152Z] Running kube-controller-manager --allocate-node-cidrs=true --port=10252 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --cluster-cidr=10.42.0.0/16 --leader-elect=false --bind-address=127.0.0.1 --secure-port=0 --kubeconfig=/var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --root-ca-file=/var/lib/rancher/k3s/server/tls/token-ca.crt --v=99
INFO[2019-04-16T00:45:46.388339532Z] Listening on :6443
INFO[2019-04-16T00:45:46.491998671Z] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-04-16T00:45:46.494161960Z] To join node to cluster: k3s agent -s https://10.0.2.15:6443 -t ${NODE_TOKEN}
INFO[2019-04-16T00:45:46.493923648Z] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz
INFO[2019-04-16T00:45:46.510431995Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-04-16T00:45:46.526818516Z] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml
INFO[2019-04-16T00:45:46.606831444Z] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[2019-04-16T00:45:46.607527056Z] Run: k3s kubectl
INFO[2019-04-16T00:45:46.607980183Z] k3s is up and running
INFO[2019-04-16T00:45:46.700734190Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-04-16T00:45:46.701382539Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-04-16T00:45:46.722146705Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory"
INFO[2019-04-16T00:45:47.725300547Z] Connecting to wss://localhost:6443/v1-k3s/connect
INFO[2019-04-16T00:45:47.726006857Z] Connecting to proxy                           url="wss://localhost:6443/v1-k3s/connect"
INFO[2019-04-16T00:45:47.731045242Z] Handling backend connection request [k3s-node1]
INFO[2019-04-16T00:45:47.733865531Z] Running kubelet --resolv-conf=/run/systemd/resolve/resolv.conf --allow-privileged=true --cluster-domain=cluster.local --root-dir=/var/lib/rancher/k3s/agent/kubelet --cert-dir=/var/lib/rancher/k3s/agent/kubelet/pki --hostname-override=k3s-node1 --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --seccomp-profile-root=/var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir=/var/lib/rancher/k3s/data/767edfed063688477421a09af56486c3bf8181ea873bcf07b34273c7675f1988/bin --cluster-dns=10.43.0.10 --container-runtime=remote --serialize-image-pulls=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.pem --runtime-cgroups=/systemd/user.slice/user-1000.slice --healthz-bind-address=127.0.0.1 --read-only-port=0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeconfig.yaml --address=127.0.0.1 --v=99 --kubelet-cgroups=/systemd/user.slice/user-1000.slice --fail-swap-on=false --authentication-token-webhook=true --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --anonymous-auth=false
Flag --allow-privileged has been deprecated, will be removed in a future version

Didn't test kube-proxy since it appears k3s does not have INFO logs that show the flags used to start kube-proxy.
