Switch to using kubelet config file for all supported flags #10433
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #10433 +/- ##
==========================================
- Coverage 47.83% 43.47% -4.37%
==========================================
Files 181 181
Lines 18794 18906 +112
==========================================
- Hits 8990 8219 -771
- Misses 8452 9479 +1027
+ Partials 1352 1208 -144
Flags with carried forward coverage won't be shown.
LGTM pending RKE2 cgroup checks
I'm going to hold on this until the August release cycle, as I'd like more lead time on testing it.
@dereknola I have confirmed this works on Windows, using RKE2: rancher/rke2#6909

It looks like RKE2 adds some extra CLI flags to the kubelet, so those will need to be migrated over to config as well once this is merged: https://github.com/rancher/rke2/blob/b696280a35f87bb260f26b256d7be839f1945535/pkg/pebinaryexecutor/pebinary.go#L146-L168

root@dev02:~# kubectl get node -o wide
NAME     STATUS   ROLES                       AGE     VERSION          INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                  KERNEL-VERSION     CONTAINER-RUNTIME
dev02    Ready    control-plane,etcd,master   3h42m   v1.31.1+rke2r1   10.0.1.184    <none>        Ubuntu 24.04 LTS                          6.8.0-45-generic   containerd://1.7.21-k3s2
win02    Ready    <none>                      3h31m   v1.31.1          10.0.1.224    <none>        Windows Server 2022 Standard Evaluation   10.0.20348.2113    containerd://1.7.21-k3s2
Moving to WIP until closer to 1.32.
Hey @brandond, are we still planning to do this for 1.32?
I am watching what upstream does with the drop-in config dir feature in kubernetes/enhancements#3983. If it is on by default for 1.32 then I think we can move forward. |
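For context, the drop-in directory feature referenced above lets the kubelet merge multiple KubeletConfiguration fragments from a directory passed via `--config-dir`, with later files (by lexical order) overriding earlier ones. A minimal sketch of such a fragment (the file path mirrors the one used later in this thread; the exact contents k3s writes are not shown here):

```yaml
# /var/lib/rancher/k3s/agent/etc/kubelet.conf.d/00-k3s-defaults.conf
# Fragments in the config dir are merged in lexical order;
# a later file such as 99-user.conf can override these defaults.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolverConfig: /var/lib/rancher/k3s/agent/etc/resolv.conf
```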
Makes logged output more consistent when k3s fails during initialization
Signed-off-by: Brad Davidson <[email protected]>
Expose actual error, so that we can tell if the deployment is not found or not ready/available
Signed-off-by: Brad Davidson <[email protected]>
pkg/daemons/agent/agent.go
Outdated
// replace resolvConf with resolverConfig until Kubernetes 1.32
// ref: https://github.com/kubernetes/kubernetes/pull/127421
b = bytes.ReplaceAll(b, []byte("resolvConf: "), []byte("resolverConfig: "))
return os.WriteFile(filepath.Join(path, "00-"+version.Program+"-defaults.conf"), b, 0600)
Does this need to be changed now that 1.32 is here?
dropped.
ugh nope, still required, I put it back.
root@systemd-node-1:/# grep resolv /var/lib/rancher/k3s/agent/etc/kubelet.conf.d/00-k3s-defaults.conf
resolvConf: /var/lib/rancher/k3s/agent/etc/resolv.conf
root@systemd-node-1:/# kubectl get --raw /api/v1/nodes/systemd-node-1/proxy/configz 2>/dev/null | jq . | grep resolv
"resolvConf": "/etc/resolv.conf",
root@systemd-node-1:/# echo -e "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nresolverConfig: /var/lib/rancher/k3s/agent/etc/resolv.conf" > /var/lib/rancher/k3s/agent/etc/kubelet.conf.d/99-resolv.conf
root@systemd-node-1:/# systemctl restart k3s
root@systemd-node-1:/# grep -rF resolv /var/lib/rancher/k3s/agent/etc/kubelet.conf.d/
/var/lib/rancher/k3s/agent/etc/kubelet.conf.d/00-k3s-defaults.conf:resolvConf: /var/lib/rancher/k3s/agent/etc/resolv.conf
/var/lib/rancher/k3s/agent/etc/kubelet.conf.d/99-resolv.conf:resolverConfig: /var/lib/rancher/k3s/agent/etc/resolv.conf
root@systemd-node-1:/# kubectl get --raw /api/v1/nodes/systemd-node-1/proxy/configz 2>/dev/null | jq . | grep resolv
"resolvConf": "/var/lib/rancher/k3s/agent/etc/resolv.conf",
Signed-off-by: Brad Davidson <[email protected]>
Signed-off-by: Brad Davidson <[email protected]>
Avoid "snapshot save already in progress" flake when snapshot reconcile from previous save is still in progress.
Signed-off-by: Brad Davidson <[email protected]>
Proposed Changes
Switch to using kubelet config file for all supported flags
Types of Changes
enhancement / tech debt
Verification
See linked issue
Testing
Linked Issues
User-Facing Change
Further Comments
Currently using a hacky string replace when marshaling the config file to work around an upstream issue:
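The workaround can be sketched as follows (a minimal, self-contained illustration, not the exact k3s code): the marshaled KubeletConfiguration YAML uses the field name `resolvConf`, but the drop-in config parsing expects `resolverConfig`, so the bytes are patched with a string replace before the defaults file is written.

```go
package main

import (
	"bytes"
	"fmt"
)

// patchResolvConf is a sketch of the hacky string replace described above:
// after marshaling the config to YAML, rewrite the "resolvConf" key to the
// "resolverConfig" key that the kubelet drop-in parser expects.
func patchResolvConf(b []byte) []byte {
	return bytes.ReplaceAll(b, []byte("resolvConf: "), []byte("resolverConfig: "))
}

func main() {
	// Illustrative input; the real code marshals the full KubeletConfiguration.
	marshaled := []byte("kind: KubeletConfiguration\nresolvConf: /etc/resolv.conf\n")
	fmt.Print(string(patchResolvConf(marshaled)))
	// prints:
	// kind: KubeletConfiguration
	// resolverConfig: /etc/resolv.conf
}
```

A proper fix would marshal through a type whose struct tags emit the accepted field name directly, which is presumably why this is flagged as tech debt pending the upstream issue.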