Adapt flags of control plane components #83
Conversation
Skipping CI for Draft Pull Request.
Force-pushed 9258122 to 56c9eed
Force-pushed 56c9eed to 57c1725
I asked for more config options
```diff
@@ -3,9 +3,11 @@
 apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
 kind: KubeadmConfigTemplate
 metadata:
+  finalizers:
+    - cluster-api-cleaner-openstack.finalizers.giantswarm.io
```
Is the future idea that cluster-api-cleaner-openstack removes the finalizer once the KubeadmConfigTemplate isn't needed anymore?
yes
Q1: Should we rethink the scope of cluster-api-cleaner-openstack again sometime? Its description is:

> A helper operator for CAPO to delete resources created by apps in workload clusters.

Q2: Does it make sense to let cluster-api-cleaner-openstack set the finalizer as well? Most controllers I've seen so far add and remove their finalizers on their own.
Will this be needed in other providers as well? Would it make sense to solve it for all providers instead of using cluster-api-cleaner-openstack?
In the kaas-sync yesterday, we decided to implement a generic CAPI operator for this issue. I am going to implement it and then merge this PR.
Force-pushed fabfd82 to 76e8dc3
LGTM when Jose is happy.
I believe …

Yeah. I mentioned it in the PR description.

Sorry, I missed that.
Force-pushed 76e8dc3 to 0b689fd
```diff
@@ -63,6 +63,10 @@ room for such suffix.
       sudo: ALL=(ALL) NOPASSWD:ALL
 {{- end -}}

+{{- define "kubeletExtraArgs" -}}
+{{- .Files.Get "files/kubelet-args" -}}
```
Why not inline?
Isn't it better to have those flags in a separate, valid YAML file instead of embedding messy templating code?
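For illustration, a minimal sketch of how this pattern can fit together. The file path `files/kubelet-args` and the `kubeletExtraArgs` helper come from the diff above; the file's contents and the exact place the helper is included are assumptions, not the chart's actual code:

```yaml
# files/kubelet-args -- plain YAML kept outside the Go templating
# (illustrative flags only; the chart's real list is not shown here)
anonymous-auth: "true"
v: "2"
```

```yaml
# In a template, splice the file into the KubeadmConfigTemplate via the
# helper (sketch; the surrounding structure depends on the real manifest):
joinConfiguration:
  nodeRegistration:
    kubeletExtraArgs:
      {{- include "kubeletExtraArgs" $ | nindent 6 }}
```

Keeping the flags in a plain file means they can be linted as YAML on their own, while the template only does the splicing.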
We will use https://github.com/giantswarm/deletion-blocker-operator to block deletion of the templates. The necessary finalizers will be added by the operator at runtime. We don't need this hack here anymore.
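For context: deletion blocking in Kubernetes works through finalizers. While an entry is present in `metadata.finalizers`, a delete only sets `deletionTimestamp`, and the object persists until a controller removes the entry. A generic sketch of what such an operator-managed object could look like (the object name and the finalizer key below are hypothetical):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: example-worker-r2  # hypothetical name
  finalizers:
    # hypothetical key, added at runtime by the operator and removed
    # once the template is no longer needed by the upgrade
    - deletion-blocker-operator.finalizers.giantswarm.io
```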
Force-pushed 0b689fd to 34acb9f
Towards giantswarm/roadmap#687
This PR adapts the configuration of the api-server, kubelet, controller-manager, etcd, scheduler, and kube-proxy for CAPO to keep it consistent with other providers such as AWS and KVM.
Upgrade
While developing this PR, I noticed that changes in KubeadmConfigTemplate don't trigger any rollout for worker nodes. Then I found these:
Then I added `kubeAdmConfigTemplateRevision` as a suffix to the `KubeadmConfigTemplate` name to trigger upgrades. Unfortunately, we are not allowed to delete the old templates, since the CAPI controllers cannot finish the upgrade without them. We are going to use https://github.com/giantswarm/deletion-blocker-operator to block the deletion of templates.
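A minimal sketch of that suffix technique, assuming a Helm value named `kubeAdmConfigTemplateRevision` (the `resource.default.name` helper is a placeholder for however the chart builds its base names):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  # Bumping the revision value yields a new object name, which makes
  # MachineDeployments referencing it roll out new worker nodes. Old
  # revisions must be kept until CAPI finishes the upgrade.
  name: {{ include "resource.default.name" $ }}-{{ .Values.kubeAdmConfigTemplateRevision }}
```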
The invisible decisions in this PR:

api-server

- `PodSecurityPolicy` is not added to `--feature-gates`. It will be handled in another PR; see "Support PodSecurityPolicies in CAPO" roadmap#1148.
- `RemoveSelfLink` is not added to `--feature-gates`.
- `--authorization-mode` is `Node,RBAC` by default in CAPO. When `Node` is removed, pods in `kube-system` cannot be listed. Didn't touch this flag.
- `--requestheader-allowed-names` was `front-proxy-client` before; it is updated as in this PR.
- `--tls-cipher-suites` is updated to match https://github.com/giantswarm/giantnetes-terraform/pull/585/files
- `--anonymous-auth` is `false` in vintage clusters, but setting it to false in CAPI clusters breaks kubeadm and nodes cannot join the cluster. (A sketch of where these flags land follows this list.)
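For orientation, in CAPI such api-server flags end up under the `KubeadmControlPlane` spec. A trimmed sketch with one flag from the list above (the cipher value is a placeholder, not the full list from the linked PR):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          # placeholder value; the real list follows giantnetes-terraform PR 585
          tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
```

The `controllerManager` and `scheduler` sections take the same kind of `extraArgs` map, and etcd flags go under `etcd.local.extraArgs` in the same `clusterConfiguration` block.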
etcd

- `--snapshot-count=10000`. Didn't touch that flag.
kubelet

- There is a `/var/lib/kubelet/config.yaml` in our image.
- `--anonymous-auth` is already `true` by default, but I saw it set to `false` in `/var/lib/kubelet/config.yaml` in our image. It is better to set it explicitly.
- `--cgroup-driver` is `systemd` in `/var/lib/kubelet/config.yaml` in our image, whereas kubelet's default is `cgroupfs`. Didn't touch it.
- `--resolv-conf` is `/run/systemd/resolve/resolv.conf` in `/var/lib/kubelet/config.yaml`, whereas the default is `/etc/resolv.conf`. Didn't touch it.
- `kernelMemcgNotification` is `true` in vintage AWS, but it is going to be deleted in 1.24, so it is not added in this PR.
- `serializeImagePulls=false`
- `registryBurst=3`
- `registryPullQPS=2`
- `maxPods=32` (see the sketch of these four settings after this list)
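The last four settings map onto `KubeletConfiguration` fields. A sketch of the equivalent config-file form, assuming they are applied as kubelet configuration (whether the chart injects them as flags or via the config file is not visible in this excerpt):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false  # pull images in parallel instead of one at a time
registryBurst: 3            # maximum burst of image pulls
registryPullQPS: 2          # sustained image-pull rate limit
maxPods: 32                 # cap on pods per node
```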
controller-manager

- `--controllers` is `*,bootstrapsigner,tokencleaner` for CAPO. My understanding is that those are necessary for kubeadm: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/
- Vintage sets the `--terminated-pod-gc-threshold=10` flag, but I didn't see any particular reason to decrease this value for CAPO.
- `--port=0`: it is OK; see "kubeadm: add --port=0 for kube-controller-manager and kube-scheduler" kubernetes/kubernetes#92720.
- Vintage sets `--v=2`, but I think it is OK to go with the default value.
scheduler

- The scheduler in AWS has a memory request; the one in KVM doesn't. Now we don't have it for OpenStack either. It is the single diff in the scheduler configuration.

kube-proxy
Testing
Checklist
- Update `CHANGELOG.md`.
- `values.yaml` and `values.schema.json` are valid.