🌱 Refactor KubeadmConfig object update during cluster upgrades #4049
Conversation
/hold
@srm09: GitHub didn't allow me to request PR reviews from the following users: shysank. Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 4ad6f75 to c229ddd
/retitle 🌱 Refactor KubeadmConfig object update during cluster upgrades
Force-pushed from c229ddd to a395518
patchHelper, err := patch.NewHelper(kubeadmConfig.GetConfigMap(), r.Client)
if err != nil {
	return ctrl.Result{}, err
}
@fabriziopandini @vincepri I was having a conversation with @shysank about this, and he mentioned that workload clusters might be managing their own config maps (and these config maps do not get stored in the management cluster). Is that actually the case? If so, the client used to initialize the patchHelper above would not do the right thing.
The kubeadm configmap should technically only be managed through a Cluster API deployment (KCP in this case). With the changes above, it doesn't seem to me that we'd overwrite any data other than what we're managing?
So the patchHelper initialized with the managementCluster.Client would correctly persist the changes to the kubeadmConfig. Thanks for clearing it up.
Just want to confirm on the patch helper client. The code above gets the config map from the workload cluster, but the patch helper updates the config map in the management cluster. Shouldn't the patch helper also use the workload cluster's client? I'm not entirely sure if we should be updating the management cluster's kubeadm config map.
yes ^
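To make that concrete, here is a minimal sketch of initializing the patch helper with the workload cluster's client. The function name, the way the remote client is obtained, and the ConfigMap lookup are illustrative assumptions, not the code in this PR:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/cluster-api/util/patch"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// patchKubeadmConfigMap is a sketch only: remoteClient is assumed to be a
// controller-runtime client that talks to the workload cluster, where the
// kube-system/kubeadm-config ConfigMap actually lives.
func patchKubeadmConfigMap(ctx context.Context, remoteClient client.Client) (ctrl.Result, error) {
	// Read the ConfigMap from the workload cluster.
	cm := &corev1.ConfigMap{}
	key := types.NamespacedName{Namespace: "kube-system", Name: "kubeadm-config"}
	if err := remoteClient.Get(ctx, key, cm); err != nil {
		return ctrl.Result{}, err
	}

	// Initialize the helper with the same workload cluster client so the
	// resulting patch is sent to the cluster that owns the ConfigMap.
	patchHelper, err := patch.NewHelper(cm, remoteClient)
	if err != nil {
		return ctrl.Result{}, err
	}

	// ...mutate cm.Data in memory here...

	// Persist all accumulated changes with a single call.
	return ctrl.Result{}, patchHelper.Patch(ctx, cm)
}
```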
In general, I really like the fact that we are cleaning up the Workload Cluster interface. I still have to complete the review, but I will start with a few initial comments on the new KubeadmConfig interface.
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
@fabriziopandini I refactored the methods according to the pointers you provided. I have one open question I need to clear up, and then I will finalize the changes in the PR.
@vincepri Added the change to update the
/test pull-cluster-api-test-main
/retest
/milestone v0.4.0
Force-pushed from dc1c5f0 to 7d29954
/hold cancel
@srm09 Please squash :)
Instead of issuing multiple update calls during the KCP upgrade, the reconciler makes all changes to the same kubeadmConfig object, and those changes are persisted using a single patch() call. This also removes the intermediate methods on the WorkloadCluster interface; the reconciler calls these methods directly on the kubeadmConfig object. This also exposes a getter for the workload cluster's client. Signed-off-by: Sagar Muchhal <[email protected]>
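As a rough sketch of the flow described in the commit message above; the interface and method names below are illustrative stand-ins, not the exact API introduced by this PR:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/cluster-api/util/patch"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// kubeadmConfig is a hypothetical stand-in for the refactored KubeadmConfig
// abstraction: it wraps the kubeadm-config ConfigMap and exposes in-memory
// mutation methods instead of intermediate methods on WorkloadCluster.
type kubeadmConfig interface {
	GetConfigMap() *corev1.ConfigMap
	UpdateKubernetesVersion(version string)
	UpdateImageRepository(repository string)
	UpdateEtcdImageTag(tag string)
}

func updateKubeadmConfig(ctx context.Context, workloadClient client.Client, cfg kubeadmConfig, version, repository, etcdTag string) (ctrl.Result, error) {
	// Snapshot the ConfigMap before any mutation.
	patchHelper, err := patch.NewHelper(cfg.GetConfigMap(), workloadClient)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Apply every change to the same in-memory object...
	cfg.UpdateKubernetesVersion(version)
	cfg.UpdateImageRepository(repository)
	cfg.UpdateEtcdImageTag(etcdTag)

	// ...then persist them with a single patch() call instead of issuing an
	// update per change.
	return ctrl.Result{}, patchHelper.Patch(ctx, cfg.GetConfigMap())
}
```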
Force-pushed from 7d29954 to 1e171e5
/lgtm
Might need to rework this; this PR stopped working. If I could get some feedback on the approach, I can open another PR for this one.
@srm09: PR needs rebase.
@srm09 I can chat more synchronously tomorrow if that helps, although I'd suggest adding more description about the solution chosen and the problem we're trying to solve.
@vincepri I tried revising this PR off the latest
/lgtm cancel
PR needs rebase
The goal of this PR is still valid, but I'm wondering whether we should freeze this effort and reconsider at a later stage, because a lot is changing in this part of the codebase as a consequence of the introduction of embedded kubeadm types and the introduction of support for multiple kubeadm API versions.
/close
@srm09: Closed this PR.
What this PR does / why we need it:
This patch removes the multiple update calls made to etcd while updating the kubeadmConfig spec during upgrades. Instead, all the updates are batched into a single patch call made at the end of the reconcile loop.
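A small sketch of the before/after at the client level, under the assumption that `c` is a client for the cluster owning the kubeadm-config ConfigMap; the data edits shown are placeholders:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func batchKubeadmConfigMapChanges(ctx context.Context, c client.Client, cm *corev1.ConfigMap) error {
	// Capture the original state so a merge patch can be computed at the end.
	original := cm.DeepCopy()
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}

	// Previously each of these edits was followed by its own update call
	// (each a separate write to the API server and etcd); now they are all
	// applied in memory first.
	cm.Data["ClusterConfiguration"] = "placeholder: updated kubernetesVersion / imageRepository"
	cm.Data["ClusterStatus"] = "placeholder: updated apiEndpoints"

	// A single write at the end of the reconcile loop.
	return c.Patch(ctx, cm, client.MergeFrom(original))
}
```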
Which issue(s) this PR fixes:
Fixes #4007