🐛 Change delete behaviour to respect inventory #5044
Conversation
Hi @killianmuldoon. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I've put this in WIP until I figure out a good way to test the actual issue encountered in #5015.
/ok-to-test
I know this is a WIP; feel free to ignore comments if those parts are still being worked on.
@@ -148,15 +157,17 @@ func Test_providerComponents_Delete(t *testing.T) {
{object: corev1.ObjectReference{APIVersion: "v1", Kind: "Pod", Namespace: "ns2", Name: "pod3"}, deleted: false}, // this object is in another namespace, and should never be touched by delete
{object: corev1.ObjectReference{APIVersion: "rbac.authorization.k8s.io/v1", Kind: "ClusterRole", Name: "ns1-cluster-role"}, deleted: true}, // cluster-wide provider components should be deleted
{object: corev1.ObjectReference{APIVersion: "rbac.authorization.k8s.io/v1", Kind: "ClusterRole", Name: "some-cluster-role"}, deleted: false}, // other cluster-wide objects should be preserved
{object: corev1.ObjectReference{APIVersion: "clusterctl.cluster.x-k8s.io/v1alpha3", Kind: "Provider", Name: "providerOne"}, deleted: false}, // providerInventory should be preserved
Let's use the GroupVersion from the clusterctl API instead of "clusterctl.cluster.x-k8s.io/v1alpha3", so it will be easier to bump our API version in the future.
overall lgtm
Thanks for the work @killianmuldoon 🙂
PR is LGTM for me besides a nit.
The original idea behind this PR was to make the clusterctl upgrade re-entrant. Can we add some tests around this case to see if it works as expected? Maybe in a follow-up PR?
I've added a test for create in provider components to show that an upgrade / patch of existing components happens when an older version exists at create time. I chose to use the pod image name to mock a fine-grained upgrade of the spec of a component through the Create method. There's an existing test of this type already under providerInventory, in Test_inventoryClient_Create. I think these two combined are enough to show that an upgrade following non-deletion of the provider inventory should perform properly. We're still missing an e2e test design for this change, though.
Force-pushed from 7014869 to 583f616
Add an option to the delete functions in clusterctl to allow user configuration of inventory deletion. Clusterctl no longer deletes provider inventories during an upgrade. This reduces the chance of an unrecoverable error during clusterctl upgrade. Signed-off-by: killianmuldoon <[email protected]>
Force-pushed from 583f616 to c26190e
/assign @fabriziopandini
/lgtm
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: fabriziopandini. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Add an option to the delete functions in clusterctl to allow user
configuration of inventory deletion. Clusterctl no longer deletes
provider inventories during an upgrade. This reduces the chance of
an unrecoverable error during clusterctl upgrade.
Signed-off-by: killianmuldoon <[email protected]>
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Fixes #5015