
⚠️ Implement Patch method #235

Merged
merged 2 commits into kubernetes-sigs:master from adracus:feature.patch
Mar 19, 2019

Conversation

@adracus (Contributor) commented Dec 5, 2018

This adds the Patch method to the Client interface as well as to the two implementations (typedClient, unstructuredClient).
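For reference, the shape of the addition as initially proposed (taken from the diff further below; the merged signature changed during review) looked like this:

// Patch, as initially proposed on the Client interface. The patch type
// and raw patch bytes are passed separately, with subresources as a splat.
Patch(ctx context.Context, pt types.PatchType, data []byte, obj runtime.Object, subresources ...string) error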

@k8s-ci-robot added the size/L (Denotes a PR that changes 100-499 lines, ignoring generated files) and cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA) labels on Dec 5, 2018
@adracus (Contributor, Author) commented Dec 5, 2018

/assign @DirectXMan12 @grantr

@k8s-ci-robot (Contributor)

@adracus: GitHub didn't allow me to assign the following users: grantr.

Note that only kubernetes-sigs members and repo collaborators can be assigned.
For more information, please see the contributor guide.

In response to this:

/assign @DirectXMan12 @grantr

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@DirectXMan12 (Contributor)

/hold

Unfortunately, server-side apply hasn't landed yet, but I still feel like this is a really awkward structure to expose to users, especially as one that we have to support into perpetuity.

@k8s-ci-robot added the do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command) label on Dec 5, 2018
@DirectXMan12 (Contributor)

/priority important-soon

Had a few more people ask about this. I think we may need to go forward with some patch-like thing.

@k8s-ci-robot added the priority/important-soon (Must be staffed and worked on either currently, or very soon, ideally in time for the next release) label on Feb 27, 2019
@adracus (Contributor, Author) commented Mar 4, 2019

@DirectXMan12 do you have more input on this? I'd be up for changing this PR to suit these needs.

ingvagabund added a commit to gofed/cluster-api that referenced this pull request Mar 7, 2019
The node draining code itself is imported from github.com/openshift/kubernetes-drain.

At the same time it's currently impossible to use the controller-runtime client for node draining
due to missing Patch operation (kubernetes-sigs/controller-runtime#235).
Thus, the machine controller needs to initialize kubeclient as well in order to
implement the node draining logic. Once the Patch operation is implemented,
the draining logic can be updated to replace kube client with controller runtime client.

Also, initialize event recorder to generate node draining event.
@@ -121,6 +123,15 @@ func (c *client) Delete(ctx context.Context, obj runtime.Object, opts ...DeleteO
return c.typedClient.Delete(ctx, obj, opts...)
}

// Patch implements client.Client
func (c *client) Patch(ctx context.Context, pt types.PatchType, data []byte, obj runtime.Object, subresources ...string) error {
Contributor:
I think it would be better to make this a splat of PatchOptions like we have for ListOptions and DeleteOptions to allow for future expansion. We can then create an option method for subresource.

Contributor:

Our previously discussed subresource plan has been a .Subresource method, but I'm open to debate on that. Otherwise, I agree that using a splat for subresources isn't the right way to go.

@adracus (Contributor, Author):

I removed the subresources splat and exchanged it for a PatchOptionFunc splat. However, since there is no final clarity yet on how subresources should be handled, the PatchOptions are empty for now. Once we agree on subresource handling, we can incorporate it without changing the API.
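For illustration, the functional-options pattern being referred to might look roughly like this; the names are assumptions modeled on the existing ListOptionFunc/DeleteOptionFunc pattern, not the exact merged code:

// PatchOptions holds options for a Patch call. Intentionally empty for
// now; subresource handling can be added later without breaking the API.
type PatchOptions struct{}

// PatchOptionFunc mutates a PatchOptions, mirroring ListOptionFunc and
// DeleteOptionFunc.
type PatchOptionFunc func(*PatchOptions)

// ApplyOptions runs each option function against the options struct.
func (o *PatchOptions) ApplyOptions(optFuncs []PatchOptionFunc) *PatchOptions {
    for _, optFunc := range optFuncs {
        optFunc(o)
    }
    return o
}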

@DirectXMan12 (Contributor)

Ok, so, to move forward:

  • no splat for subresource (see comment)
  • move patch type and data into a single type that's an interface. Something along the lines of
type Patch interface {
    Type() types.PatchType
    Data(obj runtime.Object) []byte
}

This way, we can provide nicer abstractions on top of the patch method (e.g. passing []jsonpatch.Operation instead of raw bytes, or just using client.Apply instead of having to construct an apply patch, etc.).
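As a sketch of such an abstraction (not code from this PR; it assumes a jsonpatch package providing an Operation type, e.g. gomodules.xyz/jsonpatch/v2, along with encoding/json and k8s.io/apimachinery/pkg/types and runtime, and uses the error-returning Data variant that appears later in this thread):

// jsonPatch adapts a list of JSON-patch operations to the proposed
// Patch interface by marshalling them on demand.
type jsonPatch []jsonpatch.Operation

func (p jsonPatch) Type() types.PatchType { return types.JSONPatchType }

func (p jsonPatch) Data(obj runtime.Object) ([]byte, error) {
    // A JSON patch is independent of the target object, so obj is unused.
    return json.Marshal([]jsonpatch.Operation(p))
}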

Also (this is a bit more of a preference), I feel it looks more natural to have the object first, then the patch data, in the argument order.

/hold cancel

Sorry for all the foot-dragging on this :-(

@k8s-ci-robot removed the do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command) label on Mar 14, 2019
@adracus (Contributor, Author) commented Mar 15, 2019

I implemented the spec almost as you described it:
For the Patch interface, I removed the obj *runtime.Object part, since implementations of it would then have to do nasty casting again; see the discussion we had on controllerutil.CreateOrUpdate.

To show that this approach is viable, I went ahead and implemented a CreateMergePatch method in controllerutil, which demonstrates how to create patches without requiring the obj argument to the Data method.
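A minimal sketch of what such a helper could look like (an illustration, not the exact code from this PR; it assumes github.com/evanphx/json-patch imported as jsonpatch, plus encoding/json, and the error-returning Data variant):

// mergePatch is a precomputed RFC 7386 merge patch. It satisfies the
// proposed Patch interface without ever touching the obj argument.
type mergePatch []byte

func (p mergePatch) Type() types.PatchType { return types.MergePatchType }

func (p mergePatch) Data(obj runtime.Object) ([]byte, error) { return p, nil }

// CreateMergePatch computes a merge patch that turns original into modified.
func CreateMergePatch(original, modified runtime.Object) (mergePatch, error) {
    originalJSON, err := json.Marshal(original)
    if err != nil {
        return nil, err
    }
    modifiedJSON, err := json.Marshal(modified)
    if err != nil {
        return nil, err
    }
    data, err := jsonpatch.CreateMergePatch(originalJSON, modifiedJSON)
    return mergePatch(data), err
}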

@DirectXMan12 changed the title from "✨ Implement Patch method" to "⚠️ Implement Patch method" on Mar 15, 2019
@DirectXMan12 (Contributor)

This is a breaking change since it modifies an interface, so I've edited the title.

For the Patch interface I removed the obj *runtime.Object part,

I put the runtime.Object part in there so that we could cleanly have server-side apply work nicely:

cl.Patch(ctx, &desiredObject, client.Apply)

It also means that you can do stuff like

cl.Patch(ctx, &desiredObject, client.MergedFromOld(&oldObject))

Then, client.Apply should look something like

type applyPatcher struct{}

func (p applyPatcher) Type() types.PatchType {
    return types.ApplyPatchType // or whatever it actually is
}

func (p applyPatcher) Data(obj runtime.Object) ([]byte, error) {
    // maybe this should actually be an encoder rather than the
    // unstructured scheme, but it doesn't really matter too much
    return runtime.Encode(unstructured.UnstructuredJSONScheme, obj)
}
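Following that pattern, a MergedFromOld-style constructor might be sketched as below (the name and the github.com/evanphx/json-patch dependency are assumptions for illustration; controller-runtime eventually shipped a similar helper as client.MergeFrom):

// mergeFromPatcher captures an "old" object and computes the merge patch
// against it only when Data is called with the desired state.
type mergeFromPatcher struct {
    old runtime.Object
}

func MergedFromOld(old runtime.Object) mergeFromPatcher {
    return mergeFromPatcher{old: old}
}

func (p mergeFromPatcher) Type() types.PatchType { return types.MergePatchType }

func (p mergeFromPatcher) Data(obj runtime.Object) ([]byte, error) {
    oldJSON, err := json.Marshal(p.old)
    if err != nil {
        return nil, err
    }
    newJSON, err := json.Marshal(obj)
    if err != nil {
        return nil, err
    }
    return jsonpatch.CreateMergePatch(oldJSON, newJSON)
}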

@adracus (Contributor, Author) commented Mar 15, 2019

Alright, I tried to reflect these changes, PTAL.

@DirectXMan12 (Contributor)

Looks good! I'll follow up with some more patch types (json, server-side apply).

/lgtm
/approve

@k8s-ci-robot added the lgtm ("Looks good to me", indicates that a PR is ready to be merged) label on Mar 15, 2019
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: adracus, DirectXMan12

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved (Indicates a PR has been approved by an approver from all required OWNERS files) and size/XL (Denotes a PR that changes 500-999 lines, ignoring generated files) labels, and removed the lgtm and size/L labels, on Mar 15, 2019
@DirectXMan12 (Contributor)

Pushed some fixes plus options support, so we can get this merged.

@DirectXMan12 (Contributor)

I'll do the following in a follow-up PR:

  • merge patch --> strategic merge patch (you generally want the latter; see the sketch after this list)
  • server-side apply (you actually want this, if available)
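For context, a strategic merge patch differs from a plain merge patch in that it consults the Go type's patch metadata (merge keys and patch strategies) so that lists are merged rather than wholesale replaced. A rough sketch of producing one, assuming k8s.io/apimachinery/pkg/util/strategicpatch and k8s.io/api/core/v1 as corev1 (not code from this PR):

// strategicMergePatchFor computes a two-way strategic merge patch between
// the current and desired state of a Pod.
func strategicMergePatchFor(current, desired *corev1.Pod) ([]byte, error) {
    currentJSON, err := json.Marshal(current)
    if err != nil {
        return nil, err
    }
    desiredJSON, err := json.Marshal(desired)
    if err != nil {
        return nil, err
    }
    // The third argument supplies the struct whose patch metadata
    // (merge keys, patch strategies) guides how lists are merged.
    return strategicpatch.CreateTwoWayMergePatch(currentJSON, desiredJSON, &corev1.Pod{})
}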

This supports passing update options to patch options (e.g. dry-run
options).
@DirectXMan12 (Contributor)

Oof, that was a weird flake.

@mengqiy (Member) commented Mar 19, 2019

/lgtm

@k8s-ci-robot added the lgtm ("Looks good to me", indicates that a PR is ready to be merged) label on Mar 19, 2019
@k8s-ci-robot merged commit 8d94f66 into kubernetes-sigs:master on Mar 19, 2019
@adracus deleted the feature.patch branch on March 21, 2019
DirectXMan12 pushed a commit that referenced this pull request Jan 31, 2020
Labels: approved, cncf-cla: yes, lgtm, priority/important-soon, size/XL