
Proposal: SubResources for CustomResources #913

Merged

Conversation

nikhita
Member

@nikhita nikhita commented Aug 15, 2017

CustomResourceDefinitions (CRDs) were introduced in 1.7. The objects defined by CRDs are called CustomResources (CRs). Currently, we do not provide subresources for CRs.

However, it is one of the most requested features and this proposal seeks to add /status and /scale subresources for CustomResources.

cc @sttts @deads2k @enisoc @bgrant0607 @erictune @lavalamp @brendandburns @philips @liggitt @mbohlool @fabxc @adohe @munnerz

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Aug 15, 2017
@nikhita
Member Author

nikhita commented Aug 15, 2017

Ref: kubernetes/kubernetes#38113.

@sttts
Contributor

sttts commented Aug 15, 2017

/cc @kubernetes/sig-api-machinery-api-reviews

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API labels Aug 15, 2017
@nikhita nikhita force-pushed the customresources-subresources branch from 4695745 to c642c92 on August 15, 2017 15:58
## Non-Goals

1. Allow defining arbitrary subresources, i.e. subresources other than `/status` and `/scale`.
2. Unify the many `Scale` types: as long as there is no generic `Scale` object in Kubernetes, we will propose to introduce yet another `Scale` type in the `apiextensions.k8s.io` API group. If Kubernetes switches to a generic `Scale` object, `apiextensions.k8s.io` will follow.
Member

Making scale operate with a different API type will prevent it from working with HPA. I'd expect it to use the same API type

Contributor
@sttts sttts Aug 15, 2017

Same in which sense? Same GroupVersion? Extensionally equal?

Member

@foxish Is the plan to move everything to autoscaling.Scale?

Contributor

HPA should be able to speak multiple versions of the scale subresource exposed via discovery: kubernetes/kubernetes#49971 . I think I'd rather have autoscaling depend on apiextensions than the other way around.

Member

An aside:

The main advantage of subresources is that they enable distinct authorization policies for the subresource endpoints.

To enable just polymorphism of spec and status, I want to make duck typing work, more like our metadata convention. That would be both a simpler mechanism and would be compatible with client-based tools. kubectl set replicas should be able to set the replicas field of any API that has a replicas field.

// StatusPath is restricted to be ".status" and defaults to ".status".
StatusPath string `json:"statusPath,omitempty"`
// The JSON path (e.g. ".spec") of the spec of a CustomResource.
// SpecPath is restricted to be ".spec" and defaults to ".spec".
Member

Is there a reason to make spec/status paths be specified if they are locked to a single path?

Contributor

We want the possibility to add fields later (i.e. no bool, but a struct) and we want to avoid an empty struct. Any better idea?

Contributor

We want the possibility to add fields later (i.e. no bool, but a struct) and we want to avoid an empty struct. Any better idea?

If you make it a pointer (like it is), doesn't the presence or absence of the field trigger you?

Contributor

We can choose to have fixed paths or what we have for EmptyDir today: StatusSubResource: {}.

Which way do we like more? I don't like the {} notation, but that's probably the way to go in YAML, as ugly as it is.

Contributor

Which way do we like more? I don't like the {} notation, but that's probably the way to go in YAML, as ugly as it is.

I prefer the {} to fields that require specific values as a marker.

Member

Add fields to what? Going back to @liggitt's original question: I do not think the spec and status paths should be configurable.

Member Author

Updated the proposal. Now, the status and spec paths are hardcoded to .status and .spec respectively and are no longer configurable.
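For illustration, here is a minimal sketch of how the agreed design could look in Go: the paths stay hardcoded, and the presence of a pointer-to-struct field is the `{}`-style opt-in marker discussed above. Names are illustrative, not the final apiextensions API.

```go
// Sketch only: illustrative names, not the final apiextensions types.
type CustomResourceSubResources struct {
	// Status enables the /status subresource when present. The status is
	// always read from and written to ".status"; the path is not configurable.
	Status *CustomResourceSubResourceStatus `json:"status,omitempty"`
}

// CustomResourceSubResourceStatus is intentionally empty today so that fields
// can be added later without an API break. In YAML the opt-in therefore looks
// like `status: {}`.
type CustomResourceSubResourceStatus struct{}
```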

StatusReplicasPath string `json:"statusReplicasPath,omitempty"`
// optional, e.g. ".spec.labelSelector".
// Only JSON paths without the array notation are allowed.
LabelSelectorPath string `json:"labelSelectorPath,omitempty"`
Member

Define behavior if the specified paths do not exist or do not contain valid data (non-integers for scale paths, non-selector data for the selector path, etc)

Contributor

"do not exist" is described below. Invalid data after the path has been changed in a CRD (can it be changed?) must be added.

Member Author

Added.

### Feature Gate

The `SubResources` field in `CustomResourceDefinitionSpec` will be gated under the `CustomResourceSubResources` alpha feature gate.
If the gate is not open, the value of the new field within `CustomResourceDefinitionSpec` is rejected on creation and updates of CRDs.
Member

Additionally, if the gate is off, the alpha fields should be dropped from incoming requests (see #869 (comment))

Contributor

This was not clear from the existing implementation and the API convention text. Thanks for clarifying. This matches my understanding.

Member Author

Updated.
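To make the agreed behaviour concrete, here is a hedged sketch of dropping the alpha field when the gate is off instead of rejecting the request. The helper name is an assumption, and the feature-gate check is abstracted into a plain bool to avoid tying the example to a specific gate package.

```go
// Sketch only: assumes the CustomResourceDefinitionSpec type from this
// proposal, with its SubResources field.
func dropDisabledFields(spec *CustomResourceDefinitionSpec, subresourcesGateEnabled bool) {
	// If the CustomResourceSubResources gate is off, silently drop the alpha
	// field from incoming create/update requests rather than returning an error.
	if !subresourcesGateEnabled {
		spec.SubResources = nil
	}
}
```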


### Feature Gate

The `SubResources` field in `CustomResourceDefinitionSpec` will be gated under the `CustomResourceSubResources` alpha feature gate.
Member

nit: I think I'd expect this to be "Subresources", since we generally say "subresources", not "sub resources" or "sub-resources"

Member

I thought so too, until I found that we use SubResource in both client-go and as a field in AdmissionReviewSpec. Should we be consistent with English, or with the rest of k8s?

Contributor

If nobody speaks up, I propose to use what we have here now. @liggitt last chance to articulate a strong opinion.

Member

no strong opinion

- The main resource endpoint will ignore all changes in the specified status subpath.
(note: it will **not** reject requests which try to change the status, following the existing semantics of other resources).

- The `.metadata.generation` field is updated if and only if the value at the specified `SpecPath` (e.g. `.spec`) changes.
Member

Does it prevent changing generation if spec does not change?

Contributor
@sttts sttts Aug 15, 2017

The understanding is that the update endpoint increases the generation iff it sees a difference in the spec. What is the expected behaviour about posting an object with modified Generation? Currently the proposal says that this value is overridden by the strategy.

Contributor

What is the expected behaviour about posting an object with modified Generation?

For core objects, we ignore the change.

Contributor

@Kargakis thanks for the clarification. So this means: ignore = we reset it to the old value if the spec is unchanged.

In other words, metadata.Generation is completely uncontrollable for the controllers. This is probably a good thing for consistency.

Member Author

For core objects, we ignore the change.

@Kargakis Maybe missing something here but I can't find any place where we reset metadata.Generation to the old value if spec is unchanged. One example: https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/replicationcontroller/strategy.go#L70.

Contributor

We restore to the old generation one level prior to calling the resource-specific rest handlers in https://github.com/kubernetes/apiserver/blob/149fc2228647cea28b0670c240ec582e985e8eda/pkg/registry/rest/update.go#L100

Member Author

Thanks, @Kargakis. Updated.
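To pin down the semantics agreed above, here is a hedged sketch (assuming the CR is handled as an unstructured object; the function name and exact strategy wiring are assumptions) of how an update path can keep `metadata.generation` out of the controller's hands: restore the old status, reset the generation, and bump it only when the spec actually changed.

```go
import (
	"reflect"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// Sketch only: the real strategy does more (metadata handling, validation).
func prepareForUpdate(newObj, oldObj *unstructured.Unstructured) {
	// The main endpoint ignores status changes: restore the old ".status".
	if oldStatus, found, _ := unstructured.NestedFieldCopy(oldObj.Object, "status"); found {
		_ = unstructured.SetNestedField(newObj.Object, oldStatus, "status")
	} else {
		unstructured.RemoveNestedField(newObj.Object, "status")
	}

	// Generation is owned by the server: reset it, then bump it iff ".spec" changed.
	newObj.SetGeneration(oldObj.GetGeneration())
	oldSpec, _, _ := unstructured.NestedFieldNoCopy(oldObj.Object, "spec")
	newSpec, _, _ := unstructured.NestedFieldNoCopy(newObj.Object, "spec")
	if !reflect.DeepEqual(oldSpec, newSpec) {
		newObj.SetGeneration(oldObj.GetGeneration() + 1)
	}
}
```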

// label query over pods that should match the replicas count.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
// +optional
Selector *metav1.LabelSelector `json:"selector"`
Member

In the external (versioned) structs, all of apps.ScaleStatus, extensions.ScaleStatus, and autoscaling.ScaleStatus use a string (encoded as a label query) for the selector. The internal structs vary: extensions.ScaleStatus uses *metav1.LabelSelector internally, but autoscaling.ScaleStatus keeps it as a string.

We should make this consistent, although I don't know with what.

Contributor

Right. We did not really specify whether we describe the internal or external types. But the way to go for the selector in the external types is to use a string. Good catch. @nikhita can you update that?

Member Author

Updated. Thanks, @enisoc.
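For reference, a small sketch of the distinction settled here (type names abridged; the internal layout is an assumption): the external, versioned form carries the selector as a serialized label query string, as in autoscaling/v1, while an internal representation may keep the structured selector.

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// External (versioned) form: the selector is a serialized label query string.
type ScaleStatus struct {
	Replicas int32  `json:"replicas"`
	Selector string `json:"selector,omitempty"`
}

// Internal form (sketch only): some internal types keep the structured selector.
type InternalScaleStatus struct {
	Replicas int32
	Selector *metav1.LabelSelector
}
```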

#### Status

The status endpoint of a CustomResource receives a full CR object. Changes outside of the status path are ignored.
For validation everything outside of the JSON Path `StatusPath` can be reset to the existing values of the CustomResource. Then the JSON Schema presented in the CRD is validated against the whole object.
Member

It will still pass along the ResourceVersion from the update request to be verified, right? We need to be careful not to overwrite it with the old value.

Member Author

Yes, it will use the new ResourceVersion value. 👍

Contributor

For the validation step the resource version is not important. It will play a role when sending the updated object to the storage layer. For that the described semantics apply, i.e. the status ResourceVersion is used in order to be consistent with optimistic concurrency for the whole object.

Contributor

/sub

@k8s-github-robot k8s-github-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Aug 15, 2017
@nikhita nikhita force-pushed the customresources-subresources branch 2 times, most recently from 1de0b1d to 7c9fcde on August 16, 2017 09:50

```go
type CustomResourceDefinitionSpec struct {
Group string
```
Contributor

Can you omit all existing fields?

Contributor

Just some ..., yes.

Member Author

Done.

@nikhita nikhita force-pushed the customresources-subresources branch from 7c9fcde to 567081d on August 17, 2017 09:27
The status endpoint of a CustomResource receives a full CR object. Changes outside of the status path are ignored.
For validation everything outside of the JSON Path `StatusPath` can be reset to the existing values of the CustomResource. Then the JSON Schema presented in the CRD is validated against the whole object.

Note: one could extract the status suboject, filter the JSON schema by rules applied to the status and then validate the status subobject only.
Member

Typo in suboject

@nikhita nikhita force-pushed the customresources-subresources branch from 567081d to ff5d1de on August 17, 2017 12:08
SpecReplicasPath string `json:"specReplicasPath,omitempty"`
// optional, e.g. ".status.replicas".
// Only JSON paths without the array notation are allowed.
StatusReplicasPath string `json:"statusReplicasPath,omitempty"`
Contributor

Is the belief that we are too late to be prescriptive?

Contributor

@enisoc ^^, you had the argument that we have plenty of paths already today in existing resources. Moreover, there might be a number of existing TPR/CRD users who have chosen their schema. By being prescriptive we make it impossible for them to opt-in.

Member

My argument was that the purpose of the /scale subresource, as far as I know, is to abstract away the knowledge of which field a given object considers to represent its "scale". If we wanted to go the route of a convention for field names, like it must be .status.replicas, then there would be no need for a subresource. You could just look directly at .status.replicas.

This is in contrast to the /status subresource, which I argued ought to be restricted to the convention of .spec and .status field names. The /status endpoint returns the whole object, not a sub-field or a new type like /scale does, and so the only way the caller knows which part is the status is by agreeing on the conventional field name. Similarly, the caller needs to agree that .spec is the field name that triggers .metadata.generation to be incremented and so on.

Because /scale instead returns a separate, abstracted type, there is no need to agree on the underlying field names.

Contributor

@deads2k looks like your comment is placed in the wrong thread. Can you elaborate on what you think is worth fixing?

Contributor

As there are CRDs out in the wild which have chosen a certain replicas path, I still think we should support arbitrary JSON paths for the replicas fields. @deads2k is this something you can live with?

In contrast, I agree that for .spec and .status we should prescribe those paths. I would expect that 99% of users have chosen those anyway.

Contributor

@deads2k is this something you can live with?

I can live with arbitrary json paths for the replicas fields.

Member

I am not ok with arbitrary paths.

Contributor

@bgrant0607 I don't care much for new CRDs. Forcing new users to use a certain replicas field path is fine. But by dropping the ability to use custom paths, IMO we fail on goal 4: transition of existing CRDs. If we accept that, fixed paths are fine.

}

// The following is the payload to send over the wire for /scale. It happens
// to be defined here in apiextensions.k8s.io because we don’t have a global
Contributor

And I don't think that we should. If polymorphic endpoints are to succeed, they need to be able to exist outside of meta.k8s.io.

Contributor

Not a decision we have to make here now. I hope this topic is solved by the time of implementation.

#### Status

The status endpoint of a CustomResource receives a full CR object. Changes outside of the status path are ignored.
For validation everything outside of the JSON Path `StatusPath` can be reset to the existing values of the CustomResource. Then the JSON Schema presented in the CRD is validated against the whole object.
Contributor

I'd recommend skipping validation on the spec. If you tighten validation, you don't generally want to reject callers which are writing "this is the state of the world".

Contributor

That's the effect. It's just an implementation detail with the same semantics as "checking only the status".

Member

I think @deads2k has a point. Consider:

  1. Create object.
  2. Tighten validation such that the existing object Spec no longer passes.
  3. Controller tries to update object Status, perhaps to indicate the object is invalid.

I'm not sure if this is worth fixing though, if all it would do is allow Status to be updated when the Spec is invalid. Users could still hit the same problem by tightening validation on Status itself.

Contributor

We discussed that before (in the proposal I think) and concluded that the user is responsible for ensuring that old objects validate. There is not much we can do. I don't see how this case is any different.

Contributor

We discussed that before (in the proposal I think) and concluded that the user is responsible for ensuring that old objects validate. There is not much we can do. I don't see how this case is any different.

There's no way to identify all the validation rules under .status in the JSON Schema and only run those?

Contributor

At the very least for OneOf in JSON Schema it's not obvious how to cut down a schema in general. If I'm not missing anything, the other JSON Schema constructs are at least monotonic, i.e. if we remove rules from the schema it becomes weaker. But it still needs some deep thought to implement such a projection onto a JSON path. My current gut feeling is that, with the exception of OneOf, it's doable. My argument above was that it's not worth investigating or implementing this if we can test the whole object after restoring the spec and metadata.


I think that to avoid confusion, the proposal should provide an example use-case workflow which shows how the data will be interpreted.

Contributor

I still feel pretty strongly about this. Looks like one of my previous comments landed in the wrong thread:

I'm not sure if this is worth fixing though, if all it would do is allow Status to be updated when the Spec is invalid. Users could still hit the same problem by tightening validation on Status itself.

It's been a big deal for first-class resources during upgrades. It's worth fixing.

My argument above was that it's not worth investigating or implementing this if we can test the whole object after restoring the spec and metadata.

That doesn't do the same thing. If validation is tightened on spec, you can still fail status updates on it. I don't even mind the validation running as long as we never report or fail on bad spec.

Member Author

@deads2k Updated the proposal. Now we validate only against the status.
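A hedged sketch of the outcome agreed above: only the `.status` subobject is validated on status updates, so a spec that no longer passes a tightened schema cannot block a status write. The function shape and the `validate` callback are assumptions standing in for whatever JSON Schema validator the apiserver uses.

```go
import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// Sketch only: statusSchema is assumed to be the "status" portion of the CRD's
// validation schema; validate abstracts the actual JSON Schema validator.
func validateStatusUpdate(
	newObj *unstructured.Unstructured,
	statusSchema interface{},
	validate func(value interface{}, schema interface{}, path *field.Path) field.ErrorList,
) field.ErrorList {
	// Validate only ".status"; changes outside it are restored/ignored anyway.
	status, found, _ := unstructured.NestedFieldNoCopy(newObj.Object, "status")
	if !found {
		return nil
	}
	return validate(status, statusSchema, field.NewPath("status"))
}
```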

- The value at the optional JSON Path `StatusReplicasPath` (e.g. `.status.replicas`) is validated to be an integer if it exists (i.e. it can be empty).

- The value at the optional JSON Path `LabelSelectorPath` (e.g. `.spec.labelSelector`) is validated to be a valid label selector if it exists (i.e. it can be empty).
If it is invalid, an empty `LabelSelector` is used.
Contributor

Why does this field allow invalid values?

Contributor

By changing (or setting for the first time) LabelSelectorPath in an existing CRD. Then we have no chance to validate this post-creation in all CRs.

Contributor

Use-case: opt-in to subresources later-on.

Contributor

By changing (or setting for the first time) LabelSelectorPath in an existing CRD. Then we have no chance to validate this post-creation in all CRs.

In such a case, why shouldn't we return an error from the API until the data is correct? The data isn't doing what the user creating it thought it should. An empty label selector doesn't necessarily mean the same thing.

Contributor

"If it is invalid, an empty LabelSelector is used." means that a label selector pointing to something not-existing in the CR maps to the default value of an empty label selector.

In general, we don't want to synchronously check all CRs in etcd of a given GV against changes in these fields. I.e. we don't want to implement a validation which takes a snapshot of all CRs of a CRD and verifies them. First, there can be many; second, to make that consistent we would have to disable updates on CRs during that time. So the choice here is to have a good-enough fallback that does not break the CRD user's neck, but to give them the responsibility to define consistent things for their CRs. Defining a CRD with subresources is not something you do every day or ad hoc.

Contributor

We have to clarify:

  • a given path is empty. This maps to the default value (0 for the replicas and empty for the label selector)
  • a given path exists, but it does not validate (no integer or no label selector). In that case we can return an error.

The error code, as this is in fact a server-caused error, should be in the 5xx range. But 422 (and, strictly following the spec, also 403) is a good candidate too.
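To pin down the two cases above, here is a hedged sketch (assuming the CR is handled as an unstructured object; the helper name and pre-split path slices are assumptions) of reading the status replicas and selector for /scale: an absent path falls back to the default, while a present-but-invalid value surfaces an error.

```go
import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/labels"
)

// Sketch only: paths are pre-split, e.g. ".status.replicas" -> ["status", "replicas"].
func readScaleStatus(obj map[string]interface{}, replicasPath, selectorPath []string) (int32, string, error) {
	// Absent path: default to 0 replicas.
	replicas := int64(0)
	if v, found, err := unstructured.NestedInt64(obj, replicasPath...); err != nil {
		// Present but not an integer: report an error instead of guessing.
		return 0, "", fmt.Errorf("value at status replicas path is not an integer: %v", err)
	} else if found {
		replicas = v
	}

	// Absent path: default to an empty selector.
	selector := ""
	if v, found, err := unstructured.NestedString(obj, selectorPath...); err != nil {
		return 0, "", fmt.Errorf("value at label selector path is not a string: %v", err)
	} else if found {
		if _, parseErr := labels.Parse(v); parseErr != nil {
			return 0, "", fmt.Errorf("value at label selector path is not a valid label selector: %v", parseErr)
		}
		selector = v
	}
	return int32(replicas), selector, nil
}
```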

#### Status

The status endpoint of a CustomResource receives a full CR object. Changes outside of the status path are ignored.
For validation everything outside of the JSON Path `StatusPath` can be reset to the existing values of the CustomResource. Then the JSON Schema presented in the CRD is validated against the whole object.

I think that to avoid confusion, the proposal should provide an example use-case workflow which shows how the data will be interpreted.


#### Status Replicas Behavior

As only the `scale.Spec.Replicas` field is to be written to by the CR user, the user-provided controller (not any generic CRD controller) counts its children and then updates the controlled object by writing to the `/status` subresource, i.e. the `scale.Status.Replicas` field is read-only.

@nikhita I'm new to this part. Why is it that .spec.replicas is read-only?

Member Author

@nikhita I'm new to this part. Why is it that .spec.replicas is read-only?

We cannot write to the spec or the status of the CR using the /scale subresource. Additionally:

  • .spec.replicas can be written to by the CR user by updating the object.
  • .status.replicas can be written to by the CR controller using /status subresource.

Member
@liggitt liggitt Aug 29, 2017

Typically, spec.replicas is mutable via the scale subresource. That is how the HPA controller would interact with this resource.

Contributor
@sttts sttts Aug 29, 2017

To clarify and sum up:

  • the values at / .spec.replicas (the main resource) and /scale .spec.replicas are the same value. Both can be written if RBAC rules allow that.
  • the values at /status .status.replicas and /scale .status.replicas are the same value. The first can be written if RBAC rules allow that, the second can't.

In practice an autoscaling controller might only write /scale .spec.replicas (and RBAC will be restricted to allow only that). A user might do kubectl edit and write to / .spec.replicas, or use kubectl scale, which writes to /scale .spec.replicas. A controller will write to /status .status.replicas, but not to / .spec.replicas or /scale .spec.replicas.
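As a rough illustration of that split, a hedged sketch of how a client such as an autoscaler could drive the /scale endpoint through the polymorphic scale client. The group/resource names are placeholders, and exact client signatures vary across client-go versions.

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// Sketch only: "example.com"/"foos" are placeholder group/resource names.
func scaleUp(ctx context.Context, scales scale.ScalesGetter, namespace, name string) error {
	gr := schema.GroupResource{Group: "example.com", Resource: "foos"}

	// GET /scale returns an autoscaling/v1 Scale built from the configured JSON paths.
	s, err := scales.Scales(namespace).Get(ctx, gr, name, metav1.GetOptions{})
	if err != nil {
		return err
	}

	// Writing scale .spec.replicas is the one mutation /scale allows;
	// scale .status.replicas stays read-only and is filled in by the CR controller.
	s.Spec.Replicas++
	_, err = scales.Scales(namespace).Update(ctx, gr, s, metav1.UpdateOptions{})
	return err
}
```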

@rjeberhard

We are working on an operator and a CRD. With this proposed change and our creating the necessary SubResource, would we then be able to give administrators the option of scaling with HPA?

@enisoc
Member

enisoc commented Sep 8, 2017

@rjeberhard wrote:

With this proposed change and our creating the necessary SubResource, would we then be able to give administrators the option of scaling with HPA?

Yup, that's the plan. There is another piece in addition to this that also needs to fall into place, though. Currently HPA can only work with things in the extensions API group, but they are planning to make it fully generic so it can work with any object that implements the Scale subresource according to the convention that we're proposing to support here.

// object sent as the payload for /scale. It allows transition
// to future versions easily.
// Today only autoscaling/v1 is allowed.
ScaleGroupVersion schema.GroupVersion `json:"groupVersion"`
Contributor

this is a logical type, but don't actually embed the type here

Contributor

The external type will have a string, the internal the logical one.

Contributor

The external type will have a string, the internal the logical one.

Ok.

@deads2k
Contributor

deads2k commented Oct 27, 2017

lgtm. squash for merge.

@nikhita nikhita force-pushed the customresources-subresources branch from 644b75c to f38bf97 on October 27, 2017 13:59
@nikhita
Member Author

nikhita commented Oct 27, 2017

@deads2k Done.

@deads2k
Contributor

deads2k commented Oct 27, 2017

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 27, 2017
@k8s-github-robot

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot

Automatic merge from submit-queue.

@k8s-github-robot k8s-github-robot merged commit ca27d20 into kubernetes:master Oct 27, 2017
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this pull request Feb 22, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

apiextensions: add subresources for custom resources

Fixes #38113
Fixes #58778

**Related**:
- Proposal: kubernetes/community#913
- For custom resources to work with `kubectl scale`: #58283

**Add types**:

- Add `CustomResourceSubResources` type to CRD.
    - Fix proto generation for `CustomResourceSubResourceStatus`: #55970.
- Add feature gate for `CustomResourceSubResources`.
    - Update CRD strategy: if feature gate is disabled, this feature is dropped (i.e. set to `nil`).
- Add validation for `CustomResourceSubResources`:
    - `SpecReplicasPath` should not be empty and should be a valid json path under `.spec`. If there is no value under the given path in the CustomResource, the `/scale` subresource will return an error on GET.
    - `StatusReplicasPath` should not be empty and should be a valid json path under `.status`. If there is no value under the given path in the CustomResource, the status replica value in the /scale subresource will default to 0.
    - If present, `LabelSelectorPath` should be a valid json path. If there is no value under `LabelSelectorPath` in the CustomResource, the status label selector value in the `/scale` subresource will default to the empty string.
    - `ScaleGroupVersion` should be `autoscaling/v1`.
    - If `CustomResourceSubResources` is enabled, only `properties` is allowed under the root schema for CRD validation.

**Add status and scale subresources**:

- Use helper functions from `apimachinery/pkg/apis/meta/v1/unstructured/helpers.go`.
    - Improve error handling: #56563, #58215.
- Introduce Registry interface for storage.
- Update storage:
    - Introduce `CustomResourceStorage` which acts as storage for the custom resource and its status and scale subresources. Note: storage for status and scale is only enabled when the feature gate is enabled _and_ the respective fields are enabled in the CRD.
    - Introduce `StatusREST` and its `New()`, `Get()` and `Update()` methods.
    - Introduce `ScaleREST` and its `New()`, `Get()` and `Update()` methods.
        - Get and Update use the JSON paths from the CRD to return an `autoscaling/v1.Scale` object.
- Update strategy:
    - In `PrepareForCreate`,
         - Clear `.status`.
         - Set `.metadata.generation` = 1
    - In `PrepareForUpdate`,
         - Do not update `.status`.
             - If both the old and new objects have `.status` and it is changed, set it back to its old value.
             - If the old object has a `.status` but the new object doesn't, set it to the old value.
             - If old object did not have a `.status` but the new object does, delete it.
         - Increment generation if spec changes i.e. in the following cases:
             - If both the old and new objects had `.spec` and it changed.
             - If the old object did not have `.spec` but the new object does.
             - If the old object had a `.spec` but the new object doesn't.
     - In `Validate` and `ValidateUpdate`,
        - ensure that values at `specReplicasPath` and `statusReplicasPath` are >=0 and < maxInt32.
        - make sure there are no errors in getting the value at all the paths.
    - Introduce `statusStrategy` with its methods.
        - In `PrepareForUpdate`:
            - Do not update `.spec`.
                - If both the old and new objects have `.spec` and it is changed, set it back to its old value.
                - If the old object has a `.spec` but the new object doesn't, set it to the old value.
                - If old object did not have a `.spec` but the new object does, delete it.
             - Do not update `.metadata`.
        - In `ValidateStatusUpdate`:
            - For CRD validation, validate only under `.status`.
            - Validate value at `statusReplicasPath` as above. If `labelSelectorPath` is a path under `.status`, then validate it as well.
- Plug into the custom resource handler:
    - Store all three storage - customResource, status and scale in `crdInfo`.
    - Use the storage as per the subresource in the request.
    - Use the validator as per the subresource (for status, only use the schema for `status`, if present).
    - Serve the endpoint as per the subresource - see `serveResource`, `serveStatus` and `serveScale`.
- Update discovery by adding the `/status` and `/scale` resources, if enabled.

**Add tests**:

- Add unit tests in `etcd_test.go`.
- Add integration tests.
    - In `subresources_test.go`, use the [polymorphic scale client](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go/scale) to get and update `Scale`.
    -  Add a test to check everything works fine with yaml in `yaml_test.go`.

**Release note**:

```release-note
`/status` and `/scale` subresources are added for custom resources.
```
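As a rough illustration of the ScaleREST read path described in the commit message above, here is a hedged sketch (assuming the replicas and selector values have already been extracted from the CRD's configured JSON paths) of assembling an autoscaling/v1 Scale from a custom resource.

```go
import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// Sketch only: illustrative helper, not the actual ScaleREST implementation.
func scaleFromCR(cr *unstructured.Unstructured, specReplicas, statusReplicas int32, selector string) *autoscalingv1.Scale {
	return &autoscalingv1.Scale{
		// The Scale object mirrors the CR's identity, including ResourceVersion,
		// so optimistic concurrency still applies to /scale updates.
		ObjectMeta: metav1.ObjectMeta{
			Name:            cr.GetName(),
			Namespace:       cr.GetNamespace(),
			UID:             cr.GetUID(),
			ResourceVersion: cr.GetResourceVersion(),
		},
		Spec:   autoscalingv1.ScaleSpec{Replicas: specReplicas},
		Status: autoscalingv1.ScaleStatus{Replicas: statusReplicas, Selector: selector},
	}
}
```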
k8s-publishing-bot added a commit to kubernetes/client-go that referenced this pull request Feb 27, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

apiextensions: add subresources for custom resources

Fixes #38113
Fixes #58778

**Related**:
- Proposal: kubernetes/community#913
- For custom resources to work with `kubectl scale`: kubernetes/kubernetes#58283

**Add types**:

- Add `CustomResourceSubResources` type to CRD.
    - Fix proto generation for `CustomResourceSubResourceStatus`: kubernetes/kubernetes#55970.
- Add feature gate for `CustomResourceSubResources`.
    - Update CRD strategy: if feature gate is disabled, this feature is dropped (i.e. set to `nil`).
- Add validation for `CustomResourceSubResources`:
    - `SpecReplicasPath` should not be empty and should be a valid json path under `.spec`. If there is no value under the given path in the CustomResource, the `/scale` subresource will return an error on GET.
    - `StatusReplicasPath` should not be empty and should be a valid json path under `.status`. If there is no value under the given path in the CustomResource, the status replica value in the /scale subresource will default to 0.
    - If present, `LabelSelectorPath` should be a valid json path. If there is no value under `LabelSelectorPath` in the CustomResource, the status label selector value in the `/scale` subresource will default to the empty string.
    - `ScaleGroupVersion` should be `autoscaling/v1`.
    - If `CustomResourceSubResources` is enabled, only `properties` is allowed under the root schema for CRD validation.

**Add status and scale subresources**:

- Use helper functions from `apimachinery/pkg/apis/meta/v1/unstructured/helpers.go`.
    - Improve error handling: kubernetes/kubernetes#56563, kubernetes/kubernetes#58215.
- Introduce Registry interface for storage.
- Update storage:
    - Introduce `CustomResourceStorage` which acts as storage for the custom resource and its status and scale subresources. Note: storage for status and scale is only enabled when the feature gate is enabled _and_ the respective fields are enabled in the CRD.
    - Introduce `StatusREST` and its `New()`, `Get()` and `Update()` methods.
    - Introduce `ScaleREST` and its `New()`, `Get()` and `Update()` methods.
        - Get and Update use the json paths from the CRD and use it to return an `autoscaling/v1.Scale` object.
- Update strategy:
    - In `PrepareForCreate`,
         - Clear `.status`.
         - Set `.metadata.generation` = 1
    - In `PrepareForUpdate`,
         - Do not update `.status`.
             - If both the old and new objects have `.status` and it is changed, set it back to its old value.
             - If the old object has a `.status` but the new object doesn't, set it to the old value.
             - If old object did not have a `.status` but the new object does, delete it.
         - Increment generation if spec changes i.e. in the following cases:
             - If both the old and new objects had `.spec` and it changed.
             - If the old object did not have `.spec` but the new object does.
             - If the old object had a `.spec` but the new object doesn't.
     - In `Validate` and `ValidateUpdate`,
        - ensure that values at `specReplicasPath` and `statusReplicasPath` are >=0 and < maxInt32.
        - make sure there are no errors in getting the value at all the paths.
    - Introduce `statusStrategy` with its methods.
        - In `PrepareForUpdate`:
            - Do not update `.spec`.
                - If both the old and new objects have `.spec` and it is changed, set it back to its old value.
                - If the old object has a `.spec` but the new object doesn't, set it to the old value.
                - If old object did not have a `.spec` but the new object does, delete it.
             - Do not update `.metadata`.
        - In `ValidateStatusUpdate`:
            - For CRD validation, validate only under `.status`.
            - Validate value at `statusReplicasPath` as above. If `labelSelectorPath` is a path under `.status`, then validate it as well.
- Plug into the custom resource handler:
    - Store all three storage - customResource, status and scale in `crdInfo`.
    - Use the storage as per the subresource in the request.
    - Use the validator as per the subresource (for status, only use the schema for `status`, if present).
    - Serve the endpoint as per the subresource - see `serveResource`, `serveStatus` and `serveScale`.
- Update discovery by adding the `/status` and `/scale` resources, if enabled.

**Add tests**:

- Add unit tests in `etcd_test.go`.
- Add integration tests.
    - In `subresources_test.go`, use the [polymporphic scale client](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go/scale) to get and update `Scale`.
    -  Add a test to check everything works fine with yaml in `yaml_test.go`.

**Release note**:

```release-note
`/status` and `/scale` subresources are added for custom resources.
```

Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2
k8s-publishing-bot added a commit to kubernetes/client-go that referenced this pull request Feb 27, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

apiextensions: add subresources for custom resources

Fixes #38113
Fixes #58778

**Related**:
- Proposal: kubernetes/community#913
- For custom resources to work with `kubectl scale`: kubernetes/kubernetes#58283

**Add types**:

- Add `CustomResourceSubResources` type to CRD.
    - Fix proto generation for `CustomResourceSubResourceStatus`: kubernetes/kubernetes#55970.
- Add feature gate for `CustomResourceSubResources`.
    - Update CRD strategy: if feature gate is disabled, this feature is dropped (i.e. set to `nil`).
- Add validation for `CustomResourceSubResources`:
    - `SpecReplicasPath` should not be empty and should be a valid json path under `.spec`. If there is no value under the given path in the CustomResource, the `/scale` subresource will return an error on GET.
    - `StatusReplicasPath` should not be empty and should be a valid json path under `.status`. If there is no value under the given path in the CustomResource, the status replica value in the /scale subresource will default to 0.
    - If present, `LabelSelectorPath` should be a valid json path. If there is no value under `LabelSelectorPath` in the CustomResource, the status label selector value in the `/scale` subresource will default to the empty string.
    - `ScaleGroupVersion` should be `autoscaling/v1`.
    - If `CustomResourceSubResources` is enabled, only `properties` is allowed under the root schema for CRD validation.

**Add status and scale subresources**:

- Use helper functions from `apimachinery/pkg/apis/meta/v1/unstructured/helpers.go`.
    - Improve error handling: kubernetes/kubernetes#56563, kubernetes/kubernetes#58215.
- Introduce Registry interface for storage.
- Update storage:
    - Introduce `CustomResourceStorage` which acts as storage for the custom resource and its status and scale subresources. Note: storage for status and scale is only enabled when the feature gate is enabled _and_ the respective fields are enabled in the CRD.
    - Introduce `StatusREST` and its `New()`, `Get()` and `Update()` methods.
    - Introduce `ScaleREST` and its `New()`, `Get()` and `Update()` methods.
        - Get and Update use the json paths from the CRD and use it to return an `autoscaling/v1.Scale` object.
- Update strategy:
    - In `PrepareForCreate`,
         - Clear `.status`.
         - Set `.metadata.generation` = 1
    - In `PrepareForUpdate`,
         - Do not update `.status`.
             - If both the old and new objects have `.status` and it is changed, set it back to its old value.
             - If the old object has a `.status` but the new object doesn't, set it to the old value.
             - If old object did not have a `.status` but the new object does, delete it.
         - Increment generation if spec changes i.e. in the following cases:
             - If both the old and new objects had `.spec` and it changed.
             - If the old object did not have `.spec` but the new object does.
             - If the old object had a `.spec` but the new object doesn't.
     - In `Validate` and `ValidateUpdate`,
        - ensure that values at `specReplicasPath` and `statusReplicasPath` are >=0 and < maxInt32.
        - make sure there are no errors in getting the value at all the paths.
    - Introduce `statusStrategy` with its methods.
        - In `PrepareForUpdate`:
            - Do not update `.spec`.
                - If both the old and new objects have `.spec` and it is changed, set it back to its old value.
                - If the old object has a `.spec` but the new object doesn't, set it to the old value.
                - If old object did not have a `.spec` but the new object does, delete it.
             - Do not update `.metadata`.
        - In `ValidateStatusUpdate`:
            - For CRD validation, validate only under `.status`.
            - Validate value at `statusReplicasPath` as above. If `labelSelectorPath` is a path under `.status`, then validate it as well.
- Plug into the custom resource handler:
    - Store all three storage - customResource, status and scale in `crdInfo`.
    - Use the storage as per the subresource in the request.
    - Use the validator as per the subresource (for status, only use the schema for `status`, if present).
    - Serve the endpoint as per the subresource - see `serveResource`, `serveStatus` and `serveScale`.
- Update discovery by adding the `/status` and `/scale` resources, if enabled.

**Add tests**:

- Add unit tests in `etcd_test.go`.
- Add integration tests.
    - In `subresources_test.go`, use the [polymporphic scale client](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go/scale) to get and update `Scale`.
    -  Add a test to check everything works fine with yaml in `yaml_test.go`.

**Release note**:

```release-note
`/status` and `/scale` subresources are added for custom resources.
```

Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2
k8s-publishing-bot added a commit to kubernetes/apiextensions-apiserver that referenced this pull request Feb 27, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

apiextensions: add subresources for custom resources

Fixes #38113
Fixes #58778

**Related**:
- Proposal: kubernetes/community#913
- For custom resources to work with `kubectl scale`: kubernetes/kubernetes#58283

**Add types**:

- Add `CustomResourceSubResources` type to CRD.
    - Fix proto generation for `CustomResourceSubResourceStatus`: kubernetes/kubernetes#55970.
- Add feature gate for `CustomResourceSubResources`.
    - Update CRD strategy: if feature gate is disabled, this feature is dropped (i.e. set to `nil`).
- Add validation for `CustomResourceSubResources`:
    - `SpecReplicasPath` should not be empty and should be a valid json path under `.spec`. If there is no value under the given path in the CustomResource, the `/scale` subresource will return an error on GET.
    - `StatusReplicasPath` should not be empty and should be a valid json path under `.status`. If there is no value under the given path in the CustomResource, the status replica value in the /scale subresource will default to 0.
    - If present, `LabelSelectorPath` should be a valid json path. If there is no value under `LabelSelectorPath` in the CustomResource, the status label selector value in the `/scale` subresource will default to the empty string.
    - `ScaleGroupVersion` should be `autoscaling/v1`.
    - If `CustomResourceSubResources` is enabled, only `properties` is allowed under the root schema for CRD validation.

**Add status and scale subresources**:

- Use helper functions from `apimachinery/pkg/apis/meta/v1/unstructured/helpers.go`.
    - Improve error handling: kubernetes/kubernetes#56563, kubernetes/kubernetes#58215.
- Introduce Registry interface for storage.
- Update storage:
    - Introduce `CustomResourceStorage` which acts as storage for the custom resource and its status and scale subresources. Note: storage for status and scale is only enabled when the feature gate is enabled _and_ the respective fields are enabled in the CRD.
    - Introduce `StatusREST` and its `New()`, `Get()` and `Update()` methods.
    - Introduce `ScaleREST` and its `New()`, `Get()` and `Update()` methods.
        - Get and Update use the json paths from the CRD and use it to return an `autoscaling/v1.Scale` object.
- Update strategy:
    - In `PrepareForCreate`,
         - Clear `.status`.
         - Set `.metadata.generation` = 1
    - In `PrepareForUpdate`,
         - Do not update `.status`.
             - If both the old and new objects have `.status` and it is changed, set it back to its old value.
             - If the old object has a `.status` but the new object doesn't, set it to the old value.
             - If old object did not have a `.status` but the new object does, delete it.
         - Increment generation if spec changes i.e. in the following cases:
             - If both the old and new objects had `.spec` and it changed.
             - If the old object did not have `.spec` but the new object does.
             - If the old object had a `.spec` but the new object doesn't.
     - In `Validate` and `ValidateUpdate`,
        - ensure that values at `specReplicasPath` and `statusReplicasPath` are >=0 and < maxInt32.
        - make sure there are no errors in getting the value at all the paths.
    - Introduce `statusStrategy` with its methods.
        - In `PrepareForUpdate`:
            - Do not update `.spec`.
                - If both the old and new objects have `.spec` and it is changed, set it back to its old value.
                - If the old object has a `.spec` but the new object doesn't, set it to the old value.
                - If old object did not have a `.spec` but the new object does, delete it.
             - Do not update `.metadata`.
        - In `ValidateStatusUpdate`:
            - For CRD validation, validate only under `.status`.
            - Validate value at `statusReplicasPath` as above. If `labelSelectorPath` is a path under `.status`, then validate it as well.
- Plug into the custom resource handler:
    - Store all three storage - customResource, status and scale in `crdInfo`.
    - Use the storage as per the subresource in the request.
    - Use the validator as per the subresource (for status, only use the schema for `status`, if present).
    - Serve the endpoint as per the subresource - see `serveResource`, `serveStatus` and `serveScale`.
- Update discovery by adding the `/status` and `/scale` resources, if enabled.

**Add tests**:

- Add unit tests in `etcd_test.go`.
- Add integration tests.
    - In `subresources_test.go`, use the [polymporphic scale client](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go/scale) to get and update `Scale`.
    -  Add a test to check everything works fine with yaml in `yaml_test.go`.

**Release note**:

```release-note
`/status` and `/scale` subresources are added for custom resources.
```

Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2
k8s-publishing-bot added a commit to kubernetes/apiextensions-apiserver that referenced this pull request Feb 27, 2018
sttts pushed a commit to sttts/client-go that referenced this pull request Mar 1, 2018
sttts pushed a commit to sttts/client-go that referenced this pull request Mar 1, 2018
sttts pushed a commit to sttts/apiextensions-apiserver that referenced this pull request Mar 1, 2018
sttts pushed a commit to sttts/apiextensions-apiserver that referenced this pull request Mar 1, 2018
@abbi-gaurav

Is there any related issue for the status implementation? I can see one for the scale, though.

// object sent as the payload for /scale. It allows transition
// to future versions easily.
// Today only autoscaling/v1 is allowed.
ScaleGroupVersion schema.GroupVersion `json:"groupVersion"`
Member

I thought we agreed to remove this field in another thread? Did I miss some followup?

Member Author

@nikhita nikhita Mar 18, 2018

> I thought we agreed to remove this field in another thread? Did I miss some followup?

It is removed. 👍

The proposal included this field, but it was later decided in the implementation PR to drop it, so it was removed there.

Only the proposal still mentions it; the proposal text needs to be updated (will do).
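
For reference, a sketch of how the subresources stanza looks after that removal, using the `apiextensions.k8s.io/v1beta1` Go types as merged: there is no scale group/version field, and the `/scale` payload is always `autoscaling/v1`. The json-path values below are only illustrative.

```go
// Sketch: the merged CustomResourceSubresources configuration, without any
// ScaleGroupVersion field. Path values are illustrative.
package main

import (
	"fmt"

	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

func main() {
	labelSelectorPath := ".status.labelSelector"

	subresources := &apiextensionsv1beta1.CustomResourceSubresources{
		// Enables the /status endpoint for the custom resource.
		Status: &apiextensionsv1beta1.CustomResourceSubresourceStatus{},
		// Enables the /scale endpoint, served as an autoscaling/v1 Scale object.
		Scale: &apiextensionsv1beta1.CustomResourceSubresourceScale{
			SpecReplicasPath:   ".spec.replicas",
			StatusReplicasPath: ".status.replicas",
			LabelSelectorPath:  &labelSelectorPath,
		},
	}
	fmt.Printf("%+v\n", subresources)
}
```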

@nikhita
Member Author

nikhita commented Mar 18, 2018

> Is there any related issue for the status implementation? I can see one for the scale, though.

@abbi-gaurav kubernetes/kubernetes#55168 contains the implementation for scale as well as status. You can find docs on how to use status here: https://deploy-preview-7439--kubernetes-io-vnext-staging.netlify.com/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#status-subresource.
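
As a companion to the docs linked above, here is a minimal sketch of a controller writing only to the `/status` subresource through the dynamic client. The `crontabs` resource and the `.status.replicas` field are hypothetical; with the status subresource enabled, this request can change `.status`, while any `.spec` changes in the payload are ignored by the apiserver.

```go
// Sketch: update only the status of a custom resource via the /status
// subresource using the dynamic client. Names are illustrative.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{Group: "stable.example.com", Version: "v1", Resource: "crontabs"}
	crontabs := client.Resource(gvr).Namespace("default")

	cr, err := crontabs.Get(context.TODO(), "my-crontab", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Set a status field and PUT it to .../crontabs/my-crontab/status.
	if err := unstructured.SetNestedField(cr.Object, int64(3), "status", "replicas"); err != nil {
		panic(err)
	}
	if _, err := crontabs.UpdateStatus(context.TODO(), cr, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```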

tamalsaha pushed a commit to kmodules/shared-informer that referenced this pull request Aug 13, 2020
MadhavJivrajani pushed a commit to MadhavJivrajani/community that referenced this pull request Nov 30, 2021
danehans pushed a commit to danehans/community that referenced this pull request Jul 18, 2023
Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API lgtm "Looks good to me", indicates that a PR is ready to be merged. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.