KEP-2590: graduate kubectl subresource support to stable #4468
Conversation
Thank you so much for moving this along, @MadhavJivrajani!
A few minor nits. The biggest one is that you're missing questions from the updated template (https://github.com/kubernetes/enhancements/blob/master/keps/NNNN-kep-template/README.md), specifically:
###### How can someone using this feature know that it is working for their instance?
###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
@MadhavJivrajani, ping me on Slack when you get that updated so I can quickly review the PR and give it some time for PRR.
Signed-off-by: Madhav Jivrajani <[email protected]>
4439f98 to c7d033a
/lgtm
/approve
from sig-cli pov
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: MadhavJivrajani, soltysh
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/assign @johnbelamaric
PRR shadow
Considering that this feature is a flag contained fully within the `kubectl` client, there are no monitoring requirements for it.
This is not true; this KEP required server-side changes. For example, kubernetes/kubernetes#103516 was added in v1.24. From that PR, I do not see any generic integration test that was added to make sure that all `scale` sub-resources continue to support tables going forward. This needs to be added to prevent the feature from regressing in the future for new built-in types. This is not an issue for CRDs because they all use the same code path. It is (generally) not an issue for the `status` sub-resource because that uses the same code path as the regular resource, but it may be good to add an integration test just in case.
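As a rough illustration of the kind of generic check being asked for, here is a minimal sketch in Go. It assumes a test cluster reachable through a kubeconfig and an existing Deployment named `example`; the test name, paths, and object names are placeholders, not the actual kubernetes/kubernetes integration test.

```go
package integration

import (
	"context"
	"encoding/json"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// TestScaleSubresourceServedAsTable is a hypothetical check that the /scale
// subresource of a built-in type can be rendered as a Table, the
// representation kubectl requests for human-readable output.
func TestScaleSubresourceServedAsTable(t *testing.T) {
	// Assumption: a kubeconfig for the test cluster and a Deployment named
	// "example" in the default namespace already exist.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		t.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		t.Fatal(err)
	}

	// Request the scale subresource with the standard Table media type.
	raw, err := client.AppsV1().RESTClient().Get().
		Namespace("default").
		Resource("deployments").
		Name("example").
		SubResource("scale").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io, application/json").
		Do(context.TODO()).
		Raw()
	if err != nil {
		t.Fatalf("request for scale subresource failed: %v", err)
	}

	var table metav1.Table
	if err := json.Unmarshal(raw, &table); err != nil {
		t.Fatalf("response is not a Table: %v", err)
	}
	if table.Kind != "Table" || len(table.ColumnDefinitions) == 0 {
		t.Fatalf("expected a Table with column definitions, got kind %q", table.Kind)
	}
}
```

The same content negotiation is what `kubectl get --subresource=scale` relies on for its table output, so a check along these lines should catch a built-in type whose scale subresource stops serving tables.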
In fact, it is reassuring to see the community use this feature quite commonly, such as in bug reports:
https://github.com/kubernetes/kubernetes/issues/116311

Seeing this, and given our added unit, integration, and e2e tests, gives us the confidence to graduate to GA.

### Upgrade / Downgrade Strategy
Please update all the relevant PRR/version skew sections to consider server-side changes like kubernetes/kubernetes#103516.
###### How can an operator determine if the feature is in use by workloads?

N/A

###### How can someone using this feature know that it is working for their instance?

N/A
Can I use the kube audit logs to find instances of kubectl interacting with sub-resources? Not sure why I would need to do that, but can I?
/lgtm cancel
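On the audit-log question above: a minimal sketch, assuming the API server writes JSON-lines audit events to a local file. The log path is a placeholder; the fields follow the audit.k8s.io/v1 Event schema, which records the subresource in `objectRef.subresource` and the client in `userAgent`.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// auditEvent holds only the audit fields this sketch inspects.
type auditEvent struct {
	Verb      string `json:"verb"`
	UserAgent string `json:"userAgent"`
	ObjectRef struct {
		Resource    string `json:"resource"`
		Subresource string `json:"subresource"`
		Namespace   string `json:"namespace"`
		Name        string `json:"name"`
	} `json:"objectRef"`
}

func main() {
	// Assumption: audit logging is enabled and written here as JSON lines.
	f, err := os.Open("/var/log/kubernetes/audit.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// Audit events can be large; allow lines up to 1 MiB.
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for scanner.Scan() {
		var ev auditEvent
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip lines that are not audit events
		}
		// kubectl sets a User-Agent beginning with "kubectl"; subresource
		// requests carry the subresource name in objectRef.subresource.
		if strings.HasPrefix(ev.UserAgent, "kubectl") &&
			(ev.ObjectRef.Subresource == "status" || ev.ObjectRef.Subresource == "scale") {
			fmt.Printf("%s %s/%s %s/%s\n",
				ev.Verb, ev.ObjectRef.Namespace, ev.ObjectRef.Name,
				ev.ObjectRef.Resource, ev.ObjectRef.Subresource)
		}
	}
}
```

Since kubectl identifies itself in the User-Agent and the subresource shows up in `objectRef.subresource`, this is one way an operator could confirm the feature is actually being exercised against their cluster.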
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/assign @soltysh
/cc @nikhita