MachinePool experiment API group should be under cluster.x-k8s.io
#3424
Comments
/assign @mytunguyen
@rudoi: GitHub didn't allow me to assign the following users: mytunguyen. Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
yeah yeah CI bot, we know :(
@mytunguyen You should open an issue in k8s-sigs to get into the org; I'm happy to sponsor as well. One thing to note: this would be for v1alpha4 and we haven't opened the main branch yet. If you want to get a head start please do :), just a warning that it might have to be rebased later.
/area api
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/kind api-change
/assign
@vincepri any thoughts on how to do this without completely breaking existing machine pool users? If we change the group now, existing machine pools won't be able to upgrade to new versions. Is that acceptable given that the feature was in exp until now?
Yeah, better to do this now rather than later. We need to document the transition. There isn't any good way to do this without breaking users in one way or another; we could have a controller that looks for the old resources and recreates them, although that might cause the infrastructure to be deleted and recreated, unless maybe we have some way to make it seamless, which we might be able to do with MachinePool, given that machines are "virtual". One recreate-style flow is sketched below.
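(A minimal sketch of that recreate idea, assuming a one-off job built on client-go's dynamic client. The exp.cluster.x-k8s.io/v1alpha3 → cluster.x-k8s.io/v1alpha4 coordinates follow this thread, but everything else is illustrative — it skips pausing reconcilers, migrating status, and owner references, and is not the project's actual migration path.)

```go
// Hypothetical one-off migration, sketching the "look for the old resources
// and recreate them" idea with client-go's dynamic client.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Old and new coordinates for MachinePool, per the discussion above.
	oldGVR := schema.GroupVersionResource{Group: "exp.cluster.x-k8s.io", Version: "v1alpha3", Resource: "machinepools"}
	newGVR := schema.GroupVersionResource{Group: "cluster.x-k8s.io", Version: "v1alpha4", Resource: "machinepools"}

	ctx := context.Background()
	list, err := client.Resource(oldGVR).Namespace(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for i := range list.Items {
		obj := list.Items[i].DeepCopy()
		// Re-point the object at the new group and strip server-populated
		// fields so it can be created fresh.
		obj.SetAPIVersion(newGVR.GroupVersion().String())
		obj.SetResourceVersion("")
		obj.SetUID("")
		obj.SetCreationTimestamp(metav1.Time{})
		if _, err := client.Resource(newGVR).Namespace(obj.GetNamespace()).Create(ctx, obj, metav1.CreateOptions{}); err != nil {
			fmt.Printf("failed to recreate %s/%s under %s: %v\n", obj.GetNamespace(), obj.GetName(), newGVR.Group, err)
		}
	}
}
```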
Longer term, we could also potentially leverage the management cluster operator (?? can't remember what naming was decided upon) to perform these types of actions. I do think it would be good to make sure that we are publishing the old CRD info and have webhook validation to reject requests with a friendly message.
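(As a rough illustration of that last point — not anything the project actually ships — a validating webhook for the old group could be as small as a handler that fails every AdmissionReview with a friendly deprecation message. The endpoint path, port, cert paths, and message text below are all assumptions.)

```go
// Hypothetical sketch: reject any request routed here (the webhook
// configuration would scope it to the old exp.cluster.x-k8s.io resources)
// with a friendly message pointing at the new group.
package main

import (
	"encoding/json"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func rejectOldGroup(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "could not decode admission review", http.StatusBadRequest)
		return
	}
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: false,
		Result: &metav1.Status{
			Code: http.StatusUnprocessableEntity,
			Message: "MachinePool has moved from exp.cluster.x-k8s.io to cluster.x-k8s.io; " +
				"please re-create your MachinePools under the new API group",
		},
	}
	// TypeMeta is carried over from the decoded request, as admission/v1 expects.
	if err := json.NewEncoder(w).Encode(&review); err != nil {
		log.Printf("failed to write admission response: %v", err)
	}
}

func main() {
	// Admission webhooks must be served over TLS; cert paths are placeholders.
	http.HandleFunc("/validate-old-machinepool", rejectOldGroup)
	log.Fatal(http.ListenAndServeTLS(":9443", "tls.crt", "tls.key", nil))
}
```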
IMO, a reconciler to migrate the CRDs is a bit overkill for an exp/ feature in an alpha API. We would be introducing more complexity and more code to maintain. I'm leaning towards ripping off the band-aid now and avoiding making the same mistake in the future with other exp APIs.
That seems like a reasonable compromise to me, especially if any users speak up and tell us they would suffer from a breaking change on MachinePools.
Definitely. /cc @devigned
@CecileRobertMichon @devigned Should we do this in v0.4.0, as a release blocker?
/unassign
@nprokopic you had expressed interest in keeping back compat for MachinePool in office hours, if I recall correctly. This change is going to be breaking, so we need to get it in before v0.4.0, I believe. Anyone interested in working on this?
/kind release-blocking
@CecileRobertMichon: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
During the April 28th community meeting, most folks were OK with a breaking change without a migration path. We should definitely document the change and communicate it.
/assign @CecileRobertMichon
MachinePool is today an experiment, and the API group we originally decided to pick was exp.cluster.x-k8s.io. Given that the intent is in the future to move MachinePool to the core API group, we should rewrite the experiment to use cluster.x-k8s.io, or whatever the final group should be.
/kind cleanup
/milestone v0.4.0
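(For reference, the change itself mostly amounts to re-registering the MachinePool types under the core group. A minimal sketch of a kubebuilder-style groupversion_info.go after the move — the package name, version, and markers are assumptions, not the repo's actual layout:)

```go
// Hypothetical groupversion_info.go after moving MachinePool out of the
// experimental group.

// +groupName=cluster.x-k8s.io
package v1alpha4

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
	// GroupVersion: previously Group was "exp.cluster.x-k8s.io"; MachinePool
	// is now served under the core cluster.x-k8s.io group.
	GroupVersion = schema.GroupVersion{Group: "cluster.x-k8s.io", Version: "v1alpha4"}

	// SchemeBuilder registers the MachinePool types with the new group.
	SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

	// AddToScheme adds the types in this group-version to a scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)
```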