Support provider deprecation #5152

Closed
richardcase opened this issue Aug 25, 2021 · 16 comments
Labels
  • area/clusterctl: Issues or PRs related to clusterctl
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@richardcase (Member)

User Story

As a provider implementer, I would like to be able to mark a provider as deprecated, to support changes to the provider types and deployment options for a provider.

Detailed Description

In CAPA prior to version 0.7.0 we had infrastructure, control plane, and bootstrap providers. This separation caused some technical and UX issues, so with the 0.7.0 release we removed the separate control plane and bootstrap providers and folded them into the infrastructure provider.

It would be good if there were a way to mark a provider as deprecated from a specific version onwards (from 0.7.0 in our case, for the aws-eks control plane and bootstrap providers).

We can't just remove the providers, as someone may use clusterctl to install a version prior to 0.7.0.

/kind feature
/area clusterctl

@k8s-ci-robot added the kind/feature and area/clusterctl labels on Aug 25, 2021
@richardcase (Member, Author)

/assign

@richardcase (Member, Author)

@vincepri (Member)

It might be good to have support for deprecation in the metadata file if possible, and also include a deprecation message.
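
For illustration, a deprecation marker in a provider's metadata.yaml might look something like the sketch below. The deprecated block and its fields are hypothetical, not part of the current clusterctl metadata schema:

```yaml
# metadata.yaml for the (removed) aws-eks bootstrap provider.
# NOTE: the `deprecated` block is a hypothetical sketch of the
# requested feature, not an existing clusterctl field.
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Metadata
releaseSeries:
  - major: 0
    minor: 6
    contract: v1alpha3
deprecated:
  # Hypothetical: versions >= fromVersion would no longer be
  # installable, and clusterctl would surface the message instead.
  fromVersion: v0.7.0
  message: "As of CAPA v0.7.0 the aws-eks control plane and bootstrap providers are merged into the aws infrastructure provider."
```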

@sbueringer (Member)

Mirroring what I wrote in Slack. Right now we have the following situation:

  • clusterctl v0.3.x can install v1alpha3 providers (CAPA < v0.7.0)
  • clusterctl v0.4.x can install v1alpha4 providers (CAPA >= v0.7.0) and can only upgrade v1alpha3 providers to v1alpha4

So, tl;dr: the current clusterctl is already unable to install your old providers, as they do not implement the v1alpha4 contract.
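
For context, clusterctl maps a provider's release series to the API contract it implements via the releaseSeries entries in the provider's metadata.yaml, which is what drives the version gating described above. A CAPA-shaped example (the exact minor versions here are illustrative):

```yaml
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Metadata
releaseSeries:
  # Pre-v0.7.0 releases implement the v1alpha3 contract, so only
  # clusterctl v0.3.x will install them.
  - major: 0
    minor: 6
    contract: v1alpha3
  # v0.7.x implements the v1alpha4 contract, so clusterctl v0.4.x
  # can install it (and upgrade older installs to it).
  - major: 0
    minor: 7
    contract: v1alpha4
```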

@richardcase (Member, Author)

Thanks @sbueringer - this behaviour solves the short-term problem for CAPA.

Do we see a need to support provider deprecation in the future?

@sbueringer (Member)

+1, as it provides a better UX for end users, and then fewer questions in Slack :)

@vincepri (Member)

/milestone v1.0

@k8s-ci-robot added this to the v1.0 milestone on Sep 30, 2021
@richardcase (Member, Author)

/unassign

@richardcase (Member, Author)

/help

@k8s-ci-robot (Contributor)

@richardcase:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the help wanted label on Oct 7, 2021
@vincepri modified the milestones: v1.0, v1.1 on Oct 22, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 20, 2022
@fabriziopandini modified the milestones: v1.1, v1.2 on Feb 3, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 5, 2022
@richardcase (Member, Author)

/lifecycle frozen

@k8s-ci-robot added the lifecycle/frozen label and removed the lifecycle/rotten label on Mar 15, 2022
@fabriziopandini added the triage/accepted label on Jul 29, 2022
@fabriziopandini removed this from the v1.2 milestone on Jul 29, 2022
@fabriziopandini removed the triage/accepted label on Jul 29, 2022
@fabriziopandini (Member)

/triage accepted

@k8s-ci-robot added the triage/accepted label on Aug 5, 2022
@fabriziopandini (Member)

(Doing some cleanup on old issues without updates.)
/close
Unfortunately, no one is picking up the task; the thread will remain available for future reference.

@k8s-ci-robot (Contributor)

@fabriziopandini: Closing this issue.

In response to this:

(Doing some cleanup on old issues without updates.)
/close
Unfortunately, no one is picking up the task; the thread will remain available for future reference.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
