CAPZ should use Out of Tree cloud-controller-manager and Storage Drivers
#715
Comments
The 2nd goal "Default should be OOT" is something we're not necessarily ready for. I think for now we want to support optionally using OOT (without any manual steps, possibly using ClusterResourceSet), but I don't think we'll want to move this to be the default right away, to align with other Azure provisioning tools. cc @feiskyer @ritazh. See kubernetes/enhancements#667 for the current Azure OOT provider status.
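For context, the optional OOT install could be wired up with a ClusterResourceSet along these lines. This is only a rough sketch, not the actual CAPZ implementation: the ConfigMap name, the `ccm: external` label, and the API version are assumptions for illustration.

```yaml
# Hypothetical ClusterResourceSet wiring: clusters that opt in via the
# "ccm: external" label get the cloud-provider-azure manifests applied.
# The ConfigMap "cloud-provider-azure-manifests" is assumed to contain the
# CCM Deployment/DaemonSet YAML; the name and label are illustrative only.
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: cloud-provider-azure
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      ccm: external
  resources:
    - kind: ConfigMap
      name: cloud-provider-azure-manifests
---
# A workload cluster opts in by carrying the matching label.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
  labels:
    ccm: external
```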
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/priority important-longterm
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lifecycle frozen
Status update:
- done
- done
- hold until OOT is fully ready
- Added tests for OOT (already testing in tree). Not testing migration currently.
- Done in #1216
Now that v1.0.0 has been released, we should be able to move forward with this
/assign |
cc @sonasingh46
I have been trying to validate this manually, especially around Kubernetes 1.22 --> 1.23 upgrade paths.

AzureDisk CSI Driver
As an effort to extract the cloud provider dependency from Kubernetes, the cloud-provider-dependent code is moving out of in-tree Kubernetes. From Kubernetes version 1.23, AzureDisk CSI migration is enabled by default, so the external AzureDisk CSI driver needs to be installed for Azure Disk volumes to keep working.

AzureFile CSI Driver
The in-tree azureFileCSIDriver will continue to work in 1.23, as azureFileCSIDriver migration is not enabled by default in 1.23. If azureFileCSIDriver migration is enabled by the user/admin, then the external azureFileCSIDriver needs to be installed.

Consider the following upgrade paths from v1.22 to v1.23 (a rough sketch of the Scenario 2 feature-gate change follows below):
Scenario 1: Upgrade a cluster from Kubernetes version 1.22 to 1.23 without any extra tuning or configuration
Scenario 2: Upgrade a cluster from Kubernetes version 1.22 to 1.23 by disabling AzureDisk CSI migration

PS: Still validating other scenarios
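For Scenario 2, a minimal sketch of what disabling AzureDisk CSI migration could look like on a KubeadmControlPlane is below; the object name and values are illustrative assumptions, not a tested CAPZ template.

```yaml
# Sketch only: turn the CSIMigrationAzureDisk feature gate off on
# kube-controller-manager and the kubelets so the in-tree AzureDisk plugin
# keeps serving volumes on 1.23. Replicas, version, machineTemplate, etc.
# are omitted for brevity.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      controllerManager:
        extraArgs:
          feature-gates: CSIMigrationAzureDisk=false
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          feature-gates: CSIMigrationAzureDisk=false
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          feature-gates: CSIMigrationAzureDisk=false
```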
Scenario 3: Upgrade a cluster from Kubernetes version 1.22 to 1.23 by enabling the external cloud provider (see the sketch below)
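For Scenario 3, the external cloud provider toggle amounts to pointing kubelet and kube-controller-manager at `external` and deploying cloud-provider-azure in the workload cluster (for example via a ClusterResourceSet as sketched earlier). Again, a rough sketch under assumed names, not an actual CAPZ template:

```yaml
# Sketch only: run with the external cloud provider by setting
# cloud-provider=external on kube-controller-manager and the kubelets,
# then install cloud-provider-azure separately. Replicas, version,
# machineTemplate, etc. are omitted for brevity.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      controllerManager:
        extraArgs:
          cloud-provider: external
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
```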
@jackfrancis and @Jont828, is this something that should land in milestone v1.5, or will it probably hit the next one?
I'm not too sure; is there a PR open or being worked on for this ATM? Looks like Jack was assigned to it, so maybe we can ask him when he's back.
I think we can land this in the next milestone
/milestone next |
/assign |
Dependencies
Goals
Non-Goals/Future Work
User Story
As an operator, I would like to separate the cloud provider integration from the Kubernetes binaries and use the newer storage drivers and cloud-provider-azure.
Detailed Description
In 2018/2019 Kubernetes started to externalize interactions with the underlying cloud provider to slow down the growth in size of Kubernetes binaries and to decouple the lifecycle and development of Kubernetes from that of the individual cloud provider integrations.
https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/
/kind proposal