Status of project and documentation #42
This repo is kinda dead, yea.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
How fitting that @selslack comments that this repo is kinda dead, and the next comment is the bot adding a stale label, lol... Is there any status on this? I've had the same trouble as @darwin67 with getting the external cloud provider working on k8s 1.17, since in-tree is deprecated now. The Azure and OpenStack cloud provider repos actually have documentation on getting those working, but nothing for this one. Is there anyone out there that's gotten this project working as an external cloud provider in recent versions of Kubernetes?
Would love to see action/support, or at least a definitive response from those that manage this repo. I was able to get the cloud controller and EBS driver working after hammering my head against it for a while, and building the cloud controller image myself, but it was far from pleasant, and the lack of support/responsiveness leads me to fear for future support of this config. Is Amazon abandoning Kubernetes, or trying to force everyone to use EKS, or something else entirely?
This repository is the right location for the external cloud controller manager, and I'll be spending much more time investing in it this year. At some point, likely this year, we will migrate the source for the AWS cloud provider from upstream to this repo. At that point, development will shift from upstream to here. For now, we are importing the upstream cloud provider and relying on bug fixes upstream. That being said, significant work this year needs to be done on testing and documentation in this repository to make it usable, and that's one of my highest priority goals.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@nckturner 👋! I'm wondering if you have any more updates since your last comment a few months ago? Thanks! /remove-lifecycle stale
Same here, looking for clear documentation. Has anyone figured out how to deploy this as a Kubernetes DaemonSet, as instructed in https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#examples?
Hey, thanks for your interest! We are working on investing in documentation and publishing container images, but we're always looking for help! If you're interested in contributing, please let us (myself, @andrewsykim and @justinsb) know as we build out the documentation!
/assign |
@darwin67 regarding sample manifests, we added some in #93. Until we have a public image repo you have to build the image yourself though.
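For anyone following along before those manifests are published: below is a minimal sketch of what such a DaemonSet could look like. The image reference, service account name, and labels are assumptions rather than values published by this project, and the RBAC the service account needs is omitted; treat it as a starting point, not an official manifest.

```yaml
# Sketch only: the image reference and service account are placeholders;
# RBAC for the service account is not shown.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: aws-cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: aws-cloud-controller-manager
    spec:
      # Run on control plane nodes and tolerate the taints present before
      # the cloud provider has initialized the node.
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: "true"
          effect: NoSchedule
      serviceAccountName: cloud-controller-manager   # assumed name
      hostNetwork: true
      containers:
        - name: aws-cloud-controller-manager
          image: <your-registry>/aws-cloud-controller-manager:<tag>  # self-built image
          args:
            - --cloud-provider=aws
            - --v=2
```

The args shown are generic cloud-controller-manager flags; anything beyond `--cloud-provider=aws` (leader election, cluster CIDR, and so on) depends on the cluster, so check the flags of the binary you built.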
@andrewsykim thanks for the update. Great to see you joining as the owner, and I hope this project will be getting more updates.
Looking for some feedback on what the documentation for this project should look like; please comment on #102 if you have thoughts/opinions.
Is there any plan to hoist the legacy code from https://github.com/kubernetes/legacy-cloud-providers into this repo, so that the code can be edited in a central place? Alternatively, would you be unhappy if someone else did that, @andrewsykim? I realize it's "ugly", but it seems like it'd unblock some contributions?
I may have missed some context here. So currently the "central place" is https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers/aws. That is consumed by the in-tree provider and eventually here via k8s.io/legacy-cloud-providers. This ensures we only have to maintain one provider at the moment. In the near future we will cut the tie to legacy-cloud-providers and port the provider into this repo and develop it here. But that can only happen once the in-tree providers are removed. Are you proposing to fork the current provider into this repo and develop it separately?
@andrewsykim correct. The fact that any functionality change has to be made in that repo, and then this repo has to be updated, is messy. It also makes maintaining our own patchset more difficult, as that other repo has a bunch of unrelated stuff. IMHO, it would be easier to declare bankruptcy on the existing repo, say it's EOL, have people move to binaries from this repo, and hoist the relevant AWS code into this repo. There are still aspects of legacy-cloud-providers we might want to use, like configuration, but I don't see any reason to keep the AWS-specific functionality there.
We need to be careful about breaking existing behavior. If we branch off, we could lose bug fixes or accidentally break compatibility for users migrating from in-tree to out-of-tree. I would be in favor of just starting a v2 provider on a clean slate and redesigning it from the ground up (i.e. enabled with
I would much rather see an incremental approach to a v2. We have immediate interests in features such as being able to make the node name the i-<instance ID>, or adding EC2 health check info into the node conditions; we're not wanting to scorch the earth over these. I can put together a PR proposal, if you want. As far as I know this project has no official releases as of yet. We could do this, and release a 0.0.1-alpha, or similar.
This is totally fair, but many of the common feature requests from users, like the node name change, are very difficult to implement without breaking existing clusters. The migration semantics get complicated very quickly. Starting on a clean slate here could possibly be less work overall.
Sure, I would be open to this and we can continue discussions there. Worth noting that we will likely cut an alpha version soon; we were just blocked on getting our GCR registry set up for a while (kubernetes/k8s.io#859).
It's not terribly clear to me what the timeline is for removing in-tree provider support, but clearly this project needs to be up-and-running and probably in wide use before that can happen for the in-tree AWS support. I guess it depends on how different you envisage v1 and v2 being, from either a UX or code-design approach: whether cloud-provider-aws v1 and v2 ever need to be codeveloped, or if that division can be "in-tree"/"out-of-tree" forever.
My thinking here is: v1 (current implementation) is both in-tree / out-of-tree with almost identical behavior. v2 can be a complete rewrite from scratch where we take the good from v1 and redo the bad.
FYI folks, we cut the first alpha release: https://github.com/kubernetes/cloud-provider-aws/releases/tag/v1.18.0-alpha.0. Please try it out and provide feedback; the example manifest is linked in the release notes.
Another relevant question: Where should AWS Cloud Provider issues be lodged? The code lives in https://github.com/kubernetes/kubernetes/ but the code-ownership and future publishing vests here (I guess?). I'm noticing bug reports in both trackers, and sometimes for the same issue.
@andrewsykim See here: #111. I still do not think "starting from scratch" is a great idea...
Starting a v2 provider from scratch wouldn't mean we abandon the existing one. There are some feature requests for the legacy provider, like the node and ELB name change, that are just too difficult to implement without breaking existing clusters. We can maintain both providers for the foreseeable future.
@TBBle I think either works; maybe using this repo would make them easier to find and fit better with future goals for the project, but I doubt we will be able to prevent others from filing issues at k/k, so we'll have to be cognizant of both. @sargun I appreciate your dilemma. I'm open to all options, but we really do have to be careful about breaking existing users. That being said, we need a way to allow contributions that doesn't cause excessive friction. I'm guessing you've submitted your patches upstream at some point and they stagnated; could you link any PRs you have open? If not, let's at least start by opening PRs against k/k so we can discuss them, and decide between v2, copying code over into this repo, or merging into upstream.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Following the announcement of v1.15.0, my understanding is that the future development of cloud providers is being moved out of the Kubernetes core. The existing core code, including AWS, has been moved to https://github.com/kubernetes/legacy-cloud-providers.
This pretty much breaks the link in the note, and it also seems like in-tree providers will be completely removed in the next couple of releases.
We're currently on 1.14.3 and have tested 1.15.0 and can verify that the in-tree provider still works.
But the point I wanted to make is that this repo doesn't seem to have much activity compared to the other major cloud providers.
You also don't have any sample manifests for deploying the cloud-controller-manager, and I haven't had any luck getting the AWS provider to work as an external cloud provider since I first attempted it on 1.13. Asking on the Kubernetes Slack hasn't worked well so far either, which makes me suspect that either no one has gotten this working, or simply no one cares.
Doing a Google search and reading the docs for the CCM and other cloud providers also got me nowhere close to a working example of this external provider.
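For what it's worth, the part that every external provider seems to share is that the cluster itself has to stop using the in-tree provider: the kubelet runs with `--cloud-provider=external`, and the cloud-specific controller loops move into the cloud-controller-manager. A rough sketch of how that might be expressed with kubeadm follows; the API version and field names are what I believe kubeadm accepted around the 1.15-1.17 releases, so treat them as assumptions and verify against your kubeadm version.

```yaml
# Sketch only: switch the components off the in-tree provider so the
# external cloud-controller-manager can take over the cloud-specific work.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external   # kubelet waits for the CCM to initialize the node
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cloud-provider: external   # kube-controller-manager skips the in-tree cloud loops
# kube-apiserver should simply no longer be passed --cloud-provider=aws.
```

Nodes joined later would carry the same kubeletExtraArgs under JoinConfiguration.nodeRegistration.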
So these are my requests:
Activity in this repo for the past 6+ months has mostly been cosmetic changes, and I haven't seen anything related to feature updates, bug fixes, or even documentation.
Please do forgive me if my tone is offensive, and I might just be paranoid for no good reason, but if you can at least clarify 1 for me, that would help me decide how to proceed in the future: either fork this repo and work on it on my own, or take some other path.
FYI:
We don't use EKS and have no plans on doing so in the future.
Thanks!
/triage support