[v1alpha2]: prepare for v1alpha2 adoption #380
Comments
I would start working on this.
Yes, let's work on this jointly. I am studying and preparing for this as well.
Yup okay. I need a few days to catch up to the current status of v1alpha2.
It would be nice if we could get this done first: #382. I have all the code here. Just have to merge the APIServerLB PR and one final one to implement kubeadm. If we move to v1alpha2 first I fear I have to reimplement a lot of that stuff.
We may need to create another branch for v1alpha2?
Definitely. I'm just looking at AWS and I guess it will be more or less half rewrite and the other half copy&paste of existing OpenStack-specific code. That's why it would be nice to have the kubeadm change on master before we start copying it to the v1alpha2 branch, to avoid a lot of sync effort.
I'm interested in helping out with this effort. LMK how I can help.
kubernetes-sigs/cluster-api-provider-aws#933 here's an issue with some guidance we could extrapolate from.
Consider either tagging this issue, or adding a
That's nice to hear, thx! I'm not really sure how we should do the migration to v1alpha2. I think we should do it in the following order:
In my opinion we should finalize the HA control plane feature first, because it's really big, and combining it with a really complex refactoring to v1alpha2 makes it far more complicated. Regarding the migration to v1alpha2 itself: if you look at the first PR in CAPA (892), it's a really huge & complicated refactoring. We probably should do both refactorings on another branch and "freeze" development of major features on master in the meantime to reduce the sync effort between both branches. I'm not sure how we can split these two big refactorings into smaller subtasks. What do you think?
If I get some time this week I'll look at how to break them down into smaller subtasks. The 892 PR is so big it actually crashed my Chrome tab lol. Lots of repo cleanup happening in that PR. I believe there is a different way to structure the repo based on kubebuilder best practices. Not sure if that should be done as part of the refactor. But if it isn't done as part of this refactor, I feel it's less likely to happen in the future. Regarding
So to summarize, it seems we need a release-0.1 branch for v1alpha1 and master for the v1alpha2 work? Any comments?
@CamelCaseNotation You're absolutely right regarding #382. It's a stop-gap solution for v1alpha1 until the kubeadm bootstrap in v1alpha2 is fully functional. Do you know if it's already implemented? @jichenjc I would use a release-0.1 branch for v1alpha1 and then implement v1alpha2 on master, one step/PR after another.
https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/developer/v1alpha1-compared-to-v1alpha2.md might help clarify some concepts a little bit.
@sbueringer #382 can be merged into the v1alpha1 branch for completeness. kubeadm bootstrap work is underway, not yet fully functional AFAIK. @jichenjc What I got from the comparison you linked is that CAPO should develop additional infrastructure creation features besides only instances. I believe @sbueringer's load balancer and FIP creation logic would fit into this well.
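To make that split concrete, here is a minimal sketch of what a standalone v1alpha2 infrastructure cluster type could look like for CAPO, following the Cluster/infrastructure-CRD separation described in the comparison doc. The type and field names below are illustrative assumptions, not the final API:

```go
// Hypothetical v1alpha2 infrastructure types for CAPO; names and fields
// are illustrative assumptions, not the actual API.
package v1alpha2

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// OpenStackClusterSpec carries the OpenStack-specific settings that in
// v1alpha1 were embedded in Cluster.Spec.ProviderSpec.
type OpenStackClusterSpec struct {
	CloudName         string `json:"cloudName,omitempty"`
	ExternalNetworkID string `json:"externalNetworkId,omitempty"`
	// A place where the API server load balancer / FIP logic could live.
	APIServerLoadBalancer bool `json:"apiServerLoadBalancer,omitempty"`
}

// OpenStackClusterStatus reports infrastructure readiness back to the
// core Cluster controller, as the v1alpha2 contract requires.
type OpenStackClusterStatus struct {
	Ready bool `json:"ready"`
}

// OpenStackCluster is the infrastructure counterpart of a v1alpha2 Cluster.
type OpenStackCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   OpenStackClusterSpec   `json:"spec,omitempty"`
	Status OpenStackClusterStatus `json:"status,omitempty"`
}
```

The idea is that the core Cluster object stays generic, while anything OpenStack-specific (networks, load balancers, FIPs) moves into this separate CRD reconciled by CAPO.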
There is a nice guide here on how to update: https://github.com/kubernetes-sigs/cluster-api/blob/a3c9afc487b107b76d2ecf8a90ae3ff1c942b9c2/docs/developer/update-v1alpha1-provider-to-v1alpha2-infra-provider.md I think after the multi-node control plane PR is merged & fixed we can start working on this. Not sure what the best mode would be. I would like to do it as fast as possible to not freeze the master branch for too long. I would also create a release-0.1 branch for v1alpha1 as recommended in the doc.
I would start working on this now that the multi-node control plane PR is merged.
Let's do this together. I created branch-0.1 just now, so let's see how we can move forward.
How can we do this together, any ideas? I still don't really know how this can be broken down. Looks like CAPA did it in more or less 2 or 3 big PRs.
I might need to catch up a little bit and then create a breakdown PR; give me 2-3 more days as I am busy with other stuff as well :(
Yup of course, no problem :)
Okay so I looked at CAPI & CAPA and I would migrate in this order:
Docs / PRs:
Corresponding AWS PRs:
@vincepri @ncdc @detiber @chuckha Can you please take a look and tell me if I forgot something major or if the order is a bad idea? :) Thx
EDIT: I started with the first 2 bullet points. Looks like there will be a lot of follow-up PRs :)
As part of the kubebuilder v2 work, we are completely removing the |
This goes pretty much hand in hand with v1alpha2 support: https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/proposals/20190610-machine-states-preboot-bootstrapping.md#sequence-diagram-user-creates-a-machine-with-kubeadm-bootstrapper We had some initial struggles getting the DNS and Netlify config for the docs working to support per-branch versioning, which has caused us to be a bit behind on getting updated docs together, but we're planning on addressing that over the next couple of weeks.
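For reference, the sequence diagram in that proposal boils down to a simple contract on the infrastructure side: don't create a server until the bootstrap provider has produced cloud-init data. A rough sketch of that gate (not the actual CAPO/CAPA controller code; `machineReconciler` and `reconcileInstance` are hypothetical names) could look like this:

```go
package controllers

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"
)

type machineReconciler struct{}

// reconcileInstance sketches the bootstrap-data gate from the proposal's
// sequence diagram: the infrastructure provider must not create a server
// until the bootstrap provider has filled in Machine.Spec.Bootstrap.Data.
func (r *machineReconciler) reconcileInstance(machine *clusterv1.Machine) (ctrl.Result, error) {
	if machine.Spec.Bootstrap.Data == nil {
		// The kubeadm bootstrap controller hasn't produced the cloud-init
		// payload yet; requeue instead of erroring.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}
	// ...create the OpenStack instance, passing *machine.Spec.Bootstrap.Data
	// as the server's user data...
	return ctrl.Result{}, nil
}
```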
For the AWS provider we are also building a v1alpha1 to v1alpha2 migration tool to help with upgrades. We don't yet have the full upgrade process defined, but we are hoping to provide at least some tooling to automate parts of it, such as the type conversions.
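As a hedged illustration of what such a conversion tool does (the type and field names below are placeholders, not the real CAPA converters), the core of it is mechanical field mapping from the embedded v1alpha1 provider spec to the new standalone infrastructure object:

```go
// Placeholder types sketching a v1alpha1 -> v1alpha2 conversion; the real
// migration tool works on the actual provider API types.
package conversion

// v1alpha1-style spec, previously embedded in Cluster.Spec.ProviderSpec.
type ClusterSpecV1Alpha1 struct {
	CloudName         string
	ExternalNetworkID string
}

// v1alpha2-style spec of the standalone infrastructure CRD.
type OpenStackClusterSpecV1Alpha2 struct {
	CloudName         string
	ExternalNetworkID string
}

// ConvertClusterSpec maps the old embedded spec onto the new CRD spec.
// A real tool would also rewrite owner references and object references
// so the new objects are linked to the core Cluster.
func ConvertClusterSpec(in ClusterSpecV1Alpha1) OpenStackClusterSpecV1Alpha2 {
	return OpenStackClusterSpecV1Alpha2{
		CloudName:         in.CloudName,
		ExternalNetworkID: in.ExternalNetworkID,
	}
}
```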
@detiber I phrased it wrong. I found the doc you referenced; what I wasn't able to find is the implementation of the new kubeadm bootstrapper, that's why I'm not sure whether it already exists. Another topic: I saw in PR kubernetes-sigs/cluster-api-provider-aws#994 that the examples directory was deleted completely. Is there already an alternative? How are you currently testing your code on master? I guess you are using clusterctl from the cluster-api repository with the pivot workflow?
Ah, the bootstrap provider is here: https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm
Not yet, that is still an outstanding todo for us.
Quite a few folks are using Tilt to facilitate testing. An example setup is here: https://github.com/chuckha/capi-dev
Not currently, clusterctl is one of the things that still needs to be updated: kubernetes-sigs/cluster-api#1187
@detiber Thx for the information ;)
First we can go with Ubuntu, but I think we need to continue to support CentOS and CoreOS later...
Sorry, my information is already outdated :). There is ongoing work to support CentOS (there is an issue with cloud-init) and multiple output formats in kubeadm (so Ignition can also be supported): kubernetes-retired/cluster-api-bootstrap-provider-kubeadm#121 I think OS support is something which will probably be independent of the provider in the future. It seems to be mostly dependent on the bootstrap provider, which is very nice. So the main question will be which operating systems we have examples for and which we test.
In my opinion that leaves us, after PR #455, with the following outstanding tasks for v1alpha2:
So only documentation is still open. I will create a PR in the next few days, and then hopefully we can create our first release based on v1alpha2 :)
I opened a PR for documentation (#470). I will create another PR about supported permutations and some details after the release, when I have some more time :)
/kind feature
Describe the solution you'd like
kubernetes-sigs/cluster-api#997 introduced a design doc about v1alpha2; we should start to think about how to enable it.