
[v1alpha2]: prepare for v1alpha2 adoption #380

Closed
jichenjc opened this issue Jun 27, 2019 · 33 comments · Fixed by #474
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@jichenjc
Contributor

/kind feature

Describe the solution you'd like
[A clear and concise description of what you want to happen.]

kubernetes-sigs/cluster-api#997 introduced a design doc for v1alpha2; we should start thinking about how to enable it.
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 27, 2019
@jichenjc jichenjc changed the title prepare for v1alphe2 adoption prepare for v1alpha2 adoption Jun 27, 2019
@sbueringer
Member

I would start working on this
/assign @sbueringer

@jichenjc
Contributor Author

Yes, let's work on this jointly. I am studying and preparing for this as well,
so maybe we can break it into sub-tasks :)

@sbueringer
Member

Yup okay. I need a few days to catch up on the current status of v1alpha2

@sbueringer
Member

It would be nice if we could get this done first: #382

I have all the code here. Just have to merge the APIServerLB PR and one final one to implement kubeadm.

If we move to v1alpha2 first, I fear I'll have to reimplement a lot of that stuff.

@jichenjc
Contributor Author

We may need to create another branch for v1alpha2?

@sbueringer
Member

sbueringer commented Jul 27, 2019

Definitely. I'm just looking at AWS, and I guess it will be more or less half a rewrite and the other half copy & paste of existing OpenStack-specific code. That's why it would be nice to have the kubeadm change on master before we start copying it to the v1alpha2 branch, to avoid a lot of sync effort.

@texascloud

I'm interested in helping out with this effort. LMK how I can help

@texascloud

kubernetes-sigs/cluster-api-provider-aws#933 here's an issue with some guidance we could extrapolate from

@texascloud

Consider either tagging this issue, or adding a [v1alpha2] prefix to the issue title so it may be more easily tracked, thanks!

@jichenjc jichenjc changed the title prepare for v1alpha2 adoption [v1alpha2]: prepare for v1alpha2 adoption Jul 30, 2019
@sbueringer
Member

sbueringer commented Jul 30, 2019

I'm interested in helping out with this effort. LMK how I can help

That's nice to hear, thx!

I'm not really sure how we should do the migration to v1alpha2. I think we should do it in the following order:

In my opinion we should finalize the HA control plane feature first, because it's really big, and combining it with a really complex refactoring to v1alpha2 would make everything far more complicated.

Regarding the migration to v1alpha2 itself: if you look at the first PR in CAPA (892), it's a really huge & complicated refactoring. We should probably do both refactorings on another branch and "freeze" development of major features on master in the meantime, to reduce the sync effort between both branches.

I'm not sure how we can split these two big refactorings in smaller subtasks. What do you think?

@texascloud

If I get some time this week I'll look at how to break them down into smaller subtasks. The 892 PR is so big it actually crashed my Chrome tab lol. Lots of repo cleanup happening in that PR. I believe there is a different way to structure the repo based on kubebuilder best practices. Not sure if that should be done as part of the refactor, but if it isn't done as part of this refactor, I feel it's less likely to happen in the future.

Regarding Generate kubeadm configs for multi-node control plane on #382, I think that is complicated. In v1alpha2 we have a kubeadm Bootstrap provider being started, which should ideally handle all multi control plane cluster configuration. So the last point on #382 would be a stop-gap solution for v1alpha1 IMO

@jichenjc
Contributor Author

So to summarize, it seems we need a release-0.1 branch for v1alpha1 and master for v1alpha2?
Or something similar but with master for v1alpha1? It looks like we need a big refactoring, so we can't make those two happen concurrently on one branch...

any comments?

@sbueringer
Member

@CamelCaseNotation You're absolutely right regarding #382. It's a stop-gap solution for v1alpha1 and until the kubeadm bootstrap in v1alpha2 is fully functional. Do you know if it's already implemented?

@jichenjc I would use a release-0.1 branch for v1alpha1 and then implement v1alpha2 on master one step/PR after another

@jichenjc
Contributor Author

jichenjc commented Aug 2, 2019

@texascloud

@sbueringer #382 can be merged into the v1alpha1 branch for completeness. kubeadm bootstrap work is underway, not yet fully functional AFAIK.

@jichenjc what I got from the comparison you linked is that CAPO should develop additional infrastructure creation features beyond just instances. I believe @sbueringer's load balancer and FIP creation logic would fit in well here

@sbueringer
Member

sbueringer commented Aug 14, 2019

There is a nice guide here on how to update: https://github.com/kubernetes-sigs/cluster-api/blob/a3c9afc487b107b76d2ecf8a90ae3ff1c942b9c2/docs/developer/update-v1alpha1-provider-to-v1alpha2-infra-provider.md

I think after the multi-node control plane PR is merged & fixed we can start working on this. Not sure what the best mode would be. I would like to do it as fast as possible so as not to freeze the master branch for too long. I would also create a release-0.1 branch for v1alpha1, as recommended in the doc

@sbueringer
Member

I will start working on this now that the multi-node control plane PR is merged

@jichenjc
Contributor Author

Let's do this together. I created branch-0.1 just now, so let's see how we can move forward

@sbueringer
Member

How can we do this together, any ideas? I still don't really know how this can be broken down. Looks like CAPA did it in more or less 2 or 3 big PRs

@jichenjc
Contributor Author

I might need to catch up a little bit and then create a breakdown PR; give me 2-3 more days as I am busy with other stuff as well :(

@sbueringer
Member

Yup of course, no problem :)

@sbueringer
Member

sbueringer commented Aug 16, 2019

Okay so I looked at CAPI & CAPA and I would migrate in this order:

Doc:

PRs

Corresponding AWS PRs

@vincepri @ncdc @detiber @chuckha Can you please take a look and tell me if I forgot something major or if the order is a bad idea? :) Thx

EDIT: I started with the first 2 bullet points. Looks like there will be a lot of followup PRs :)

@detiber
Member

detiber commented Aug 16, 2019

As part of the kubebuilder v2 work, we are completely removing the pkg directory in sigs.k8s.io/cluster-api, and plan to follow suit in sigs.k8s.io/cluster-api-provider-aws to provide better consistency with the default kubebuilder v2 layout.

@detiber
Member

detiber commented Aug 16, 2019

Migrate to kubeadm bootstrapper (TODO: couldn't find any information about that)

This goes pretty much hand in hand with v1alpha2 support: https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/proposals/20190610-machine-states-preboot-bootstrapping.md#sequence-diagram-user-creates-a-machine-with-kubeadm-bootstrapper

We had some initial struggles getting the DNS and Netlify config for the docs working to support per-branch versioning which has caused us to be a bit behind on getting updated docs together, but we're planning on addressing that over the next couple of weeks.

@detiber
Member

detiber commented Aug 16, 2019

For the AWS provider we are also building a v1alpha1 to v1alpha2 migration tool to help with upgrades. We don't yet have the full upgrade process defined, but we are hoping to provide at least some tooling to automate parts of it, such as the type conversions.
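Such a migration tool would, among other things, automate the mechanical type conversions. A minimal sketch of what one converted field could look like; the types and the label key here are hypothetical stand-ins, not the actual CAPI shapes:

```go
package main

import "fmt"

// Hypothetical, heavily simplified stand-ins for the real v1alpha1 and
// v1alpha2 machine types; the actual cluster-api types carry far more fields.
type v1alpha1MachineSpec struct {
	ProviderID string
	Roles      []string // modeled here as living directly on the machine
}

type v1alpha2MachineSpec struct {
	ProviderID string
	Labels     map[string]string // roles expressed as labels instead
}

// convertMachineSpec shows the kind of mechanical conversion a migration
// tool could automate: copy fields that map 1:1 and translate the ones
// whose shape changed. The label key is invented for illustration.
func convertMachineSpec(in v1alpha1MachineSpec) v1alpha2MachineSpec {
	out := v1alpha2MachineSpec{
		ProviderID: in.ProviderID,
		Labels:     map[string]string{},
	}
	for _, role := range in.Roles {
		out.Labels["cluster.x-k8s.io/role-"+role] = "true"
	}
	return out
}

func main() {
	old := v1alpha1MachineSpec{ProviderID: "openstack:///abc", Roles: []string{"master"}}
	fmt.Println(convertMachineSpec(old).ProviderID)
}
```

Fields with no direct v1alpha2 equivalent are exactly where such tooling stops and the "full upgrade process" mentioned above still needs a human decision.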

@sbueringer
Member

sbueringer commented Aug 16, 2019

Migrate to kubeadm bootstrapper (TODO: couldn't find any information about that)

This goes pretty much hand in hand with v1alpha2 support: https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/proposals/20190610-machine-states-preboot-bootstrapping.md#sequence-diagram-user-creates-a-machine-with-kubeadm-bootstrapper

We had some initial struggles getting the DNS and Netlify config for the docs working to support per-branch versioning which has caused us to be a bit behind on getting updated docs together, but we're planning on addressing that over the next couple of weeks.

@detiber I phrased it wrong. I found the doc you referenced; I wasn't able to find the implementation of the new kubeadm bootstrapper, which is why I'm not sure whether it already exists.

Another topic: I saw in PR kubernetes-sigs/cluster-api-provider-aws#994 the examples directory was deleted completely. Is there already an alternative? How are you currently testing your code on master? I guess you are using clusterctl from the cluster-api repository with the pivot workflow?

@detiber
Member

detiber commented Aug 16, 2019

Ah, the bootstrap provider is here: https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm

Another topic: I saw in PR kubernetes-sigs/cluster-api-provider-aws#994 the examples directory was deleted completely. Is there already an alternative?

Not yet, that is still an outstanding todo for us.

How are you currently testing your code on master?

Quite a few folks are using Tilt to facilitate testing. An example setup is here: https://github.com/chuckha/capi-dev

I guess you are using clusterctl from the cluster-api repository with the pivot workflow?

Not currently, clusterctl is one of the things that still needs to be updated: kubernetes-sigs/cluster-api#1187

@sbueringer
Member

@detiber Thx for the information ;)

@jichenjc
Contributor Author

jichenjc commented Aug 19, 2019

  * We have to figure out which operating systems are supported with v1alpha2. kubeadm bootstrapper seems to have a dependency on cloud-init. (sbueringer: I would be okay with just supporting Ubuntu from now on)

First we can go with Ubuntu, but I think we need to continue supporting CentOS and CoreOS later...
OpenShift, which potentially uses this (forked), may be interested in CoreOS... if we can, we may contribute CoreOS support to the kubeadm provider later

@sbueringer
Member

  * We have to figure out which operating systems are supported with v1alpha2. kubeadm bootstrapper seems to have a dependency on cloud-init. (sbueringer: I would be okay with just supporting Ubuntu from now on)

First we can go with Ubuntu, but I think we need to continue supporting CentOS and CoreOS later...
OpenShift, which potentially uses this (forked), may be interested in CoreOS... if we can, we may contribute CoreOS support to the kubeadm provider later

Sorry, my information is already outdated :). There is ongoing work to support CentOS (there is an issue with cloud-init) and multiple output formats in kubeadm (so Ignition can also be supported): kubernetes-retired/cluster-api-bootstrap-provider-kubeadm#121

I think OS support is something that will probably be independent of the provider in the future. It seems to depend mostly on the bootstrap provider, which is very nice. So the main question will be which operating systems we have examples for and which ones we test

@sbueringer
Member

sbueringer commented Sep 1, 2019

In my opinion that leaves us after this PR #455 with the following outstanding tasks for v1alpha2:

@sbueringer
Member

So only documentation remains open. I will create a PR in the next few days, and then hopefully we can create our first release based on v1alpha2 :)

@sbueringer
Member

I opened a PR for documentation (#470). I will create another PR about supported permutations and some details after the release, when I have some more time :)
