
Enable cluster-api can manage cluster for different k8s distributions #853

Closed
gyliu513 opened this issue Mar 25, 2019 · 24 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@gyliu513
Contributor

/kind feature

Describe the solution you'd like

Currently, cluster-api can only create native k8s clusters and does not support other k8s distributions, like Red Hat OpenShift, IBM Cloud Private, etc. Does cluster-api have any plan to let customers define what kind of k8s distribution cluster they want to create?

Anything else you would like to add:

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 25, 2019
@gyliu513
Contributor Author

@jichenjc
Contributor

It looks like OpenShift already has something, though I don't know the details...
https://github.com/openshift/cluster-operator

@gyliu513
Contributor Author

Thanks @jichenjc. It seems https://github.com/openshift/cluster-operator is trying to install OpenShift on AWS, but I'm wondering whether cluster-api can provide a generic way to provision any k8s distribution on one cloud provider. Am I missing anything?

@enxebre
Member

enxebre commented Mar 25, 2019

@jichenjc https://github.com/openshift/cluster-operator is deprecated.
We use https://github.com/openshift/machine-api-operator to enable the machine API in a given cluster (i.e. it makes the machine CRDs and controllers available in an existing cluster), and we plan to gradually adopt other parts of the upstream API as they mature. For OpenShift today, the initial bootstrapping workflow is driven by https://github.com/openshift/installer (there's no dependency on clusterctl).
There's a proposal for decoupling the API, which would hopefully help different distributions adopt it in a gradual fashion: https://docs.google.com/document/d/1pzXtwYWRsOzq5Ftu03O5FcFAlQE26nD3bjYBPenbhjg/edit#heading=h.vd1w04ud44q3

@detiber
Member

detiber commented Mar 25, 2019

@gyliu513 this is one of the things that we are looking to address post-v1alpha1.

/milestone Next

@k8s-ci-robot k8s-ci-robot added this to the Next milestone Mar 25, 2019
@gyliu513
Contributor Author

@detiber are there any discussions or documents that I can refer to? I also want to check how I can contribute to this.

@detiber
Member

detiber commented Mar 25, 2019

Since there have been many varied discussions and proposals around this (and other design topics), we wanted to start by gaining consensus around what Cluster API is and should be prior to diving too deep into particular proposals (start with high-level alignment before trying to get low-level alignment around design implementations).

This is something that we'll be discussing at the Cluster API meeting this week. Since the meeting time is not convenient for all contributors, we plan on having a broader discussion using the sig-cluster-lifecycle mailing list as well.

@gyliu513
Contributor Author

That makes sense, thanks @detiber. Looking forward to the mailing list discussion ;-)

@jichenjc
Contributor

@enxebre this is very helpful to me, and I am actually trying to contribute to OpenShift on OpenStack as well :)

jichenjc added a commit to jichenjc/cluster-api that referenced this issue Mar 26, 2019
@gyliu513 gyliu513 changed the title Enable cluster-api can create cluster for different k8s distributions Enable cluster-api can manage cluster for different k8s distributions Mar 27, 2019
jichenjc added a commit to jichenjc/cluster-api that referenced this issue Mar 27, 2019
k8s-ci-robot pushed a commit that referenced this issue Mar 27, 2019
serbrech pushed a commit to serbrech/cluster-api that referenced this issue Apr 8, 2019
@gyliu513
Contributor Author

gyliu513 commented Apr 19, 2019

After more thinking, I think this task may belong to different cloud providers.

We can take the OpenStack cloud provider as an example: this provider uses user-data to do post-install work, and the user-data helps install the Kubernetes cluster. So we may need to enhance the user-data to support installing different Kubernetes distributions, like IBM Cloud Private, OpenShift, K3s, etc.
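To make the idea concrete, the distribution choice could be surfaced in the user-data handed to the machine. A minimal cloud-init sketch (the file paths, marker value, and install script here are purely illustrative, not the provider's actual user-data):

```yaml
#cloud-config
# Hypothetical user-data: record which distribution this node should run.
write_files:
  - path: /opt/bootstrap/distribution
    content: "kubernetes"   # could be "openshift", "icp", "k3s", ...
runcmd:
  # A provider-specific bootstrap script would read the distribution marker
  # and invoke the matching installer (kubeadm, openshift-install, k3s, ...).
  - [ /opt/bootstrap/install.sh ]
```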

@jichenjc @detiber @vincepri WDYT? Thanks!

@gyliu513
Contributor Author

FYI @xunpan

@gyliu513
Contributor Author

gyliu513 commented Apr 19, 2019

It seems clusterctl or the machine spec still needs to be enhanced to support specifying what kind of k8s distribution the end user wants to install.

items:
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  metadata:
    generateName: liugya-master-
    labels:
      set: master
  spec:
    providerSpec:
      value:
        apiVersion: "openstackproviderconfig/v1alpha1"
        kind: "OpenstackProviderSpec"
        flavor: m1.xlarge
        image: KVM-Ubt18.04-Srv-x64
        sshUserName: cloudusr
        keyName: cluster-api-provider-openstack
        availabilityZone: nova
        networks:
        - uuid: e2d9ead6-759b-4592-873d-981d3db07c86
        floatingIP: 9.20.206.22
        securityGroups:
        - uuid: 97acf9d4-e5bf-4fff-a2c0-be0b04fbc44b
        userDataSecret:
          name: master-user-data
          namespace: openstack-provider-system
        trunk: false
    versions:
      distribution: Kubernetes
      kubelet: 1.14.0
      controlPlane: 1.14.0

If we want to install IBM Cloud Private, it can be:

    versions:
      distribution: IBM Cloud Private
      kubelet: 3.2
      controlPlane: 3.2

Spec change would be as follows:

/// [MachineVersionInfo]
type MachineVersionInfo struct {
	// Distribution is the Kubernetes distribution to install,
	// e.g. "Kubernetes", "IBM Cloud Private", "OpenShift".
	Distribution string `json:"distribution"`

	// Kubelet is the semantic version of kubelet to run
	Kubelet string `json:"kubelet"`

	// ControlPlane is the semantic version of the Kubernetes control plane to
	// run. This should only be populated when the machine is a
	// control plane.
	// +optional
	ControlPlane string `json:"controlPlane,omitempty"`
}

@jichenjc
Contributor

actually, this is already included in the scope @gyliu513

https://github.com/vincepri/cluster-api/blob/ea65610df474df310d8f48bf32c41899cd750659/docs/scope-and-objectives.md

Control plane:

Self-provisioned: A Kubernetes control plane consisting of pods or machines wholly managed by a single Cluster API deployment.
External: A control plane offered and controlled by some system other than Cluster API (e.g., GKE, AKS, EKS, IKS).

@jichenjc
Contributor

The proposal looks good. I'm just curious whether the Kubelet version is still valid for an external vendor?
Vendors usually don't expose that info to the end user/operator.

@detiber
Member

detiber commented Apr 19, 2019

After more thinking, I think this task may belong to different cloud providers.

We can take the OpenStack cloud provider as an example: this provider uses user-data to do post-install work, and the user-data helps install the Kubernetes cluster. So we may need to enhance the user-data to support installing different Kubernetes distributions, like IBM Cloud Private, OpenShift, K3s, etc.

@gyliu513 for v1alpha1, yes that is the case. However for v1alpha2+ we are looking at making the "bootstrapping config" more common (though still pluggable) rather than requiring each provider to implement their own.

@gyliu513
Contributor Author

@detiber thanks for the info. As I did not attend the Cluster API meeting, can you please share some info here:

  1. It seems v1alpha1 (0.1.0) has now been released; can I make some code changes on the master branch to enable v1alpha1 to support this?
  2. What is the plan for v1alpha2? You mentioned in #853 (comment) that we would have some discussion about this on the sig-cluster-lifecycle Google group, but I did not find such a discussion there; am I missing anything? ;-)

@detiber
Member

detiber commented Apr 19, 2019

@gyliu513 you can find more information about the post-v1alpha1 workstreams here: https://discuss.kubernetes.io/t/workstreams/5879/4

@jichenjc
Contributor

Thanks for the info, this really helps to get an overall picture @detiber

@ncdc
Contributor

ncdc commented May 31, 2019

We are working on a proposal that we'll share soon to separate machine infrastructure provisioning from node bootstrapping. That will allow users to pick one provider for infrastructure (e.g. IBM Cloud) and separate providers for bootstrapping (Kubernetes via kubeadm, OpenShift, Rancher, etc).
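Under that split, a Machine would reference a bootstrap config and an infrastructure object independently. A rough sketch of what the separation could look like (the field names follow the shape later adopted in v1alpha2, but treat all names and kinds here as illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: my-machine
spec:
  bootstrap:
    # Bootstrap provider: how the node installs/joins Kubernetes. Kubeadm is
    # shown, but this is where an OpenShift or Rancher bootstrapper would plug in.
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: my-machine-bootstrap
  infrastructureRef:
    # Infrastructure provider: where the underlying machine comes from
    # (e.g. an OpenStack, AWS, or IBM Cloud machine object).
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: OpenStackMachine
    name: my-machine-infra
```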

@jichenjc
Contributor

Thanks, will that be posted on the forum or somewhere soon? Also, I assume that will require a set of provider-related changes as well (not only in cluster-api itself), right?

@ncdc
Contributor

ncdc commented May 31, 2019

We are working as quickly as we can to produce an initial draft of the proposal. At this point, I would expect it early next week. We will post to the discuss forum and Slack and anyplace else that makes sense.

Yes, this will require transforming "providers" as they exist in v1alpha1 from what they currently are (infrastructure & bootstrapping) into just infrastructure providers, and creating new bootstrap providers.

@timothysc timothysc modified the milestones: Next, v1alpha2 Jun 14, 2019
@timothysc timothysc added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jun 14, 2019
@ncdc
Contributor

ncdc commented Jun 14, 2019

The proposal is #997.

@ncdc
Contributor

ncdc commented Aug 19, 2019

The code in master now supports specifying a bootstrap provider separately from infrastructure provider.

/close

@k8s-ci-robot
Contributor

@ncdc: Closing this issue.

In response to this:

The code in master now supports specifying a bootstrap provider separately from infrastructure provider.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


7 participants