Enable cluster-api to manage clusters for different k8s distributions #853
Looks like OpenShift already has something; I don't know the details though...
Thanks @jichenjc. It seems https://github.com/openshift/cluster-operator is trying to install OpenShift on AWS, but here I'm wondering whether cluster-api can provide a generic way to provision any k8s distribution on one cloud provider. Am I missing anything?
@jichenjc https://github.com/openshift/cluster-operator is deprecated.
@gyliu513 this is one of the things that we are looking to address post-v1alpha1.
/milestone Next
@detiber are there any discussions or documents that I can refer to? I also want to check how I can contribute to this.
Since there have been many varied discussions and proposals around this (and other design topics), we wanted to start by gaining consensus around what Cluster API is and should be prior to diving too deep into particular proposals (start with high-level alignment prior to trying to get low-level alignment around design implementations). This is something that we'll be discussing at the Cluster API meeting this week. Since the meeting time is not convenient for all contributors, we plan on having a broader discussion on the sig-cluster-lifecycle mailing list as well.
That makes sense, thanks @detiber, looking forward to the mailing list discussion ;-)
@enxebre this is very helpful to me, and actually I am trying to contribute OpenShift on OpenStack as well :)
https://github.com/openshift/machine-api-operator/ is the up-to-date one per issue kubernetes-sigs#853
After more thinking, I think this task may belong to the different cloud providers. We can take the OpenStack cloud provider as an example; this provider is using a machine spec like the one below.
FYI @xunpan
Seems the machine spec can be something like:

```yaml
items:
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  metadata:
    generateName: liugya-master-
    labels:
      set: master
  spec:
    providerSpec:
      value:
        apiVersion: "openstackproviderconfig/v1alpha1"
        kind: "OpenstackProviderSpec"
        flavor: m1.xlarge
        image: KVM-Ubt18.04-Srv-x64
        sshUserName: cloudusr
        keyName: cluster-api-provider-openstack
        availabilityZone: nova
        networks:
        - uuid: e2d9ead6-759b-4592-873d-981d3db07c86
        floatingIP: 9.20.206.22
        securityGroups:
        - uuid: 97acf9d4-e5bf-4fff-a2c0-be0b04fbc44b
        userDataSecret:
          name: master-user-data
          namespace: openstack-provider-system
        trunk: false
    versions:
      distribution: Kubernetes
      kubelet: 1.14.0
      controlPlane: 1.14.0
```

If we want to install IBM Cloud Private, it can be:

```yaml
versions:
  distribution: IBM Cloud Private
  kubelet: 3.2
  controlPlane: 3.2
```

Spec change would be as follows:

```go
/// [MachineVersionInfo]
type MachineVersionInfo struct {
	// Kubernetes Distribution
	Distribution string `json:"distribution"`

	// Kubelet is the semantic version of kubelet to run
	Kubelet string `json:"kubelet"`

	// ControlPlane is the semantic version of the Kubernetes control plane to
	// run. This should only be populated when the machine is a
	// control plane.
	// +optional
	ControlPlane string `json:"controlPlane,omitempty"`
}
```
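Purely as an illustration of how an actuator might consume the proposed field, here is a minimal, hypothetical Go sketch; the dispatch function, the distribution names, and the install commands are all assumptions for this example, not actual cluster-api or provider code.

```go
package main

import "fmt"

// MachineVersionInfo mirrors the proposed spec change above, including the
// new Distribution field.
type MachineVersionInfo struct {
	Distribution string `json:"distribution"`
	Kubelet      string `json:"kubelet"`
	ControlPlane string `json:"controlPlane,omitempty"`
}

// bootstrapScript renders the user-data a provider actuator could feed to a
// new machine, dispatching on the requested distribution. The commands are
// placeholders for illustration only.
func bootstrapScript(v MachineVersionInfo) (string, error) {
	switch v.Distribution {
	case "", "Kubernetes":
		// Default to plain Kubernetes via kubeadm.
		return fmt.Sprintf("kubeadm init --kubernetes-version v%s", v.ControlPlane), nil
	case "IBM Cloud Private":
		// Hypothetical installer invocation for a vendor distribution.
		return fmt.Sprintf("install-icp --version %s", v.ControlPlane), nil
	default:
		return "", fmt.Errorf("unsupported distribution %q", v.Distribution)
	}
}

func main() {
	script, err := bootstrapScript(MachineVersionInfo{
		Distribution: "IBM Cloud Private",
		Kubelet:      "3.2",
		ControlPlane: "3.2",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(script)
}
```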
Actually, this is already included in the scope @gyliu513, under "Control plane".
The proposed change looks good; just curious whether the kubelet version is still valid for an external vendor?
@gyliu513 for v1alpha1, yes that is the case. However, for v1alpha2+ we are looking at making the "bootstrapping config" more common (though still pluggable) rather than requiring each provider to implement their own.
@detiber thanks for the info. As I did not attend the Cluster API meeting, can you please share some info here:
@gyliu513 you can find more information about the post-v1alpha1 workstreams here: https://discuss.kubernetes.io/t/workstreams/5879/4
Thanks for the info, this really helps to get an overall picture @detiber
We are working on a proposal that we'll share soon to separate machine infrastructure provisioning from node bootstrapping. That will allow users to pick one provider for infrastructure (e.g. IBM Cloud) and separate providers for bootstrapping (Kubernetes via kubeadm, OpenShift, Rancher, etc).
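To make that split concrete, here is a speculative sketch of how an infrastructure provider and a bootstrap provider could compose; every name in it is an illustrative assumption, not the actual API from the proposal.

```go
package main

import "fmt"

// InfrastructureProvider provisions raw machines on some cloud
// (e.g. IBM Cloud, OpenStack, AWS).
type InfrastructureProvider interface {
	// CreateMachine brings up a machine and returns its address.
	CreateMachine(name string) (addr string, err error)
}

// BootstrapProvider renders the data that turns a raw machine into a
// node of a given distribution (kubeadm, OpenShift, Rancher, ...).
type BootstrapProvider interface {
	// BootstrapData renders user-data for the named machine.
	BootstrapData(machineName string) (string, error)
}

// fakeCloud and kubeadmBootstrap are toy implementations so the sketch runs.
type fakeCloud struct{}

func (fakeCloud) CreateMachine(name string) (string, error) {
	return "10.0.0.1", nil // pretend we created a VM
}

type kubeadmBootstrap struct{ version string }

func (b kubeadmBootstrap) BootstrapData(machineName string) (string, error) {
	return fmt.Sprintf("#!/bin/sh\nkubeadm init --kubernetes-version %s", b.version), nil
}

// provision composes the two: any infrastructure provider can be paired
// with any bootstrap provider.
func provision(infra InfrastructureProvider, boot BootstrapProvider, name string) error {
	data, err := boot.BootstrapData(name)
	if err != nil {
		return err
	}
	addr, err := infra.CreateMachine(name)
	if err != nil {
		return err
	}
	fmt.Printf("machine %q at %s gets user-data:\n%s\n", name, addr, data)
	return nil
}

func main() {
	if err := provision(fakeCloud{}, kubeadmBootstrap{version: "v1.14.0"}, "master-0"); err != nil {
		panic(err)
	}
}
```

The value of the split is that the two sides vary independently: the same infrastructure implementation could be paired with a kubeadm, OpenShift, or Rancher bootstrap implementation without any cloud-specific changes.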
Thanks, will that be posted on the forum or somewhere soon? Also, I assume that will involve a set of provider-related changes as well (not only in cluster-api itself), right?
We are working as quickly as we can to produce an initial draft of the proposal. At this point, I would expect it early next week. We will post to the discuss forum and Slack and anyplace else that makes sense. Yes, this will require transforming "providers" as they exist in v1alpha1 from what they currently are (infrastructure & bootstrapping) into just infrastructure providers, and creating new bootstrap providers.
The proposal is #997.
The code is in.
/close
@ncdc: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind feature
Describe the solution you'd like
Currently, cluster-api can only create native k8s clusters and does not support other k8s distributions, like Red Hat OpenShift, IBM Cloud Private, etc. Does cluster-api have any plan to let customers define what kind of k8s distribution cluster they want to create?
Anything else you would like to add: