✨ Add install script #188

Merged · 3 commits · Feb 9, 2024 · Changes from all commits
README.md: 26 changes (16 additions, 10 deletions)
[![Go Report Card](https://goreportcard.com/badge/github.com/kubestellar/kubeflex)](https://goreportcard.com/report/github.com/kubestellar/kubeflex)
[![GitHub release](https://img.shields.io/github/release/kubestellar/kubeflex/all.svg?style=flat-square)](https://github.com/kubestellar/kubeflex/releases)
[![CI](https://github.com/kubestellar/kubeflex/actions/workflows/ci.yaml/badge.svg)](https://github.com/kubestellar/kubeflex/actions/workflows/ci.yaml)
[![Vulnerabilities](https://sonarcloud.io/api/project_badges/measure?project=kubestellar_kubeflex&metric=vulnerabilities)](https://sonarcloud.io/summary/new_code?id=kubestellar_kubeflex)
A flexible and scalable platform for running Kubernetes control plane APIs.

- dedicated DB for each API server,
- etcd DB or Kine + Postgres DB
- Flexibility in choice of API Server build:
- upstream Kube (e.g. `registry.k8s.io/kube-apiserver:v1.27.1`),
- trimmed down API Server builds (e.g. [multicluster control plane](https://github.com/open-cluster-management-io/multicluster-controlplane))
- Single binary CLI for improved user experience:
- initialize, install operator, manage lifecycle of control planes and contexts.

## Installation

[kind](https://kind.sigs.k8s.io) and [kubectl](https://kubernetes.io/docs/tasks/tools/) are
required. A kind hosting cluster is created automatically by the kubeflex CLI. You may
also install KubeFlex on other Kube distros, as long as they support an nginx ingress
with SSL passthru, or on OpenShift. See the [User's Guide](docs/users.md) for more details.

Download the latest kubeflex CLI binary release for your OS/Architecture from the
[release page](https://github.com/kubestellar/kubeflex/releases) and copy it
to `/usr/local/bin` using the following command:

```shell
sudo su <<EOF
bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubeflex/main/scripts/install-kubeflex.sh) --ensure-folder /usr/local/bin --strip-bin
EOF
```

> **Review comment (collaborator):** it may be best to add `-L` as well to curl to follow redirects.
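
A variant incorporating that suggestion, so that curl follows any redirects (otherwise identical to the command above):

```shell
sudo su <<EOF
bash <(curl -sL https://raw.githubusercontent.com/kubestellar/kubeflex/main/scripts/install-kubeflex.sh) --ensure-folder /usr/local/bin --strip-bin
EOF
```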

If you have [Homebrew](https://brew.sh), use the following commands to install kubeflex:
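
A typical sequence, assuming the `kubestellar/kubeflex` tap and `kubeflex` formula names:

```shell
brew tap kubestellar/kubeflex   # tap name assumed
brew install kubeflex
```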

To upgrade an existing brew install:

```shell
brew upgrade kubeflex
```

## Quickstart

Create the hosting kind cluster with ingress controller and install
the kubeflex operator:

```shell
kflex init --create-kind
```

Create a control plane:

```shell
kflex create cp1
```

To go back to the hosting cluster context, use the `ctx` command:
kflex ctx
```

To switch back to a control plane context, use the
`ctx <control plane name>` command, e.g.:

```shell
kflex ctx cp1
```

Delete the control plane:

```shell
kflex delete cp1
```
## Next Steps

Read the [User's Guide](docs/users.md) to learn more about using KubeFlex for your project
and how to create and interact with different types of control planes, such as
[vcluster](https://www.vcluster.com) and [Open Cluster Management](https://github.com/open-cluster-management-io/multicluster-controlplane).

## Architecture

![image info](./docs/images/kubeflex-high-level-arch.png)
docs/users.md: 98 changes (53 additions, 45 deletions)

## Installation

[kind](https://kind.sigs.k8s.io) and [kubectl](https://kubernetes.io/docs/tasks/tools/) are
required. Note that we plan to add support for other Kube distros. A hosting kind cluster
is created automatically by the kubeflex CLI.

Download the latest kubeflex CLI binary release for your OS/Architecture from the
[release page](https://github.com/kubestellar/kubeflex/releases) and copy it
to `/usr/local/bin` or another location in your `$PATH`. For example, on linux amd64:

```shell
# assumed: look up and download the latest linux_amd64 release archive
LATEST_RELEASE_URL=$(curl -s https://api.github.com/repos/kubestellar/kubeflex/releases/latest | grep '"browser_download_url".*linux_amd64' | cut -d '"' -f 4)
curl -sLO "$LATEST_RELEASE_URL"
tar xzvf $(basename $LATEST_RELEASE_URL)
sudo install -o root -g root -m 0755 bin/kflex /usr/local/bin/kflex
```

Alternatively, use the single command below, which automatically detects the host OS type and architecture:

```shell
sudo su <<EOF
bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubeflex/main/scripts/install-kubeflex.sh) --ensure-folder /usr/local/bin --strip-bin
EOF
```

If you have [Homebrew](https://brew.sh), use the following commands to install kubeflex:

```shell
brew tap kubestellar/kubeflex   # tap and formula names assumed
brew install kubeflex
```
## Install KubeFlex on an existing cluster

You can install KubeFlex on an existing cluster with nginx ingress configured for SSL passthru,
or on an OpenShift cluster. At this time, we have only tested this option with Kind and OpenShift.

### Installing on kind

To create a kind cluster with nginx ingress, follow the instructions [here](https://kind.sigs.k8s.io/docs/user/ingress/).
Once you have your ingress running, you will need to configure nginx ingress for SSL passthru. Run the command:

```shell
# assumed: enable SSL passthrough by adding the flag to the ingress-nginx controller args
kubectl patch deployment ingress-nginx-controller -n ingress-nginx --type=json \
  -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'
```
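
Then initialize KubeFlex on the cluster:

```shell
kflex init
```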

## Installing KubeFlex with helm

To install KubeFlex on a cluster that already has nginx ingress with SSL passthru enabled,
you can use helm instead of the KubeFlex CLI. First, create the `kubeflex-system` namespace
and install the shared database with the following commands:

```shell
kubectl create namespace kubeflex-system   # assumed: the namespace-creation step referenced above
helm upgrade --install postgres oci://registry-1.docker.io/bitnamicharts/postgresql \
--namespace kubeflex-system \
--version 13.1.5 \
--set primary.extendedConfiguration=max_connections=1000 \
--set primary.priorityClassName=system-node-critical
```

Note that at this time we have tested only with version 13.1.5 of the chart.
Then, check the [latest release version tag](https://github.com/kubestellar/kubeflex/releases)
and install the KubeFlex operator with the command:

```shell
# chart path, namespace, and version flags are assumed; use the release tag from the step above
helm upgrade --install kubeflex-operator oci://ghcr.io/kubestellar/kubeflex/chart/kubeflex-operator \
--namespace kubeflex-system \
--version <latest-release-version-tag> \
--set externalPort=9443
```

The `kubeflex-system` namespace is required for installing and running KubeFlex. Do not use
any other namespace for this purpose.

### Installing KubeFlex with helm on OpenShift
```shell
# chart path, namespace, and version flags are assumed, as in the previous helm command
helm upgrade --install kubeflex-operator oci://ghcr.io/kubestellar/kubeflex/chart/kubeflex-operator \
--namespace kubeflex-system \
--version <latest-release-version-tag> \
--set isOpenShift=true
```

Finally, add the OpenShift anyuid SCC to the KubeFlex service account (note that this is done
automatically by `kflex init` if you are using the kflex CLI installer):

```shell
Expand All @@ -137,22 +145,22 @@ oc adm policy add-scc-to-user anyuid -z kubeflex-controller-manager -n kubeflex-
## Upgrading Kubeflex

The KubeFlex CLI can be upgraded with `brew upgrade kubeflex` (for brew installs). For linux
systems, manually download and update the binary. To upgrade the KubeFlex controller, just
upgrade the helm chart according to the instructions for [kubernetes](#installing-kubeflex-with-helm)
or for [OpenShift](#installing-kubeflex-with-helm-on-openshift).

Note that for a kind test/dev installation, the simplest approach to get a fresh install
after updating the `kflex` binary is to run `kind delete cluster --name kubeflex` and then re-run
`kflex init --create-kind`.

## Use a different DNS service

To use a different domain for DNS resolution, you can specify the `--domain` option when
you run `kflex init`. This domain should point to the IP address of your ingress controller,
which handles the routing of requests to different control plane instances based on the hostname.
A wildcard DNS service is recommended, so that any subdomain of your domain (such as `*.<domain>`)
will resolve to the same IP address. The default domain in KubeFlex is localtest.me, which is a
wildcard DNS service that always resolves to 127.0.0.1.
For example, `cp1.localtest.me` and `cp2.localtest.me` will both resolve to your local machine.
Note that this option is ignored if you are installing on OpenShift.
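
For example, assuming you control a wildcard DNS record for `example.com` that resolves to your ingress controller:

```shell
kflex init --domain example.com
```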

You can create a new control plane using the KubeFlex CLI or using any Kubernetes client.

To create a new control plane with name `cp1` using the KubeFlex CLI:

```shell
kflex create cp1
```

Expand All @@ -182,7 +190,7 @@ to switch the context back to the hosting cluster context, you may use the `ctx`
```shell
kflex ctx
```

To switch back to a control plane context, use the
`ctx <control plane name>` command, e.g.:

```shell
kflex ctx cp1
```

The admin kubeconfig for a control plane can be read from the `admin-kubeconfig` secret in its hosting namespace (base64 decoding assumed):

```shell
kubectl get secrets -n ${NAMESPACE} admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d
```

### Accessing the control plane from within a kind cluster

For control plane of type k8s, the Kube API client can only use the 127.0.0.1 address. The DNS name
`<control-plane-name>.localtest.me` is convenient for local test and dev but always resolves to 127.0.0.1, which does not work in a container. For accessing the control plane from within the KubeFlex hosting
cluster, you may use the controller manager Kubeconfig, which is maintained in the secret with name
`cm-kubeconfig` in the namespace hosting the control plane, or you may use the Kubeconfig in the
`admin-kubeconfig` secret with the address for the server `https://<control-plane-name>.<control-plane-namespace>:9443`.

To access the control plane API server from another kind cluster on the same docker network, you
may use the nodeport exposed on the hosting cluster node, setting
the URL for the server as `https://kubeflex-control-plane:<nodeport>`.
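
A sketch of pulling the controller-manager kubeconfig for a control plane named `cp1` out of that secret (the `kubeconfig` data key is an assumption):

```shell
kubectl get secret cm-kubeconfig -n cp1-system -o jsonpath='{.data.kubeconfig}' | base64 -d > cp1-cm.kubeconfig
kubectl --kubeconfig cp1-cm.kubeconfig get namespaces   # run from a pod inside the hosting cluster
```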
At this time KubeFlex supports the following control plane types:

- k8s: this is the stock Kube API server with a subset of controllers running in the controller manager.
- ocm: this is the [Open Cluster Management Multicluster Control Plane](https://github.com/open-cluster-management-io/multicluster-controlplane), which provides a basic set of capabilities such as
cluster registration and support for the [`ManifestWork` API](https://open-cluster-management.io/concepts/manifestwork/).
- vcluster: this is based on the [vcluster project](https://www.vcluster.com) and provides the ability to create pods in the hosting namespace of the hosting cluster.
- host: this control plane type exposes the underlying hosting cluster with the same control plane abstraction
used by the other control plane types.
## Control Plane Backends

The KubeFlex roadmap aims to provide different types of backends: shared and dedicated, and for
each type the ability to choose either etcd or sql. At this time only the following
combinations are supported based on control plane type:

- k8s: shared postgresql
For a control plane of type `ocm`, the following command returns the token and the join command used to register managed clusters:

```shell
$ clusteradm get token --use-bootstrap-token
clusteradm join --hub-token <some value> --hub-apiserver https://cp3.localtest.me:9443/ --cluster-name <cluster_name>
```

The command returns the command to run on the managed cluster (actual token value not shown in example).
```shell
kflex ctx kind-cluster1
```

```shell
$ kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 20s
```
```shell
nginx   1/1     Running   0          24s
```
Access the pod logs:

```shell
$ kubectl logs nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
...
```

Switch back to the hosting cluster context:

```shell
kflex ctx
```

```shell
$ kubectl get pods -n cp2-system
NAME READY STATUS RESTARTS AGE
coredns-64c4b4d78f-2w9bx-x-kube-system-x-vcluster 1/1 Running 0 6m58s
nginx-x-default-x-vcluster 1/1 Running 0 4m26s
```

The nginx pod is the one with the name `nginx-x-default-x-vcluster`.

## Post-create hooks

With post-create hooks you can automate applying kubernetes templates on the hosting cluster or on
a hosted control plane right after the creation of a control plane. Some relevant use cases are:

- Applying OpenShift CRDs on a control plane to be used as a Workload Description Space (WDS) for deploying OpenShift workloads.
```yaml
spec:
  backoffLimit: 1
```
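
A fuller sketch of such a `hello` hook, assuming the kubeflex `PostCreateHook` API group and `spec.templates` layout (both assumptions, not confirmed by this PR):

```yaml
apiVersion: tenancy.kflex.kubestellar.org/v1alpha1   # assumed API group/version
kind: PostCreateHook
metadata:
  name: hello
spec:
  templates:
  - apiVersion: batch/v1
    kind: Job
    metadata:
      name: hello
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "Hello World"]
          restartPolicy: Never
      backoffLimit: 1
```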

This hook will launch a job in the same namespace of the control plane that will print
"Hello World" to the standard output. Typically, a hook runs a job that by default
interacts with the hosting cluster API server. To make the job interact with the hosted
control plane API server you can mount the secret with the in-cluster kubeconfig
for that API server. For example, for a control plane of type `k8s` you can define
a volume for a secret as follows:
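
A sketch of that volume, using the `admin-kubeconfig` secret described earlier (the item key is assumed to match the mount path below):

```yaml
volumes:
- name: kubeconfig
  secret:
    secretName: admin-kubeconfig
    items:
    - key: kubeconfig-incluster   # assumed data key
      path: kubeconfig-incluster
```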
Then, you can mount the volume and define the `KUBECONFIG` env variable as follows:
```yaml
env:
- name: KUBECONFIG
  value: "/etc/kube/kubeconfig-incluster"
volumeMounts:
- name: kubeconfig
  mountPath: "/etc/kube"
  readOnly: true
```

A complete example for installing OpenShift CRDs on a control plane is available
in the project repository.

Currently available built-in objects are:

- "{{.Namespace}}" - the namespace hosting the control plane
- "{{.ControlPlaneName}}" - the name of the control plane
- "{{.HookName}}" - the name of the hook.
- "{{.HookName}}" - the name of the hook.

### Labels propagation

There are scenarios where you may need to set up labels on control planes based on the
features that the control plane acquires after the hook runs. For example, you may want
to label a control plane where the OpenShift CRDs have been applied as a control plane
with OpenShift flavor.

To propagate labels, simply set the labels on the PostCreateHook as shown in the example.

To apply a hook manifest, switch to the hosting cluster context and apply it:

```shell
kflex ctx
kubectl apply -f <hook-file.yaml> # e.g. kubectl apply -f hello.yaml
```

You can then reference the hook by name when you create a new control plane.

With kflex CLI (you can use --postcreate-hook or -p):
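
```shell
kflex create cp1 --postcreate-hook hello   # hook name assumed from the example above
```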


## Initial Context

The KubeFlex CLI (kflex) relies on the extensions field in the kubeconfig
file to store the initial context of the hosting cluster. This context is
needed for kflex to switch back to the hosting cluster when performing
lifecycle operations.

If the extensions field is deleted or overwritten by other apps, you
need to restore it manually in the kubeconfig file. Otherwise, kflex
context switching may not work properly. Here is an example of an
extension for a hosting cluster with the default context name `kind-kubeflex`:

```yaml
preferences:
  extensions:
  - name: kflex-config-extension-name   # assumed extension name
    extension:
      data:
        kflex-initial-ctx-name: kind-kubeflex   # assumed data key
```