From 5907dcb7f7f9b9c3a23a27c853ae8f56a48438cd Mon Sep 17 00:00:00 2001
From: Nahshon Unna-Tsameret
Date: Sun, 23 Oct 2022 18:26:26 +0300
Subject: [PATCH] Add the quickstart details for KubeVirt

Signed-off-by: Nahshon Unna-Tsameret
---
 docs/book/src/user/quick-start.md | 267 ++++++++++++++++++++++++++++--
 1 file changed, 253 insertions(+), 14 deletions(-)

diff --git a/docs/book/src/user/quick-start.md b/docs/book/src/user/quick-start.md
index 603ec4e31402..be43d80b7a51 100644
--- a/docs/book/src/user/quick-start.md
+++ b/docs/book/src/user/quick-start.md
@@ -61,7 +61,7 @@ a target [management cluster] on the selected [infrastructure provider].
    The installation procedure depends on the version of kind; if you are planning to use the Docker infrastructure
    provider, please follow the additional instructions in the dedicated tab:
 
-   {{#tabs name:"install-kind" tabs:"Default,Docker"}}
+   {{#tabs name:"install-kind" tabs:"Default,Docker,KubeVirt"}}
    {{#tab Default}}
 
    Create the kind cluster:
@@ -93,6 +93,57 @@ a target [management cluster] on the selected [infrastructure provider].
    Then follow the instruction for your kind version using
    `kind create cluster --config kind-cluster-with-extramounts.yaml` to create the management cluster using the
    above file.
 
+   {{#/tab }}
+   {{#tab KubeVirt}}
+
+   #### Create the Kind Cluster
+   [KubeVirt][KubeVirt] is a cloud-native virtualization solution. The virtual machines we are going to create and
+   use as the workload cluster's nodes actually run inside pods in the management cluster. To communicate with the
+   workload cluster's API server, we need to expose it. Since kind is a limited environment, the easiest way to
+   expose the workload cluster's API server (a pod within a node running in a VM, which itself runs within a pod in
+   the management cluster, which in turn runs inside a Docker container) is to use a LoadBalancer service.
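+
+   Later in this guide, once the workload cluster exists, the exposure described above can be sanity-checked by
+   listing LoadBalancer services. This is only an illustrative check; the actual service is created for you by the
+   KubeVirt provider:
+   ```bash
+   # The workload cluster's API server should eventually show up as a
+   # LoadBalancer service with an external IP.
+   kubectl get services -A | grep LoadBalancer
+   ```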
+
+   To use a LoadBalancer service, we cannot use kind's default CNI (kindnet); instead, we need to install
+   another CNI, such as Calico. To do that, we first create the kind cluster with two modifications:
+   1. Disable the default CNI.
+   2. Add the Docker credentials to the cluster, to avoid Docker Hub's pull rate limit when pulling the Calico
+      images; read more about it in the [Docker documentation](https://docs.docker.com/docker-hub/download-rate-limit/)
+      and in the [kind documentation](https://kind.sigs.k8s.io/docs/user/private-registries/#mount-a-config-file-to-each-node).
+
+   Create a configuration file for kind. Note the Docker config file path, and adjust it to your local setting:
+   ```bash
+   cat <<EOF > kind-config.yaml
+   kind: Cluster
+   apiVersion: kind.x-k8s.io/v1alpha4
+   networking:
+     # the default CNI will not be installed
+     disableDefaultCNI: true
+   nodes:
+   - role: control-plane
+     extraMounts:
+     - containerPath: /var/lib/kubelet/config.json
+       hostPath: <your docker config file path>
+   EOF
+   ```
+   Now, create the kind cluster with the configuration file:
+   ```bash
+   kind create cluster --config=kind-config.yaml
+   ```
+   Test to ensure the local kind cluster is ready:
+   ```bash
+   kubectl cluster-info
+   ```
+
+   #### Install the Calico CNI
+   Now we need to install a CNI. In this example, we use Calico, but other CNIs should work as well. Please see the
+   [Calico installation guide](https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico)
+   for more details (use the "Manifest" tab). Below is an example of how to install Calico version v3.24.4.
+
+   Use the Calico manifest to create the required resources; e.g.:
+   ```bash
+   kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml
+   ```
+
    {{#/tab }}
    {{#/tabs }}
@@ -202,7 +253,7 @@ Additional documentation about experimental features can be found in [Experimental Features].
 Depending on the infrastructure provider you are planning to use, some additional
 prerequisites should be satisfied before getting started with Cluster API. See below for the expected settings for common providers.
 
-{{#tabs name:"tab-installation-infrastructure" tabs:"AWS,Azure,CloudStack,DigitalOcean,Docker,Equinix Metal,GCP,Hetzner,IBM Cloud,KubeKey,Kubevirt,Metal3,Nutanix,OCI,OpenStack,Outscale,VCD,vcluster,Virtink,vSphere"}}
+{{#tabs name:"tab-installation-infrastructure" tabs:"AWS,Azure,CloudStack,DigitalOcean,Docker,Equinix Metal,GCP,Hetzner,IBM Cloud,KubeKey,KubeVirt,Metal3,Nutanix,OCI,OpenStack,Outscale,VCD,vcluster,Virtink,vSphere"}}
 {{#tab AWS}}
 
 Download the latest binary of `clusterawsadm` from the [AWS provider releases].
@@ -442,9 +493,61 @@ clusterctl init --infrastructure kubekey
 ```
 
 {{#/tab }}
-{{#tab Kubevirt}}
+{{#tab KubeVirt}}
+
+Please visit the [KubeVirt project][KubeVirt provider] for more information.
 
-Please visit the [Kubevirt project][Kubevirt provider].
+As described above, we want to use a LoadBalancer service in order to expose the workload cluster's API server. In the
+example below, we will use [MetalLB](https://metallb.universe.tf/) to implement load balancing for our kind
+cluster. Other solutions should work as well.
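+
+Once MetalLB is installed and configured (as in the steps below), it can be sanity-checked with a throwaway
+LoadBalancer service before creating the workload cluster. This check is optional and illustrative, and the names
+used here are arbitrary:
+```bash
+# Expose a trivial deployment through a LoadBalancer service; MetalLB
+# should assign it an external IP from the configured address pool.
+kubectl create deployment lb-check --image=nginx
+kubectl expose deployment lb-check --port=80 --type=LoadBalancer
+kubectl get service lb-check
+# Clean up the check resources afterwards:
+kubectl delete service,deployment lb-check
+```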
+
+#### Install MetalLB for load balancing
+Install MetalLB, as described [here](https://metallb.universe.tf/installation/#installation-by-manifest); for example:
+```bash
+METALLB_VER=$(curl "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r ".tag_name")
+kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VER}/config/manifests/metallb-native.yaml"
+kubectl wait pods -n metallb-system -l app=metallb,component=controller --for=condition=Ready --timeout=10m
+kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for=condition=Ready --timeout=2m
+```
+
+Now, we'll create the `IPAddressPool` and the `L2Advertisement` custom resources. The script below creates the CRs
+with addresses that match the kind cluster's address range:
+```bash
+GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
+NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
+cat <<EOF | kubectl apply -f -
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: capi-ip-pool
+  namespace: metallb-system
+spec:
+  addresses:
+  - ${NET_IP}.255.200-${NET_IP}.255.250
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: empty
+  namespace: metallb-system
+EOF
+```
@@ ... @@
 Please visit the [KubeKey provider] for more information.
 
 {{#/tab }}
-{{#tab Kubevirt}}
-
-A ClusterAPI compatible image must be available in your Kubevirt image library. For instructions on how to build a compatible image
-see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
+{{#tab KubeVirt}}
 
-To see all required Kubevirt environment variables execute:
 ```bash
-clusterctl generate cluster --infrastructure kubevirt --list-variables capi-quickstart
+export CAPK_GUEST_K8S_VERSION="v1.23.10"
+export CRI_PATH="/var/run/containerd/containerd.sock"
+export NODE_VM_IMAGE_TEMPLATE="quay.io/capk/ubuntu-2004-container-disk:${CAPK_GUEST_K8S_VERSION}"
 ```
+Please visit the [KubeVirt project][KubeVirt provider] for more information.
 
 {{#/tab }}
 {{#tab Metal3}}
@@ -1007,7 +1109,7 @@ For more information about prerequisites, credentials management, or permissions
 
 For the purpose of this tutorial, we'll name our cluster capi-quickstart.
-{{#tabs name:"tab-clusterctl-config-cluster" tabs:"Docker, vcluster, others..."}}
+{{#tabs name:"tab-clusterctl-config-cluster" tabs:"Docker, vcluster, KubeVirt, others..."}}
 {{#tab Docker}}