Bump kubevirtci
- 8198e9c sync provider.sh between kind and kind-sriov (kubevirt/kubevirtci#587)
- 8b1d599 Restore kind-1.19-sriov provider files (kubevirt/kubevirtci#695)
- bf9b729 Upgrade SR-IOV provider nodes image to k8s-1.22 (kubevirt/kubevirtci#694)
- 5a10f48 Add check-cluster-up script for KinD providers (kubevirt/kubevirtci#645)

Signed-off-by: kubevirt-bot <[email protected]>
kubevirt-bot committed Oct 26, 2021
1 parent dd4ad88 commit c6629e7
Showing 35 changed files with 2,403 additions and 65 deletions.
2 changes: 1 addition & 1 deletion cluster-up-sha.txt
@@ -1 +1 @@
-f7906a7e1dfeeb25fe3bac94f1852671aa2026b1
+9853ae783af481217142f0330a494a48c07820df
107 changes: 67 additions & 40 deletions cluster-up/cluster/kind-1.19-sriov/README.md
@@ -1,74 +1,101 @@
-# K8S 1.17.0 with sriov in a Kind cluster
+# K8S 1.19.11 with SR-IOV in a Kind cluster

-Provides a pre-deployed k8s cluster with version 1.17.0 that runs using [kind](https://github.com/kubernetes-sigs/kind) The cluster is completely ephemeral and is recreated on every cluster restart.
-The KubeVirt containers are built on the local machine and are then pushed to a registry which is exposed at
+Provides a pre-deployed containerized k8s cluster with version 1.19.11 that runs
+using [KinD](https://github.com/kubernetes-sigs/kind).
+The cluster is completely ephemeral and is recreated on every cluster restart. The KubeVirt containers are built on the
+local machine and are then pushed to a registry which is exposed at
`localhost:5000`.

-This version also expects to have sriov-enabled nics on the current host, and will move physical interfaces into the `kind`'s cluster worker node(s) so that they can be used through multus.
+This version also expects SR-IOV-enabled NICs (SR-IOV Physical Functions) on the current host, and will move
+physical interfaces into the KinD cluster's worker node(s) so that they can be used through multus and the SR-IOV
+components.
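
Before bringing the cluster up, it can help to verify that the host actually has PFs to hand out. This is a minimal sketch using the standard Linux sysfs interface; interface names will differ per host:

```bash
# A NIC is an SR-IOV Physical Function candidate if its PCI device
# directory exposes an sriov_totalvfs entry.
for dev in /sys/class/net/*; do
  if [ -e "$dev/device/sriov_totalvfs" ]; then
    echo "$(basename "$dev"): up to $(cat "$dev/device/sriov_totalvfs") VFs"
  fi
done
```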

+This provider also deploys [multus](https://github.com/k8snetworkplumbingwg/multus-cni),
+[sriov-cni](https://github.com/k8snetworkplumbingwg/sriov-cni) and
+[sriov-device-plugin](https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin).
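
For illustration, once the cluster is up a workload typically consumes an SR-IOV VF through a NetworkAttachmentDefinition handled by multus and sriov-cni. The sketch below is hypothetical: the `resourceName` must match whatever your sriov-device-plugin configuration registers, and the name and subnet are placeholders:

```bash
# Hypothetical SR-IOV network attachment; adjust resourceName and IPAM.
cat <<'EOF' | cluster-up/kubectl.sh apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: vendor.example/sriov_net
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "ipam": {"type": "host-local", "subnet": "10.56.217.0/24"}
    }'
EOF
```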

## Bringing the cluster up

```bash
-export KUBEVIRT_PROVIDER=kind-k8s-sriov-1.17.0
+export KUBEVIRT_PROVIDER=kind-1.19-sriov
export KUBEVIRT_NUM_NODES=3
make cluster-up
```

The cluster can be accessed as usual:

```bash
$ cluster-up/kubectl.sh get nodes
-NAME                  STATUS   ROLES    AGE     VERSION
-sriov-control-plane   Ready    master   6m14s   v1.17.0
-sriov-worker          Ready    worker   5m36s   v1.17.0
+NAME                  STATUS   ROLES                  AGE   VERSION
+sriov-control-plane   Ready    control-plane,master   20h   v1.19.11
+sriov-worker          Ready    worker                 20h   v1.19.11
+sriov-worker2         Ready    worker                 20h   v1.19.11
+
+$ cluster-up/kubectl.sh get pods -n kube-system -l app=multus
+NAME                         READY   STATUS    RESTARTS   AGE
+kube-multus-ds-amd64-d45n4   1/1     Running   0          20h
+kube-multus-ds-amd64-g26xh   1/1     Running   0          20h
+kube-multus-ds-amd64-mfh7c   1/1     Running   0          20h
+
+$ cluster-up/kubectl.sh get pods -n sriov -l app=sriov-cni
+NAME                            READY   STATUS    RESTARTS   AGE
+kube-sriov-cni-ds-amd64-fv5cr   1/1     Running   0          20h
+kube-sriov-cni-ds-amd64-q95q9   1/1     Running   0          20h
+
+$ cluster-up/kubectl.sh get pods -n sriov -l app=sriovdp
+NAME                                   READY   STATUS    RESTARTS   AGE
+kube-sriov-device-plugin-amd64-h7h84   1/1     Running   0          20h
+kube-sriov-device-plugin-amd64-xrr5z   1/1     Running   0          20h
```
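
To confirm that the SR-IOV device plugin actually advertised devices to the scheduler, you can also inspect each node's allocatable resources. This is a hedged check: the extended-resource name under which VFs appear depends on the device-plugin configuration, so expect something like the placeholder `vendor.example/sriov_net` rather than that exact string.

```bash
# Print each node's allocatable resources; SR-IOV VFs show up as
# extended resources alongside cpu/memory once the plugin registers them.
cluster-up/kubectl.sh get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.allocatable}{"\n"}{end}'
```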

## Bringing the cluster down

```bash
-export KUBEVIRT_PROVIDER=kind-k8s-sriov-1.17.0
+export KUBEVIRT_PROVIDER=kind-1.19-sriov
make cluster-down
```

-This destroys the whole cluster.
+This destroys the whole cluster and moves the SR-IOV NICs back to the root network namespace.

## Setting a custom kind version

-In order to use a custom kind image / kind version,
-export KIND_NODE_IMAGE, KIND_VERSION, KUBECTL_PATH before running cluster-up.
-For example in order to use kind 0.9.0 (which is based on k8s-1.19.1) use:
+In order to use a custom kind image / kind version, export `KIND_NODE_IMAGE`, `KIND_VERSION` and `KUBECTL_PATH` before
+running cluster-up. For example, to use kind 0.9.0 (which is based on k8s-1.19.1):

```bash
export KIND_NODE_IMAGE="kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600"
export KIND_VERSION="0.9.0"
export KUBECTL_PATH="/usr/bin/kubectl"
```

This allows users to test or use custom images / different kind versions before making them official.
See https://github.com/kubernetes-sigs/kind/releases for details about the node images matching each kind version.

-## Running multi sriov clusters locally
-Kubevirtci sriov provider supports running two clusters side by side with few known limitations.
+## Running multiple SR-IOV clusters locally
+
+The kubevirtci SR-IOV provider supports running two clusters side by side, with a few known limitations.

General considerations (a combined sketch follows this list):

-- A sriov PF must be available for each cluster.
-  In order to achieve that, there are two options:
-  1. Assign just one PF for each worker node of each cluster by using `export PF_COUNT_PER_NODE=1` (this is the default value).
-  2. Optional method: `export PF_BLACKLIST=<PF names>` the non used PFs, in order to prevent them from being allocated to the current cluster.
-     The user can list the PFs that should not be allocated to the current cluster, keeping in mind
-     that at least one (or 2 in case of migration), should not be listed, so they would be allocated for the current cluster.
-     Note: another reason to blacklist a PF, is in case its has a defect or should be kept for other operations (for example sniffing).
-- Clusters should be created one by another and not in parallel (to avoid races over SRIOV PF's).
-- The cluster names must be different.
-  This can be achieved by setting `export CLUSTER_NAME=sriov2` on the 2nd cluster.
-  The default `CLUSTER_NAME` is `sriov`.
-  The 2nd cluster registry would be exposed at `localhost:5001` automatically, once the `CLUSTER_NAME`
-  is set to a non default value.
-- Each cluster should be created on its own git clone folder, i.e
-  `/root/project/kubevirtci1`
-  `/root/project/kubevirtci2`
-  In order to switch between them, change dir to that folder and set the env variables `KUBECONFIG` and `KUBEVIRT_PROVIDER`.
+- An SR-IOV PF must be available for each cluster. In order to achieve that, there are two options:
+
+  1. Assign just one PF to each worker node of each cluster by using `export PF_COUNT_PER_NODE=1` (this is the
+     default value).
+  2. Optional method: `export PF_BLACKLIST=<PF names>` lists the unused PFs, in order to prevent them from being
+     allocated to the current cluster. The user can list the PFs that should not be allocated to the current cluster,
+     keeping in mind that at least one (or two, in case of migration) should not be listed, so they will be allocated
+     to the current cluster. Note: another reason to blacklist a PF is that it has a defect or should be kept for
+     other operations (for example sniffing).
+
+- Clusters should be created one after another and not in parallel (to avoid races over SR-IOV PFs).
+- The cluster names must be different. This can be achieved by setting `export CLUSTER_NAME=sriov2` on the 2nd
+  cluster. The default `CLUSTER_NAME` is `sriov`. The 2nd cluster registry is exposed at `localhost:5001`
+  automatically, once the `CLUSTER_NAME` is set to a non-default value.
+- Each cluster should be created in its own git clone folder, e.g.:
+  `/root/project/kubevirtci1`
+  `/root/project/kubevirtci2`
+  In order to switch between them, change dir to that folder and set the env variables `KUBECONFIG`
+  and `KUBEVIRT_PROVIDER`.
- In case only one PF exists, for example if running on prow which will assign only one PF per job in its own DinD,
-  Kubevirtci is agnostic and nothing needs to be done, since all conditions above are met.
-- Upper limit of the number of clusters that can be run on the same time equals number of PFs / number of PFs per cluster,
-  therefore, in case there is only one PF, only one cluster can be created.
-  Locally the actual limit currently supported is two clusters.
+  kubevirtci is agnostic and nothing needs to be done, since all the conditions above are met.
+- The upper limit on the number of clusters that can run at the same time equals the number of PFs divided by the
+  number of PFs per cluster; therefore, in case there is only one PF, only one cluster can be created. Locally, the
+  actual limit currently supported is two clusters.
- In order to use `make cluster-down`, please make sure the right `CLUSTER_NAME` is exported.
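
Putting the considerations above together, a two-cluster session might look like the following sketch. The folder names and `PF_BLACKLIST` values are illustrative, and `cluster-up/kubeconfig.sh` is assumed to print the cluster's kubeconfig path:

```bash
# First cluster: default CLUSTER_NAME=sriov, registry at localhost:5000.
cd /root/project/kubevirtci1
export KUBEVIRT_PROVIDER=kind-1.19-sriov
export PF_COUNT_PER_NODE=1
export PF_BLACKLIST="ens2f1"    # reserve this PF for the second cluster
make cluster-up
export KUBECONFIG=$(cluster-up/kubeconfig.sh)

# Second cluster: created only after the first is fully up.
cd /root/project/kubevirtci2
export KUBEVIRT_PROVIDER=kind-1.19-sriov
export CLUSTER_NAME=sriov2      # registry moves to localhost:5001
export PF_BLACKLIST="ens2f0"    # the PF the first cluster already took
make cluster-up
export KUBECONFIG=$(cluster-up/kubeconfig.sh)
```
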
88 changes: 88 additions & 0 deletions cluster-up/cluster/kind-1.19-sriov/conformance.json
@@ -0,0 +1,88 @@
{
  "Description": "DEFAULT",
  "UUID": "c3bc7d76-6ce8-4c8a-8bcb-5c7ae5fb22a3",
  "Version": "v0.50.0",
  "ResultsDir": "/tmp/sonobuoy",
  "Resources": [
    "apiservices",
    "certificatesigningrequests",
    "clusterrolebindings",
    "clusterroles",
    "componentstatuses",
    "configmaps",
    "controllerrevisions",
    "cronjobs",
    "customresourcedefinitions",
    "daemonsets",
    "deployments",
    "endpoints",
    "ingresses",
    "jobs",
    "leases",
    "limitranges",
    "mutatingwebhookconfigurations",
    "namespaces",
    "networkpolicies",
    "nodes",
    "persistentvolumeclaims",
    "persistentvolumes",
    "poddisruptionbudgets",
    "pods",
    "podlogs",
    "podsecuritypolicies",
    "podtemplates",
    "priorityclasses",
    "replicasets",
    "replicationcontrollers",
    "resourcequotas",
    "rolebindings",
    "roles",
    "servergroups",
    "serverversion",
    "serviceaccounts",
    "services",
    "statefulsets",
    "storageclasses",
    "validatingwebhookconfigurations",
    "volumeattachments"
  ],
  "Filters": {
    "Namespaces": ".*",
    "LabelSelector": ""
  },
  "Limits": {
    "PodLogs": {
      "Namespaces": "",
      "SonobuoyNamespace": true,
      "FieldSelectors": [],
      "LabelSelector": "",
      "Previous": false,
      "SinceSeconds": null,
      "SinceTime": null,
      "Timestamps": false,
      "TailLines": null,
      "LimitBytes": null,
      "LimitSize": "",
      "LimitTime": ""
    }
  },
  "QPS": 30,
  "Burst": 50,
  "Server": {
    "bindaddress": "0.0.0.0",
    "bindport": 8080,
    "advertiseaddress": "",
    "timeoutseconds": 10800
  },
  "Plugins": null,
  "PluginSearchPath": [
    "./plugins.d",
    "/etc/sonobuoy/plugins.d",
    "~/sonobuoy/plugins.d"
  ],
  "Namespace": "sonobuoy",
  "WorkerImage": "projects.registry.vmware.com/sonobuoy/sonobuoy:v0.50.0",
  "ImagePullPolicy": "IfNotPresent",
  "ImagePullSecrets": "",
  "ProgressUpdatesPort": "8099"
}
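
The file above is a Sonobuoy aggregator configuration (v0.50.0) used for conformance runs against the provider. If you want to drive it by hand rather than through the provider's scripts, a run could look roughly like this sketch (assuming the `sonobuoy` CLI is installed and `KUBECONFIG` points at the cluster):

```bash
# Launch a conformance run with the checked-in config and wait for it.
sonobuoy run --config cluster-up/cluster/kind-1.19-sriov/conformance.json --wait

# Fetch the results tarball and print a pass/fail summary.
results=$(sonobuoy retrieve)
sonobuoy results "$results"
```
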
4 changes: 2 additions & 2 deletions cluster-up/cluster/kind-1.19-sriov/provider.sh
@@ -24,7 +24,7 @@ function set_kind_params() {
}

function print_sriov_data() {
-  nodes=$(_kubectl get nodes -o=custom-columns=:.metadata.name | awk NF)
+  nodes="$(_kubectl get nodes -o=custom-columns=:.metadata.name | awk NF)"
  for node in $nodes; do
    if [[ ! "$node" =~ .*"control-plane".* ]]; then
      echo "Node: $node"
@@ -53,7 +53,7 @@ function up() {
  # In order to support live migration on containerized cluster we need to workaround
  # Libvirt uuid check for source and target nodes.
  # To do that we create PodPreset that mounts fake random product_uuid to virt-launcher pods,
-  # and kubevirt SRIOV tests namespace for the PodPrest beforhand.
+  # and kubevirt SRIOV tests namespace for the PodPreset beforehand.
  podpreset::expose_unique_product_uuid_per_node "$CLUSTER_NAME" "$SRIOV_TESTS_NS"

  print_sriov_data
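
For context, the PodPreset mentioned in the hunk above gives each virt-launcher pod a node-unique `product_uuid`, so libvirt does not see the same UUID on the migration source and target. A hypothetical sketch of such an object is below; the namespace, label and host path are illustrative, and PodPreset itself is the alpha `settings.k8s.io/v1alpha1` API (removed upstream in k8s 1.20):

```bash
# Hypothetical PodPreset mounting a fake product_uuid into virt-launcher pods.
cat <<'EOF' | cluster-up/kubectl.sh apply -f -
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: fake-product-uuid
  namespace: sriov-tests          # placeholder for the SRIOV tests namespace
spec:
  selector:
    matchLabels:
      kubevirt.io: virt-launcher  # assumed virt-launcher pod label
  volumeMounts:
    - name: product-uuid
      mountPath: /sys/class/dmi/id/product_uuid
  volumes:
    - name: product-uuid
      hostPath:
        path: /kind/product_uuid  # per-node file pre-filled with a random uuid
EOF
```
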
10 changes: 10 additions & 0 deletions cluster-up/cluster/kind-1.22-sriov/OWNERS
@@ -0,0 +1,10 @@
filters:
  ".*":
    reviewers:
      - qinqon
      - oshoval
      - phoracek
      - ormergi
    approvers:
      - qinqon
      - phoracek
101 changes: 101 additions & 0 deletions cluster-up/cluster/kind-1.22-sriov/README.md
@@ -0,0 +1,101 @@
# K8S 1.22.2 with SR-IOV in a Kind cluster

Provides a pre-deployed containerized k8s cluster with version 1.22.2 that runs
using [KinD](https://github.com/kubernetes-sigs/kind).
The cluster is completely ephemeral and is recreated on every cluster restart. The KubeVirt containers are built on the
local machine and are then pushed to a registry which is exposed at
`localhost:5000`.

This version also expects SR-IOV-enabled NICs (SR-IOV Physical Functions) on the current host, and will move
physical interfaces into the KinD cluster's worker node(s) so that they can be used through multus and the SR-IOV
components.

This provider also deploys [multus](https://github.com/k8snetworkplumbingwg/multus-cni),
[sriov-cni](https://github.com/k8snetworkplumbingwg/sriov-cni) and
[sriov-device-plugin](https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin).

## Bringing the cluster up

```bash
export KUBEVIRT_PROVIDER=kind-1.22-sriov
export KUBEVIRT_NUM_NODES=3
make cluster-up

$ cluster-up/kubectl.sh get nodes
NAME                  STATUS   ROLES                  AGE   VERSION
sriov-control-plane   Ready    control-plane,master   20h   v1.22.2
sriov-worker          Ready    worker                 20h   v1.22.2
sriov-worker2         Ready    worker                 20h   v1.22.2

$ cluster-up/kubectl.sh get pods -n kube-system -l app=multus
NAME                         READY   STATUS    RESTARTS   AGE
kube-multus-ds-amd64-d45n4   1/1     Running   0          20h
kube-multus-ds-amd64-g26xh   1/1     Running   0          20h
kube-multus-ds-amd64-mfh7c   1/1     Running   0          20h

$ cluster-up/kubectl.sh get pods -n sriov -l app=sriov-cni
NAME                            READY   STATUS    RESTARTS   AGE
kube-sriov-cni-ds-amd64-fv5cr   1/1     Running   0          20h
kube-sriov-cni-ds-amd64-q95q9   1/1     Running   0          20h

$ cluster-up/kubectl.sh get pods -n sriov -l app=sriovdp
NAME                                   READY   STATUS    RESTARTS   AGE
kube-sriov-device-plugin-amd64-h7h84   1/1     Running   0          20h
kube-sriov-device-plugin-amd64-xrr5z   1/1     Running   0          20h
```

## Bringing the cluster down

```bash
export KUBEVIRT_PROVIDER=kind-1.22-sriov
make cluster-down
```

This destroys the whole cluster and moves the SR-IOV NICs back to the root network namespace.

## Setting a custom kind version

In order to use a custom kind image / kind version, export `KIND_NODE_IMAGE`, `KIND_VERSION` and `KUBECTL_PATH` before
running cluster-up. For example, to use kind 0.9.0 (which is based on k8s-1.19.1):

```bash
export KIND_NODE_IMAGE="kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600"
export KIND_VERSION="0.9.0"
export KUBECTL_PATH="/usr/bin/kubectl"
```

This allows users to test or use custom images / different kind versions before making them official.
See https://github.com/kubernetes-sigs/kind/releases for details about the node images matching each kind version.

## Running multiple SR-IOV clusters locally

The kubevirtci SR-IOV provider supports running two clusters side by side, with a few known limitations.

General considerations:

- An SR-IOV PF must be available for each cluster. In order to achieve that, there are two options:

  1. Assign just one PF to each worker node of each cluster by using `export PF_COUNT_PER_NODE=1` (this is the
     default value).
  2. Optional method: `export PF_BLACKLIST=<PF names>` lists the unused PFs, in order to prevent them from being
     allocated to the current cluster. The user can list the PFs that should not be allocated to the current cluster,
     keeping in mind that at least one (or two, in case of migration) should not be listed, so they will be allocated
     to the current cluster. Note: another reason to blacklist a PF is that it has a defect or should be kept for
     other operations (for example sniffing).

- Clusters should be created one after another and not in parallel (to avoid races over SR-IOV PFs).
- The cluster names must be different. This can be achieved by setting `export CLUSTER_NAME=sriov2` on the 2nd
  cluster. The default `CLUSTER_NAME` is `sriov`. The 2nd cluster registry is exposed at `localhost:5001`
  automatically, once the `CLUSTER_NAME` is set to a non-default value.
- Each cluster should be created in its own git clone folder, e.g.:
  `/root/project/kubevirtci1`
  `/root/project/kubevirtci2`
  In order to switch between them, change dir to that folder and set the env variables `KUBECONFIG`
  and `KUBEVIRT_PROVIDER`.
- In case only one PF exists, for example if running on prow which will assign only one PF per job in its own DinD,
  kubevirtci is agnostic and nothing needs to be done, since all the conditions above are met.
- The upper limit on the number of clusters that can run at the same time equals the number of PFs divided by the
  number of PFs per cluster; therefore, in case there is only one PF, only one cluster can be created. Locally, the
  actual limit currently supported is two clusters.
- In order to use `make cluster-down`, please make sure the right `CLUSTER_NAME` is exported (see the sketch below).
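
For example, to tear down the second of two side-by-side clusters (a sketch mirroring the multi-cluster setup above):

```bash
cd /root/project/kubevirtci2        # the clone that created this cluster
export KUBEVIRT_PROVIDER=kind-1.22-sriov
export CLUSTER_NAME=sriov2          # must match the name used at cluster-up
make cluster-down
```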