Add missing OCP wrapper documentation (kube-burner#297)
* Add missing OCP wrapper documentation

Fixes: kube-burner#247

The workloads available from the OCP wrapper weren't documented. Also
adding more information about the `clusterMetadata` document.

Signed-off-by: Raul Sevilla <[email protected]>

* Standardize import naming

Signed-off-by: Raul Sevilla <[email protected]>

---------

Signed-off-by: Raul Sevilla <[email protected]>
rsevilla87 authored Apr 25, 2023
1 parent a05fcfa commit 4a79917
Showing 4 changed files with 172 additions and 31 deletions.
159 changes: 150 additions & 9 deletions docs/ocp.md
@@ -4,15 +4,16 @@ The kube-burner binary brings a very opinionated OpenShift wrapper designed to s
This wrapper is hosted under the `kube-burner ocp` subcommand that currently looks like:

```console
$ kube-burner ocp -h
This subcommand is meant to be used against OpenShift clusters and serve as a shortcut to trigger well-known workloads

Usage:
kube-burner ocp [command]

Available Commands:
cluster-density Runs cluster-density workload
cluster-density-v2 Runs cluster-density-v2 workload
cluster-density-ms Runs cluster-density-ms workload
cluster-density-v2 Runs cluster-density-v2 workload
node-density Runs node-density workload
node-density-cni Runs node-density-cni workload
node-density-heavy Runs node-density-heavy workload
@@ -24,19 +25,18 @@ Flags:
--es-server string Elastic Search endpoint
--extract Extract workload in the current directory
--gc Garbage collect created namespaces (default true)
--local-indexing Local indexing
-h, --help help for ocp
--local-indexing Enable local indexing
--metrics-endpoint string YAML file with a list of metric endpoints
--qps int QPS (default 20)
--timeout duration Benchmark timeout (default 2h0m0s)
--timeout duration Benchmark timeout (default 3h0m0s)
--user-metadata string User provided metadata file, in YAML format
--uuid string Benchmark UUID (default "ff60bd1c-df27-4713-be3e-6b92acdd4d72")
--uuid string Benchmark UUID (default "d18989c4-4f8a-4a14-b711-9afae69a9140")

Global Flags:
--log-level string Allowed values: trace, debug, info, warn, error, fatal (default "info")
--log-level string Allowed values: debug, info, warn, error, fatal (default "info")

Use "kube-burner ocp [command] --help" for more information about a command.

```

## Usage
@@ -46,16 +46,15 @@ In order to trigger one of the supported workloads using this subcommand you hav
Running node-density with 100 pods per node

```console
$ kube-burner ocp node-density --pods-per-node=100
kube-burner ocp node-density --pods-per-node=100
```

Running cluster-density with multiple endpoints support

```console
$ kube-burner ocp cluster-density --iterations=1 --churn-duration=2m0s --es-index kube-burner --es-server https://www.esurl.com:443 --metrics-endpoint metrics-endpoints.yaml
kube-burner ocp cluster-density --iterations=1 --churn-duration=2m0s --es-index kube-burner --es-server https://www.esurl.com:443 --metrics-endpoint metrics-endpoints.yaml
```


With the node-density command above, the wrapper calculates the required number of pods to deploy across all worker nodes of the cluster.

This wrapper provides the following benefits among others:
@@ -65,6 +64,119 @@ This wrapper provides the following benefits among others:
- Prevents modifying configuration files to tweak some of the parameters of the workloads
- Discovers the Prometheus URL and authentication token, so the user does not have to perform these steps before running the workloads.

## Available workloads

### cluster-density, cluster-density-v2 and cluster-density-ms

Control-plane density focused workloads that create deployments, builds, secrets, services and more across the cluster. Each iteration of these workloads creates a new namespace; they support the following flags.

```shell
$ kube-burner ocp cluster-density -h
Runs cluster-density workload

Usage:
kube-burner ocp cluster-density [flags]

Flags:
--churn Enable churning (default true)
--churn-delay duration Time to wait between each churn (default 2m0s)
--churn-duration duration Churn duration (default 1h0m0s)
--churn-percent int Percentage of job iterations that kube-burner will churn each round (default 10)
--iterations int cluster-density iterations
```
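
As a quick illustration, the churn-related flags can be combined in a single invocation. The iteration count and churn values below are arbitrary examples, not recommendations:

```console
kube-burner ocp cluster-density-v2 --iterations=500 --churn-duration=30m --churn-delay=1m --churn-percent=20
```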

---

Each iteration of **cluster-density** creates the following objects in its namespace:

- 1 imagestream
- 1 build. The OCP internal container registry must be set up beforehand, since the resulting container image will be pushed there.
- 5 deployments with two pod replicas (pause) mounting 4 secrets, 4 configmaps and 1 downwardAPI volume each
- 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the previous deployments
- 1 edge route pointing to the first service
- 10 secrets containing a 2048-character random string
- 10 configMaps containing a 2048-character random string

---

Each iteration of **cluster-density-v2** creates the following objects in its namespace:

- 1 imagestream
- 1 build. The OCP internal container registry must be set up beforehand, since the resulting container image will be pushed there.
- 3 deployments with 2 pod replicas (nginx) mounting 4 secrets, 4 configmaps and 1 downwardAPI volume each
- 2 deployments with 2 pod replicas (curl) mounting 4 secrets, 4 configmaps and 1 downwardAPI volume each. These pods are configured with a readinessProbe that makes a request to one of the services and one of the routes created by this workload every 10 seconds.
- 5 services, each one pointing to the TCP/8080 port of one of the nginx deployments.
- 2 edge routes pointing to the first and second services respectively
- 10 secrets containing a 2048-character random string
- 10 configMaps containing a 2048-character random string
- 3 network policies
- 1 deny-all traffic
- 1 allow traffic from the client (curl) pods to the server (nginx) pods
- 1 allow traffic from openshift-ingress namespace (where routers are deployed by default) to the namespace

---

Each iteration of **cluster-density-ms** creates the following objects in its namespace:

- 1 imagestream
- 4 deployments with two pod replicas (pause) mounting 4 secrets, 4 configmaps and 1 downwardAPI volume each
- 2 services, each one pointing to the TCP/8080 and TCP/8443 ports of the first and second deployment respectively.
- 1 edge route pointing to the first service
- 20 secrets containing a 2048-character random string
- 10 configMaps containing a 2048-character random string

### node-density, node-density-cni and node-density-heavy

The **node-density** workload fills all the worker nodes of the cluster with pause pods. It can be customized with the following flags.

```shell
$ kube-burner ocp node-density -h
Runs node-density workload

Usage:
kube-burner ocp node-density [flags]

Flags:
--container-image string Container image (default "gcr.io/google_containers/pause:3.1")
-h, --help help for node-density
--pod-ready-threshold duration Pod ready timeout threshold (default 5s)
--pods-per-node int Pods per node (default 245)
```
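
For instance, the pod density and readiness threshold can be tuned together. The values below are arbitrary:

```console
kube-burner ocp node-density --pods-per-node=200 --pod-ready-threshold=30s
```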

---

The **node-density-cni** workload is similar, with the difference that it creates two deployments, client/curl and server/nginx, and 1 service backed by the server deployment. The client application is configured with a startupProbe that makes requests to the service every second with a timeout of 600s.

```shell
$ kube-burner ocp node-density-cni -h
Runs node-density-cni workload

Usage:
kube-burner ocp node-density-cni [flags]

Flags:
-h, --help help for node-density-cni
--pods-per-node int Pods per node (default 245)
```

---

The **node-density-heavy** workload creates a single namespace with two deployments, a postgresql database and a simple client that performs periodic queries against it, plus a service that is used by the client to reach the database.

```shell
$ kube-burner ocp node-density-heavy -h
Runs node-density-heavy workload

Usage:
kube-burner ocp node-density-heavy [flags]

Flags:
-h, --help help for node-density-heavy
--pod-ready-threshold duration Pod ready timeout threshold (default 1h0m0s)
--pods-per-node int Pods per node (default 245)
--probes-period int Perf app readiness/liveness probes period in seconds (default 10)
```
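
For example, a run that relaxes the probes period and the pod-ready threshold could look like the following (all values are arbitrary):

```console
kube-burner ocp node-density-heavy --pods-per-node=200 --probes-period=20 --pod-ready-threshold=2h
```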

## Customizing workloads

It's possible to customize the workload configuration by extracting it, updating it and finally running it:
@@ -76,3 +188,32 @@ alerts.yml metrics.yml node-density.yml pod.yml
$ vi node-density.yml # Perform modifications accordingly
$ kube-burner ocp node-density --pods-per-node=100 # Run workload
```

### Cluster metadata

As soon as a benchmark finishes, kube-burner indexes the cluster metadata in the configured indexer. At the time of writing, this document is based on the following golang struct:

```golang
type clusterMetadata struct {
MetricName string `json:"metricName,omitempty"`
UUID string `json:"uuid"`
Platform string `json:"platform"`
OCPVersion string `json:"ocpVersion"`
K8SVersion string `json:"k8sVersion"`
MasterNodesType string `json:"masterNodesType"`
WorkerNodesType string `json:"workerNodesType"`
InfraNodesType string `json:"infraNodesType"`
WorkerNodesCount int `json:"workerNodesCount"`
InfraNodesCount int `json:"infraNodesCount"`
TotalNodes int `json:"totalNodes"`
SDNType string `json:"sdnType"`
Benchmark string `json:"benchmark"`
Timestamp time.Time `json:"timestamp"`
EndDate time.Time `json:"endDate"`
ClusterName string `json:"clusterName"`
Passed bool `json:"passed"`
Metadata map[string]interface{} `json:"metadata,omitempty"`
}
```

Where `metricName` is hardcoded to `clusterMetadata`.
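
As a rough sketch of the shape of the indexed document, the snippet below copies the struct and marshals an invented instance to JSON; every field value is made up purely for illustration:

```golang
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// clusterMetadata mirrors the struct shown above; it is reproduced here only to
// illustrate the JSON document that ends up in the indexer.
type clusterMetadata struct {
	MetricName       string                 `json:"metricName,omitempty"`
	UUID             string                 `json:"uuid"`
	Platform         string                 `json:"platform"`
	OCPVersion       string                 `json:"ocpVersion"`
	K8SVersion       string                 `json:"k8sVersion"`
	MasterNodesType  string                 `json:"masterNodesType"`
	WorkerNodesType  string                 `json:"workerNodesType"`
	InfraNodesType   string                 `json:"infraNodesType"`
	WorkerNodesCount int                    `json:"workerNodesCount"`
	InfraNodesCount  int                    `json:"infraNodesCount"`
	TotalNodes       int                    `json:"totalNodes"`
	SDNType          string                 `json:"sdnType"`
	Benchmark        string                 `json:"benchmark"`
	Timestamp        time.Time              `json:"timestamp"`
	EndDate          time.Time              `json:"endDate"`
	ClusterName      string                 `json:"clusterName"`
	Passed           bool                   `json:"passed"`
	Metadata         map[string]interface{} `json:"metadata,omitempty"`
}

func main() {
	// All values below are invented for illustration purposes only.
	doc := clusterMetadata{
		MetricName:       "clusterMetadata",
		UUID:             "00000000-0000-0000-0000-000000000000",
		Platform:         "AWS",
		OCPVersion:       "4.12.11",
		K8SVersion:       "v1.25.7",
		MasterNodesType:  "m6i.xlarge",
		WorkerNodesType:  "m6i.xlarge",
		InfraNodesType:   "r5.xlarge",
		WorkerNodesCount: 3,
		InfraNodesCount:  2,
		TotalNodes:       8,
		SDNType:          "OVNKubernetes",
		Benchmark:        "cluster-density-v2",
		Timestamp:        time.Now(),
		EndDate:          time.Now().Add(1 * time.Hour),
		ClusterName:      "example-cluster",
		Passed:           true,
		Metadata:         map[string]interface{}{"userKey": "userValue"},
	}
	out, _ := json.MarshalIndent(doc, "", "  ")
	fmt.Println(string(out))
}
```

Fields tagged `omitempty`, such as `metadata`, are dropped from the document when empty.
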
8 changes: 4 additions & 4 deletions pkg/burner/namespaces.go
@@ -19,15 +19,15 @@ import (
"time"

log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
)

func createNamespace(clientset *kubernetes.Clientset, namespaceName string, nsLabels map[string]string) error {
ns := v1.Namespace{
ns := corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{Name: namespaceName, Labels: nsLabels},
}

@@ -40,8 +40,8 @@ func createNamespace(clientset *kubernetes.Clientset, namespaceName string, nsLa
if errors.IsAlreadyExists(err) {
log.Infof("Namespace %s already exists", ns.Name)
nsSpec, _ := clientset.CoreV1().Namespaces().Get(context.TODO(), namespaceName, metav1.GetOptions{})
if nsSpec.Status.Phase == v1.NamespaceTerminating {
log.Warnf("Namespace %s is in %v state, retrying", namespaceName, v1.NamespaceTerminating)
if nsSpec.Status.Phase == corev1.NamespaceTerminating {
log.Warnf("Namespace %s is in %v state, retrying", namespaceName, corev1.NamespaceTerminating)
return false, nil
}
return true, nil
14 changes: 7 additions & 7 deletions pkg/burner/pre_load.go
@@ -23,7 +23,7 @@ import (
log "github.com/sirupsen/logrus"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
)
@@ -56,7 +56,7 @@ func preLoadImages(job Executor) error {
// 5 minutes should be more than enough to cleanup this namespace
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
CleanupNamespaces(ctx, v1.ListOptions{LabelSelector: "kube-burner-preload=true"}, true)
CleanupNamespaces(ctx, metav1.ListOptions{LabelSelector: "kube-burner-preload=true"}, true)
return nil
}

@@ -107,19 +107,19 @@ func createDSs(imageList []string, namespaceLabels map[string]string) error {
Image: image,
}
ds := appsv1.DaemonSet{
TypeMeta: v1.TypeMeta{
TypeMeta: metav1.TypeMeta{
Kind: "DaemonSet",
APIVersion: "apps/v1",
},
ObjectMeta: v1.ObjectMeta{
ObjectMeta: metav1.ObjectMeta{
GenerateName: dsName,
},
Spec: appsv1.DaemonSetSpec{
Selector: &v1.LabelSelector{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"app": dsName},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: v1.ObjectMeta{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"app": dsName},
},
Spec: corev1.PodSpec{
@@ -136,7 +136,7 @@ func createDSs(imageList []string, namespaceLabels map[string]string) error {
},
}
log.Infof("Pre-load: Creating DaemonSet using image %s in namespace %s", image, preLoadNs)
_, err := ClientSet.AppsV1().DaemonSets(preLoadNs).Create(context.TODO(), &ds, v1.CreateOptions{})
_, err := ClientSet.AppsV1().DaemonSets(preLoadNs).Create(context.TODO(), &ds, metav1.CreateOptions{})
if err != nil {
return err
}
22 changes: 11 additions & 11 deletions pkg/discovery/discovery.go
@@ -25,7 +25,7 @@ import (
log "github.com/sirupsen/logrus"

authenticationv1 "k8s.io/api/authentication/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
@@ -80,7 +80,7 @@ func getPrometheusURL(dynamicClient dynamic.Interface) (string, error) {
Group: routeGroup,
Version: routeVersion,
Resource: routeResource,
}).Namespace("openshift-monitoring").Get(context.TODO(), "prometheus-k8s", v1.GetOptions{})
}).Namespace("openshift-monitoring").Get(context.TODO(), "prometheus-k8s", metav1.GetOptions{})
if err != nil {
return "", err
}
@@ -103,7 +103,7 @@ func getBearerToken(clientset *kubernetes.Clientset) (string, error) {
ExpirationSeconds: pointer.Int64Ptr(int64(tokenExpiration.Seconds())),
},
}
response, err := clientset.CoreV1().ServiceAccounts("openshift-monitoring").CreateToken(context.TODO(), "prometheus-k8s", &request, v1.CreateOptions{})
response, err := clientset.CoreV1().ServiceAccounts("openshift-monitoring").CreateToken(context.TODO(), "prometheus-k8s", &request, metav1.CreateOptions{})
if err != nil {
return "", err
}
@@ -113,20 +113,20 @@

// GetWorkerNodeCount returns the number of worker nodes
func (da *Agent) GetWorkerNodeCount() (int, error) {
nodeList, err := da.clientSet.CoreV1().Nodes().List(context.TODO(), v1.ListOptions{LabelSelector: workerNodeSelector})
nodeList, err := da.clientSet.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{LabelSelector: workerNodeSelector})
log.Infof("Listed nodes after using selector %s: %d", workerNodeSelector, len(nodeList.Items))
return len(nodeList.Items), err
}

// GetCurrentPodCount returns the number of current running pods across all worker nodes
func (da *Agent) GetCurrentPodCount() (int, error) {
var podCount int
nodeList, err := da.clientSet.CoreV1().Nodes().List(context.TODO(), v1.ListOptions{LabelSelector: workerNodeSelector})
nodeList, err := da.clientSet.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{LabelSelector: workerNodeSelector})
if err != nil {
return podCount, err
}
for _, node := range nodeList.Items {
podList, err := da.clientSet.CoreV1().Pods(v1.NamespaceAll).List(context.TODO(), v1.ListOptions{FieldSelector: "status.phase=Running,spec.nodeName=" + node.Name})
podList, err := da.clientSet.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{FieldSelector: "status.phase=Running,spec.nodeName=" + node.Name})
if err != nil {
return podCount, err
}
@@ -143,7 +143,7 @@ func (da *Agent) GetInfraDetails() (InfraObj, error) {
Group: "config.openshift.io",
Version: "v1",
Resource: "infrastructures",
}).Get(context.TODO(), "cluster", v1.GetOptions{})
}).Get(context.TODO(), "cluster", metav1.GetOptions{})
if err != nil {
return infraJSON, err
}
@@ -166,7 +166,7 @@ func (da *Agent) GetVersionInfo() (VersionObj, error) {
Group: "config.openshift.io",
Version: "v1",
Resource: "clusterversions",
}).Get(context.TODO(), "version", v1.GetOptions{})
}).Get(context.TODO(), "version", metav1.GetOptions{})
if err != nil {
return versionInfo, err
}
@@ -185,7 +185,7 @@
// GetNodesInfo returns node information
func (da *Agent) GetNodesInfo() (NodeInfo, error) {
var nodeInfoData NodeInfo
nodes, err := da.clientSet.CoreV1().Nodes().List(context.TODO(), v1.ListOptions{})
nodes, err := da.clientSet.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
return nodeInfoData, err
}
@@ -216,7 +216,7 @@ func (da *Agent) GetSDNInfo() (string, error) {
Group: "config.openshift.io",
Version: "v1",
Resource: "networks",
}).Get(context.TODO(), "cluster", v1.GetOptions{})
}).Get(context.TODO(), "cluster", metav1.GetOptions{})
if err != nil {
return "", err
}
@@ -233,7 +233,7 @@ func (da *Agent) GetDefaultIngressDomain() (string, error) {
Group: "operator.openshift.io",
Version: "v1",
Resource: "ingresscontrollers",
}).Namespace("openshift-ingress-operator").Get(context.TODO(), "default", v1.GetOptions{})
}).Namespace("openshift-ingress-operator").Get(context.TODO(), "default", metav1.GetOptions{})
if err != nil {
return "", err
}
