Kubernetes Based System Test Framework

Introduction

The K8s (Kubernetes) based system test framework allows the user to write system tests against Pravega. The tests are run using a customized JUnit test runner which ensures that Pravega is deployed with the test-specific configuration (specified using an @Environment annotated method) on a K8s cluster, along with the Pravega-operator and Zookeeper-operator. The system tests running against the deployed Pravega instance also run inside a pod on the K8s cluster, thereby simplifying running tests on remote clusters. The framework also allows users to dynamically scale the components (either Pravega components or Zookeeper instances) during the course of the tests, which is especially helpful for verifying and validating failover scenarios of the different components/services.
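
For reference, below is a minimal sketch of the shape of such a test class. It assumes the runner and the annotation live in the framework's io.pravega.test.system.framework package; the class and method names are illustrative (see PravegaTest in the repository for a complete example).

import io.pravega.test.system.framework.Environment;
import io.pravega.test.system.framework.SystemTestRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

// The customized runner deploys the services declared in the @Environment
// method on the K8s cluster, then executes the @Test methods inside a pod.
@RunWith(SystemTestRunner.class)
public class SampleSystemTest {

    // Invoked before the tests run; deploys the services this test needs
    // with any test-specific configuration.
    @Environment
    public static void initialize() {
        // e.g. deploy Zookeeper, Bookkeeper and Pravega with custom replica counts
    }

    @Test
    public void sampleTest() {
        // test logic; this executes inside a pod on the K8s cluster
    }
}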

Prerequisites

There are two prerequisites for running system tests on K8s.

  1. Ensure that you are able to access the K8s cluster using kubectl. The K8s cluster can be a Minikube cluster, a GCP cluster, or a PKS-based cluster. Note: the kubectl version should be >= v1.13.0.

  2. Ensure tier2 storage is configured and a PersistentVolumeClaim named pravega-tier2 exists. (Refer to https://github.com/pravega/pravega-operator for the steps to configure the tier2 PVC.)
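
A quick way to check both prerequisites from the command line (a sketch using standard kubectl commands):

// Prints the client and server versions; the client should be >= v1.13.0.
kubectl version

// The pravega-tier2 claim should be listed with STATUS Bound.
kubectl get pvc pravega-tier2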

Commands

The following commands can be used to run system tests with Pravega.

a. Run a specific system test against the latest version of Pravega

gradle --info startK8SystemTests -DimageVersion=latest 
       --tests io.pravega.test.system.PravegaTest

This command will deploy Pravega and related components such as Zookeeper and Bookkeeper with the latest tag from Docker Hub, and run PravegaTest against it.

b. Run all the existing system tests against a specific version of Pravega residing inside a private docker repository.

gradle --info startK8SystemTests -DdockerRegistryUrl=private-docker-repo:8116 
       -DimagePrefix=customfix -DimageVersion=0.5.0-2066.6ba29eb

This command will deploy Pravega and related components from the docker repo private-docker-repo with version 0.5.0-2066.6ba29eb. The Pravega docker images will have the tag customfix/pravega:0.5.0-2066.6ba29eb. If the imagePrefix system property is not specified, then pravega is used as the default prefix.
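
Putting the properties together, the deployed pods would pull an image reference of the following form (illustrative, composed from the sample values above):

private-docker-repo:8116/customfix/pravega:0.5.0-2066.6ba29eb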

FAQ

  1. Where can we find the test logs after the execution of the test?

The test logs can be found under the pravega/test/system/build/test-results folder. The framework starts downloading the logs as soon as the tests begin executing; users can tail these files to follow the latest logs of the test.
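
For example (the file name here is hypothetical; actual names depend on the tests being run):

tail -f pravega/test/system/build/test-results/PravegaTest.log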

  2. How can we download the Pravega component logs?

The Pravega log collection utility Pravega-k8-log-collector can be used to download the component logs from the cluster.

Minikube Setup

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day. This section documents the steps involved in setting up Minikube and configuring an NFS-based tier2.

Installation

The detailed steps to install Minikube are documented at https://kubernetes.io/docs/tasks/tools/install-minikube/. Minikube, being a single-node K8s cluster, runs as a single virtual machine on hypervisors like VirtualBox or VMware Fusion.

The easiest way to install Minikube on a Windows development machine is to use Chocolatey (https://chocolatey.org/). Steps to install Chocolatey are documented at https://chocolatey.org/install (remember to use an administrative shell). Once Chocolatey is installed, run choco install minikube kubernetes-cli. A Windows-based installer is also available at https://github.com/kubernetes/minikube/releases/latest.

Accessing Minikube

Once Minikube is started using minikube start, verify that a kube config exists at $HOME/.kube/config. Note: this config is usually created automatically when Minikube is set up. The key and crt files are also generated and placed in the .minikube folder in the home directory (Windows path: C:\Users\user1\.minikube). A sample kubeconfig is shown below.

apiVersion: v1
clusters:
- cluster:   
    insecure-skip-tls-verify: true
    server: https://YOUR_SERVER:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/user1/work/minikube/client.crt
    client-key: /Users/user1/work/minikube/client.key

If you are accessing Minikube from a different VM, you can configure the kubectl config so that the server IP is that of the Minikube VM. Note: the command kubectl config use-context minikube can be used to switch the context between a PKS cluster and Minikube.
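
A sketch of pointing kubectl at the Minikube VM from another machine (the IP below is a placeholder; substitute the address reported by minikube ip):

// Point the cluster entry at the Minikube VM.
kubectl config set-cluster minikube --server=https://192.168.99.100:8443 --insecure-skip-tls-verify=true

// Switch to the minikube context and verify connectivity.
kubectl config use-context minikube
kubectl get nodes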

Configure Minikube and Set Up tier2

  • Now that we have Minikube running and can access it using kubectl, the next step is to ensure that the memory and CPU resources provisioned to the Minikube VM are sufficient to run the operators (Zookeeper operator and Pravega operator), Bookkeeper (3 bookies) and the Pravega containers. This can be done by changing the properties of the Minikube VM on the VirtualBox/VMware console directly. The other option is to modify C:\Users\user1\.minikube\machines\minikube\config.json (ref: https://github.com/kubernetes/minikube/issues/604#issuecomment-309296149). Note: the minikube ssh command can be used to log into the Minikube VM.
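
For example (a sketch; the resource values are illustrative):

// The flags below only take effect when the VM is first created; an existing
// VM must be deleted and recreated (or have its config.json edited as noted
// above) for the new values to apply.
minikube delete
minikube start --memory=8192 --cpus=4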

  • After ensuring the Minikube VM has the required amount of memory and CPU configured, the next step is to create tier2 storage using Helm (https://helm.sh/; installation steps: https://github.com/helm/helm). The steps to create an NFS-based tier2 are documented at https://github.com/pravega/pravega-operator and are listed here.

helm init --upgrade

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

helm install stable/nfs-server-provisioner

kubectl get storageclass // this command should list nfs.

// Command to create pvc.
cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pravega-tier2
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
EOF

// command to check if pvc is created
kubectl get pvc

Once the PVC is created, the user can run system tests on Minikube by following the steps mentioned at https://github.com/pravega/pravega/wiki/Kubernetes-Based-System-Test-Framework#commands.
