From 66f1d52dabeb2ec8f21aad84c3a1548d0a2e45ea Mon Sep 17 00:00:00 2001 From: kmova Date: Sat, 17 Jul 2021 08:43:48 +0530 Subject: [PATCH 01/10] chore(release): update references to 2.11 Signed-off-by: kmova --- docs/d-cStorguide.md | 12 +++++----- docs/jivaguide.md | 12 +++++----- docs/releases.md | 54 ++++++++++++++++++++++++++++++++++++++++++- docs/ugcstor-csi.md | 11 +++------ docs/ugcstor.md | 25 ++++++++++++-------- website/sidebars.json | 2 +- website/versions.json | 3 ++- 7 files changed, 86 insertions(+), 33 deletions(-) diff --git a/docs/d-cStorguide.md b/docs/d-cStorguide.md index 757d9b14e..d841fda26 100644 --- a/docs/d-cStorguide.md +++ b/docs/d-cStorguide.md @@ -1259,11 +1259,11 @@ Below table lists the storage policies supported by cStor. These policies should | cStor Storage Policy | Mandatory | Default | Purpose | | ------------------------------------------------------------ | --------- | --------------------------------------- | ------------------------------------------------------------ | | [ReplicaCount](#Replica-Count-Policy) | No | 3 | Defines the number of cStor volume replicas | -| [VolumeControllerImage](#Volume-Controller-Image-Policy) | | openebs/cstor-volume-mgmt:2.10.0 | Dedicated side car for command management like taking snapshots etc. Can be used to apply a specific issue or feature for the workload | -| [VolumeTargetImage](#Volume-Target-Image-Policy) | | openebs/cstor-istgt:2.10.0 | iSCSI protocol stack dedicated to the workload. Can be used to apply a specific issue or feature for the workload | +| [VolumeControllerImage](#Volume-Controller-Image-Policy) | | openebs/cstor-volume-mgmt:2.11.0 | Dedicated side car for command management like taking snapshots etc. Can be used to apply a specific issue or feature for the workload | +| [VolumeTargetImage](#Volume-Target-Image-Policy) | | openebs/cstor-istgt:2.11.0 | iSCSI protocol stack dedicated to the workload. Can be used to apply a specific issue or feature for the workload | | [StoragePoolClaim](#Storage-Pool-Claim-Policy) | Yes | N/A (a valid pool must be provided) | The cStorPool on which the volume replicas should be provisioned | | [VolumeMonitor](#Volume-Monitor-Policy) | | ON | When ON, a volume exporter sidecar is launched to export Prometheus metrics. | -| [VolumeMonitorImage](#Volume-Monitoring-Image-Policy) | | openebs/m-exporter:2.10.0 | Used when VolumeMonitor is ON. A dedicated metrics exporter to the workload. Can be used to apply a specific issue or feature for the workload | +| [VolumeMonitorImage](#Volume-Monitoring-Image-Policy) | | openebs/m-exporter:2.11.0 | Used when VolumeMonitor is ON. A dedicated metrics exporter to the workload. Can be used to apply a specific issue or feature for the workload | | [FSType](#Volume-File-System-Type-Policy) | | ext4 | Specifies the filesystem that the volume should be formatted with. 
Other values are `xfs` | | [TargetNodeSelector](#Target-NodeSelector-Policy) | | Decided by Kubernetes scheduler | Specify the label in `key: value` format to notify Kubernetes scheduler to schedule cStor target pod on the nodes that match label | | [TargetResourceLimits](#Target-ResourceLimits-Policy) | | Decided by Kubernetes scheduler | CPU and Memory limits to cStor target pod | @@ -1327,7 +1327,7 @@ metadata: annotations: cas.openebs.io/config: | - name: VolumeControllerImage - value: openebs/cstor-volume-mgmt:2.10.0 + value: openebs/cstor-volume-mgmt:2.11.0 - name: StoragePoolClaim value: "cstor-disk-pool" openebs.io/cas-type: cstor @@ -1346,7 +1346,7 @@ metadata: annotations: cas.openebs.io/config: | - name: VolumeTargetImage - value:openebs/cstor-istgt:2.10.0 + value:openebs/cstor-istgt:2.11.0 - name: StoragePoolClaim value: "cstor-disk-pool" openebs.io/cas-type: cstor @@ -1384,7 +1384,7 @@ metadata: annotations: cas.openebs.io/config: | - name: VolumeMonitorImage - value: openebs/m-exporter:2.10.0 + value: openebs/m-exporter:2.11.0 - name: StoragePoolClaim value: "cstor-sparse-pool" openebs.io/cas-type: cstor diff --git a/docs/jivaguide.md b/docs/jivaguide.md index 618730dd0..14713ee87 100644 --- a/docs/jivaguide.md +++ b/docs/jivaguide.md @@ -577,11 +577,11 @@ Below table lists the storage policies supported by Jiva. These policies can be | JIVA STORAGE POLICY | MANDATORY | DEFAULT | PURPOSE | | ------------------------------------------------------------ | --------- | --------------------------------- | ------------------------------------------------------------ | | [ReplicaCount](#Replica-Count-Policy) | No | 3 | Defines the number of Jiva volume replicas | -| [Replica Image](#Replica-Image-Policy) | | openebs/m-apiserver:2.10.0 | To use particular Jiva replica image | -| [ControllerImage](#Controller-Image-Policy) | | openebs/jiva:2.10.0 | To use particular Jiva Controller Image | +| [Replica Image](#Replica-Image-Policy) | | openebs/m-apiserver:2.11.0 | To use particular Jiva replica image | +| [ControllerImage](#Controller-Image-Policy) | | openebs/jiva:2.11.0 | To use particular Jiva Controller Image | | [StoragePool](#Storage-Pool-Policy) | Yes | default | A storage pool provides a persistent path for an OpenEBS volume. It can be a directory on host OS or externally mounted disk. | | [VolumeMonitor](#Volume-Monitor-Policy) | | ON | When ON, a volume exporter sidecar is launched to export Prometheus metrics. | -| [VolumeMonitorImage](#Volume-Monitoring-Image-Policy) | | openebs/m-exporter:2.10.0 | Used when VolumeMonitor is ON. A dedicated metrics exporter to the workload. Can be used to apply a specific issue or feature for the workload | +| [VolumeMonitorImage](#Volume-Monitoring-Image-Policy) | | openebs/m-exporter:2.11.0 | Used when VolumeMonitor is ON. A dedicated metrics exporter to the workload. Can be used to apply a specific issue or feature for the workload | | [Volume FSType](#Volume-File-System-Type-Policy) | | ext4 | Specifies the filesystem that the volume should be formatted with. Other values are `xfs` | | [Volume Space Reclaim](#Volume-Space-Reclaim-Policy) | | false | It will specify whether data need to be retained post PVC deletion. | | [TargetNodeSelector](#Targe-NodeSelector-Policy) | | Decided by Kubernetes scheduler | Specify the label in `key: value` format to notify Kubernetes scheduler to schedule Jiva target pod on the nodes that match label. 
| @@ -627,7 +627,7 @@ metadata: openebs.io/cas-type: jiva cas.openebs.io/config: | - name: ReplicaImage - value: openebs/m-apiserver:2.10.0 + value: openebs/m-apiserver:2.11.0 provisioner: openebs.io/provisioner-iscsi ``` @@ -644,7 +644,7 @@ metadata: openebs.io/cas-type: jiva cas.openebs.io/config: | - name: ControllerImage - value: openebs/jiva:2.10.0 + value: openebs/jiva:2.11.0 provisioner: openebs.io/provisioner-iscsi ``` @@ -733,7 +733,7 @@ metadata: openebs.io/cas-type: jiva cas.openebs.io/config: | - name: VolumeMonitorImage - value: openebs/m-exporter:2.10.0 + value: openebs/m-exporter:2.11.0 provisioner: openebs.io/provisioner-iscsi ``` diff --git a/docs/releases.md b/docs/releases.md index 58eb37933..d87a5c92c 100644 --- a/docs/releases.md +++ b/docs/releases.md @@ -6,9 +6,61 @@ sidebar_label: Releases ------ -## 2.10.0 - Jun 15 2021 +## 2.11.0 - Jul 15 2021
Latest Release
(Recommended)
+OpenEBS v2.11 is another maintenance release before moving towards 3.0 primarily focusing on enhancing the E2E tests, build, release workflows, and documentation. This release also includes enhancements to improve the user experience and fixes for bugs reported by users and E2E tools. There has been some significant progress made on the alpha features as well. + +--- + +**Deprecation Notice**: Jiva and cStor `out-of-tree external` provisioners will be deprecated by Dec 2021 in favor of the corresponding CSI Drivers. The out of tree provisioners for Jiva and cStor will stop working from Kubernetes 1.22 and forward as the version of the custom resources used by those provisioners will be deprecated. We strongly recommend you plan for migrating your volumes to [cStor CSI](https://github.com/openebs/upgrade/blob/master/docs/migration.md) or [Jiva CSI](https://github.com/openebs/upgrade/blob/master/docs/migration.md#migrating-jiva-external-provisioned-volumes-to-jiva-csi-volumes-experimental) as early as possible. + +If you have any questions or need help with the migration please reach out to us on our Kubernetes Community slack channel [#openebs](https://kubernetes.slack.com/archives/CUAKPFU78). + +--- + +### Key Improvements +- [kubectl plugin for openebs](https://github.com/openebs/openebsctl/blob/develop/README.md) has been enhanced to provide additional information about OpenEBS storage components like: + - Block devices managed by OpenEBS (`kubectl openebs get bd`) + - Jiva Volumes + - LVM Local PV Volumes + - ZFS Local PV Volumes + +### Key Bug Fixes + +### Backward Incompatibilities + +- Kubernetes 1.18 or higher release is recommended as this release uses features of Kubernetes that will not be compatible with older Kubernetes releases. +- OpenEBS has deprecated arch-specific container images in favor of multi-arch container images. For example, images like `cstor-pool-arm64:x.y.x` should be replaced with corresponding multi-arch image `cstor-pool:x.y.x`. + +### Component versions + +OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. 
The status of the various components as of v2.11.0 are as follows: + +- CSI Drivers + - [Mayastor](https://docs.openebs.io/docs/next/ugmayastor.html) 0.8.1 (beta) + - [cStor](https://github.com/openebs/cstor-operators) 2.11.0 (beta) + - [Jiva](https://github.com/openebs/jiva-operator) 2.11.0 (beta) + - [Local PV ZFS](https://github.com/openebs/zfs-localpv) 1.9.0 (stable) + - [Local PV LVM](https://github.com/openebs/lvm-localpv) 0.7.0 (beta) + - [Local PV Rawfile](https://github.com/openebs/rawfile-localpv) 0.5.0 (beta) + - [Local PV Partitions](https://github.com/openebs/device-localpv) 0.4.0 (alpha) +- Out-of-tree provisioners + - [Jiva](https://docs.openebs.io/docs/next/jiva.html) 2.11.0 (stable) + - [Local PV hostpath](https://docs.openebs.io/docs/next/uglocalpv-hostpath.html) 2.11.0 (stable) + - [Local PV device](https://docs.openebs.io/docs/next/uglocalpv-device.html) 2.11.0 (stable) + - [cStor](https://docs.openebs.io/docs/next/cstor.html) 2.11.0 (beta) + - [Dynamic NFS Volume](https://github.com/openebs/dynamic-nfs-provisioner) 0.5.0 (alpha) +- Other components + - [NDM](https://github.com/openebs/node-disk-manager) 1.6.0 (beta) + +**Additional details:** +- [Release Notes](https://github.com/openebs/openebs/releases/tag/v2.11.0) +- [Install Steps](/docs/next/quickstart.html) +- [Upgrade Steps](/docs/next/upgrade.html) + +## 2.10.0 - Jun 15 2021 + OpenEBS v2.10 is another maintenance release before moving towards 3.0 primarily focusing on enhancing the E2E tests, build, release workflows, and documentation. This release also includes enhancements to improve the user experience and fixes for bugs reported by users and E2E tools. There has been some significant progress made on the alpha features as well. --- diff --git a/docs/ugcstor-csi.md b/docs/ugcstor-csi.md index 99add44c9..537f21eda 100644 --- a/docs/ugcstor-csi.md +++ b/docs/ugcstor-csi.md @@ -6,15 +6,10 @@ sidebar_label: cStor ------
-OpenEBS configuration flow - -
-
- -This user guide section provides the operations needed to be performed by the User and the Admin for configuring cStor related tasks. +This user guide will help you to configure cStor storage and use cStor Volumes for running your stateful workloads. :::note - The recommended approach to provision cStor Pools is to use CStorPoolCluster(CSPC), the detailed steps have been provided in this document. However, OpenEBS also supports provisioning of cStor Pools using StoragePoolClaim (SPC). For detailed instructions, refer to the cStor User guide(SPC).
+  If you are an existing user of cStor and have set up cStor storage using StoragePoolClaim(SPC), we strongly recommend that you migrate to CStorPoolCluster(CSPC). CSPC-based cStor uses the Kubernetes CSI Driver, provides additional flexibility in how devices are used by cStor, and has better resiliency against node failures. For detailed instructions, refer to the cStor SPC to CSPC migration guide.
::: @@ -1510,4 +1505,4 @@ Follow the steps below to cleanup of a cStor setup. On successful cleanup you ca blockdevice-10ad9f484c299597ed1e126d7b857967 worker-node-1 21474836480 Unclaimed Active 21m17s blockdevice-3ec130dc1aa932eb4c5af1db4d73ea1b worker-node-2 21474836480 Unclaimed Active 21m12s - ``` \ No newline at end of file + ``` diff --git a/docs/ugcstor.md b/docs/ugcstor.md index 58294f640..9d8df6666 100644 --- a/docs/ugcstor.md +++ b/docs/ugcstor.md @@ -1,12 +1,17 @@ --- id: ugcstor -title: cStor User Guide -sidebar_label: cStor +title: cStor Guide +sidebar_label: SPC based cStor guide --- + ------ - :::note - With OpenEBS 2.0, the recommended approach to provision cStor Pools is to use cStorPoolCluster(CSPC). For detailed instructions on how to get started with new cStor Operators please refer to the [Quickstart guide on Github](https://github.com/openebs/cstor-operators). + :::Deprecation Notice + cStor `out-of-tree external` provisioners will be deprecated by Dec 2021 in favor of the corresponding CSI Drivers. The out of tree provisioners for cStor will stop working from Kubernetes 1.22 and forward as the version of the custom resources used by those provisioners will be deprecated. We strongly recommend you plan for migrating your volumes to [cStor CSI](https://github.com/openebs/upgrade/blob/master/docs/migration.md) as early as possible. + + For detailed instructions on how to get started with new cStor Operators please refer here. + + If you have any questions or need help with the migration please reach out to us on our Kubernetes Community slack channel [#openebs](https://kubernetes.slack.com/archives/CUAKPFU78). :::
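
For readers acting on the deprecation notice above, a quick way to confirm whether a cluster still has SPC-based cStor pools (and therefore needs the migration) is to list the relevant custom resources. This is only an illustrative check, assuming OpenEBS is installed in the default `openebs` namespace:

```
# Legacy SPC-based cStor pools (any output here means the migration applies to you)
kubectl get spc

# CSPC-based (CSI) cStor pools, if some have already been migrated or created
kubectl get cspc -n openebs

# cStor volumes currently being served
kubectl get cstorvolumes -n openebs
```

If `kubectl get spc` returns resources, plan the SPC to CSPC migration before upgrading to a Kubernetes release where the legacy provisioner stops working.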
@@ -1258,11 +1263,11 @@ Below table lists the storage policies supported by cStor. These policies should | cStor Storage Policy | Mandatory | Default | Purpose | | ------------------------------------------------------------ | --------- | --------------------------------------- | ------------------------------------------------------------ | | [ReplicaCount](#Replica-Count-Policy) | No | 3 | Defines the number of cStor volume replicas | -| [VolumeControllerImage](#Volume-Controller-Image-Policy) | | openebs/cstor-volume-mgmt:2.10.0 | Dedicated side car for command management like taking snapshots etc. Can be used to apply a specific issue or feature for the workload | -| [VolumeTargetImage](#Volume-Target-Image-Policy) | | openebs/cstor-istgt:2.10.0 | iSCSI protocol stack dedicated to the workload. Can be used to apply a specific issue or feature for the workload | +| [VolumeControllerImage](#Volume-Controller-Image-Policy) | | openebs/cstor-volume-mgmt:2.11.0 | Dedicated side car for command management like taking snapshots etc. Can be used to apply a specific issue or feature for the workload | +| [VolumeTargetImage](#Volume-Target-Image-Policy) | | openebs/cstor-istgt:2.11.0 | iSCSI protocol stack dedicated to the workload. Can be used to apply a specific issue or feature for the workload | | [StoragePoolClaim](#Storage-Pool-Claim-Policy) | Yes | N/A (a valid pool must be provided) | The cStorPool on which the volume replicas should be provisioned | | [VolumeMonitor](#Volume-Monitor-Policy) | | ON | When ON, a volume exporter sidecar is launched to export Prometheus metrics. | -| [VolumeMonitorImage](#Volume-Monitoring-Image-Policy) | | openebs/m-exporter:2.10.0 | Used when VolumeMonitor is ON. A dedicated metrics exporter to the workload. Can be used to apply a specific issue or feature for the workload | +| [VolumeMonitorImage](#Volume-Monitoring-Image-Policy) | | openebs/m-exporter:2.11.0 | Used when VolumeMonitor is ON. A dedicated metrics exporter to the workload. Can be used to apply a specific issue or feature for the workload | | [FSType](#Volume-File-System-Type-Policy) | | ext4 | Specifies the filesystem that the volume should be formatted with. 
Other values are `xfs` | | [TargetNodeSelector](#Target-NodeSelector-Policy) | | Decided by Kubernetes scheduler | Specify the label in `key: value` format to notify Kubernetes scheduler to schedule cStor target pod on the nodes that match label | | [TargetResourceLimits](#Target-ResourceLimits-Policy) | | Decided by Kubernetes scheduler | CPU and Memory limits to cStor target pod | @@ -1326,7 +1331,7 @@ metadata: annotations: cas.openebs.io/config: | - name: VolumeControllerImage - value: openebs/cstor-volume-mgmt:2.10.0 + value: openebs/cstor-volume-mgmt:2.11.0 - name: StoragePoolClaim value: "cstor-disk-pool" openebs.io/cas-type: cstor @@ -1345,7 +1350,7 @@ metadata: annotations: cas.openebs.io/config: | - name: VolumeTargetImage - value:openebs/cstor-istgt:2.10.0 + value:openebs/cstor-istgt:2.11.0 - name: StoragePoolClaim value: "cstor-disk-pool" openebs.io/cas-type: cstor @@ -1383,7 +1388,7 @@ metadata: annotations: cas.openebs.io/config: | - name: VolumeMonitorImage - value: openebs/m-exporter:2.10.0 + value: openebs/m-exporter:2.11.0 - name: StoragePoolClaim value: "cstor-sparse-pool" openebs.io/cas-type: cstor diff --git a/website/sidebars.json b/website/sidebars.json index 3cd9d3c1e..45a74b373 100644 --- a/website/sidebars.json +++ b/website/sidebars.json @@ -70,7 +70,7 @@ "Deprecated": [ "releases-1x", "releases-0x", - "d-cStorguide" + "ugcstor" ] } } diff --git a/website/versions.json b/website/versions.json index 53bebf532..2414b01c0 100644 --- a/website/versions.json +++ b/website/versions.json @@ -1,6 +1,7 @@ { "versions": { - "2.10.0": ["v2.10.0", "https://docs.openebs.io/"], + "2.11.0": ["v2.10.0", "https://docs.openebs.io/"], + "2.10.0": ["v2.10.0", "https://docs.openebs.io/v2100"], "2.9.0": ["v2.9.0", "https://docs.openebs.io/v290/"], "2.8.0": ["v2.8.0", "https://docs.openebs.io/v280/"], "2.7.0": ["v2.7.0", "https://docs.openebs.io/v270/"], From 5c1a7fac29f707fa72641264bd427b43c9e7b361 Mon Sep 17 00:00:00 2001 From: kmova Date: Sat, 17 Jul 2021 11:21:59 +0530 Subject: [PATCH 02/10] feat(cstor): update cstor guide with install steps Signed-off-by: kmova --- docs/ugcstor-csi.md | 195 ++++++++++++++++++++++++++------------------ docs/ugcstor.md | 2 +- 2 files changed, 118 insertions(+), 79 deletions(-) diff --git a/docs/ugcstor-csi.md b/docs/ugcstor-csi.md index 537f21eda..b611a4abb 100644 --- a/docs/ugcstor-csi.md +++ b/docs/ugcstor-csi.md @@ -13,71 +13,93 @@ This user guide will help you to configure cStor storage and use cStor Volumes f ::: +

Operations Overview

+ +* Install and Setup + - [Pre-requisites](#prerequisites) + - [Creating cStor storage pools](#creating-cstor-storage-pool) + - [Creating cStor storage classes](#creating-cstor-storage-classes) +* Launch Sample Application + - [Deploying a sample application](#deploying-a-sample-application) +* Advanced Topics + - [Expanding a cStor volume](#expanding-a-cstor-volume) + - [Snapshot and Clone of a cStor Volume](#snapshot-and-clone-of-a-cstor-volume) + - [Scaling up cStor pools](#scaling-cstor-pools) + - [Block Device Tagging](#block-device-tagging) + - [Tuning cStor Pools](#tuning-cstor-pools) + - [Tuning cStor Volumes](#tuning-cstor-volumes) +* Clean up + - [Cleaning up a cStor setup](#cstor-cleanup) -

Operations

- -[Creating cStor storage pools](#creating-cstor-storage-pool) +--- -[Creating cStor storage classes](#creating-cstor-storage-classes) -[Deploying a sample application](#deploying-a-sample-application) +### Prerequisites -[Scaling up cStor pools](#scaling-cstor-pools) +- cStor uses the raw block devices attached to the Kubernetes worker nodes to create cStor Pools. Applications will connect to cStor volumes using `iSCSI`. This requires you ensure the following: + * There are raw (unformatted) block devices attached to the Kubernetes worker nodes. The devices can be either direct attached devices (SSD/HDD) or cloud volumes (GPD, EBS) + * `iscsi` utilities are installed on all the worker nodes where Stateful applications will be launched. The steps for setting up the iSCSI utilities might vary depending on your Kubernetes distribution. Please see [prerequisites verification](/docs/next/prerequisites.html) -[Snapshot and Clone of a cStor Volume](#snapshot-and-clone-of-a-cstor-volume) - -[Expanding a cStor volume](#expanding-a-cstor-volume) - -[Block Device Tagging](#block-device-tagging) - -[Performance Tunings in cStor Pools](#performance-tunings-in-cstor-pools) - -[Performance Tunings in cStor Volumes](#performance-tunings-in-cstor-volumes) +- If you are setting up OpenEBS in a new cluster. You can use one of the following steps to install OpenEBS. If OpenEBS is already installed, skip this step. -[Cleaning up a cStor setup](#cstor-cleanup) - - + Using helm, + ``` + helm repo add openebs https://openebs.github.io/charts + helm repo update + helm install openebs --namespace openebs openebs/openebs ---set cstor.enabled=true -create-namespace + ``` + The above command will install all the default OpenEBS components along with cStor. + Using kubectl, + ``` + kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml + ``` + The above command will install all the required components for running cStor. + +- Enable cStor on already existing OpenEBS + + Using helm, you can enable cStor on top of your openebs installation as follows: + ``` + helm ls -n openebs + # Note the release name used for OpenEBS + # Upgrade the helm by enabling cStor + # helm upgrade [helm-release-name] [helm-chart-name] flags + helm upgrade openebs openebs/openebs --set cstor.enabled=true --namespace openebs + ``` -## Operations - -### Creating cStor storage pools - - - Prerequisites: + Using kubectl, + ``` + kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml + ``` + +- Verify cStor and NDM pods are running in your cluster. + + To get the status of the pods execute: -- The latest release of OpenEBS cStor must be installed using cStor Operator yaml. + ``` + kubectl get pod -n openebs + ``` - ``` - kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml - ``` - - All the NDM cStor operator pods must be in a running state. 
To get the status of the pods execute: - - ``` - kubectl get pod -n openebs - ``` - - Sample Output: - ``` - NAME READY STATUS RESTARTS AGE - cspc-operator-5fb7db848f-wgnq8 1/1 Running 0 6d7h - cvc-operator-7f7d8dc4c5-sn7gv 1/1 Running 0 6d7h - openebs-cstor-admission-server-7585b9659b-rbkmn 1/1 Running 0 6d7h - openebs-cstor-csi-controller-0 6/6 Running 0 6d7h - openebs-cstor-csi-node-dl58c 2/2 Running 0 6d7h - openebs-cstor-csi-node-jmpzv 2/2 Running 0 6d7h - openebs-cstor-csi-node-tfv45 2/2 Running 0 6d7h - openebs-ndm-gctb7 1/1 Running 0 6d7h - openebs-ndm-operator-7c8759dbb5-58zpl 1/1 Running 0 6d7h - openebs-ndm-sfczv 1/1 Running 0 6d7h - openebs-ndm-vgdnv 1/1 Running 0 6d7h - ``` + Sample Output: + ``` + NAME READY STATUS RESTARTS AGE + cspc-operator-5fb7db848f-wgnq8 1/1 Running 0 6d7h + cvc-operator-7f7d8dc4c5-sn7gv 1/1 Running 0 6d7h + openebs-cstor-admission-server-7585b9659b-rbkmn 1/1 Running 0 6d7h + openebs-cstor-csi-controller-0 6/6 Running 0 6d7h + openebs-cstor-csi-node-dl58c 2/2 Running 0 6d7h + openebs-cstor-csi-node-jmpzv 2/2 Running 0 6d7h + openebs-cstor-csi-node-tfv45 2/2 Running 0 6d7h + openebs-ndm-gctb7 1/1 Running 0 6d7h + openebs-ndm-operator-7c8759dbb5-58zpl 1/1 Running 0 6d7h + openebs-ndm-sfczv 1/1 Running 0 6d7h + openebs-ndm-vgdnv 1/1 Running 0 6d7h + ``` - Nodes must have disks attached to them. To get the list of attached block devices, execute: - ``` - kubectl get bd -n openebs - ``` + ``` + kubectl get bd -n openebs + ``` Sample Output: ``` @@ -85,33 +107,12 @@ This user guide will help you to configure cStor storage and use cStor Volumes f blockdevice-01afcdbe3a9c9e3b281c7133b2af1b68 worker-node-3 21474836480 Unclaimed Active 2m10s blockdevice-10ad9f484c299597ed1e126d7b857967 worker-node-1 21474836480 Unclaimed Active 2m17s blockdevice-3ec130dc1aa932eb4c5af1db4d73ea1b worker-node-2 21474836480 Unclaimed Active 2m12s - - ``` - -Creating a CStorPoolCluster:
- -- Get all the node labels present in the cluster with the following command, these node labels will be required to modify the CSPC yaml. ``` - kubectl get node --show-labels - ``` - Sample Output: - ``` - NAME STATUS ROLES AGE VERSION LABELS - - master Ready master 5d2h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master= - - worker-node-1 Ready 5d2h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-node-1,kubernetes.io/os=linux - worker-node-2 Ready 5d2h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-node-2,kubernetes.io/os=linux - worker-node-3 Ready 5d2h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-node-3,kubernetes.io/os=linux - ``` +### Creating cStor storage pools -- Modify the CSPC yaml to use the worker nodes. Use the value from labels kubernetes.io/hostname=< node_name >. This label value and node name could be different in some platforms. In this case, the label values and node names are: - kubernetes.io/hostname:"worker-node-1", kubernetes.io/hostname: "worker-node-2" and kubernetes.io/hostname: "worker-node-3". -- Modify CSPC yaml file to add a block device attached to the same node where the pool is to be provisioned.

-**Note:** The dataRaidGroupType: can either be set as stripe or mirror as per your requirement. In the following example it is configured as stripe.

- Sample CSPC yaml: +- You will need to create a Kubernetes Custom Resource called CStorPoolCluster, that includes details of the nodes and the devices on those nodes that must be used to setup cStor. You can start by copying the following Sample CSPC yaml into a file named `cspc.yaml`. ``` apiVersion: cstor.openebs.io/v1 @@ -145,6 +146,44 @@ spec: poolConfig: dataRaidGroupType: "stripe" ``` + +- Get all the node labels present in the cluster with the following command, these node labels will be required to modify the CSPC yaml. + ``` + kubectl get node --show-labels + ``` + + Sample Output: + ``` + NAME STATUS ROLES AGE VERSION LABELS + + master Ready master 5d2h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master= + + worker-node-1 Ready 5d2h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-node-1,kubernetes.io/os=linux + + worker-node-2 Ready 5d2h v1.20.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-node-2,kubernetes.io/os=linux + + worker-node-3 Ready 5d2h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-node-3,kubernetes.io/os=linux + ``` + +- Modify the CSPC yaml to use the worker nodes. Use the value from labels kubernetes.io/hostname=< node_name >. This label value and node name could be different in some platforms. In this case, the label values and node names are: + kubernetes.io/hostname:"worker-node-1", kubernetes.io/hostname: "worker-node-2" and kubernetes.io/hostname: "worker-node-3". + +- Modify CSPC yaml file to specify the block device attached to the above selected node where the pool is to be provisioned. You can use the following command to get the available block devices on each of the worker node: + ``` + kubectl get bd -n openebs + ``` + Sample Output: + + ``` + NAME NODENAME SIZE CLAIMSTATE STATUS AGE + blockdevice-01afcdbe3a9c9e3b281c7133b2af1b68 worker-node-3 21474836480 Unclaimed Active 2m10s + blockdevice-10ad9f484c299597ed1e126d7b857967 worker-node-1 21474836480 Unclaimed Active 2m17s + blockdevice-3ec130dc1aa932eb4c5af1db4d73ea1b worker-node-2 21474836480 Unclaimed Active 2m12s + ``` + +- The dataRaidGroupType: can either be set as stripe or mirror as per your requirement. In the following example it is configured as stripe. + +``` We have named the configuration YAML file as cspc.yaml. Execute the following command for CSPC creation, ``` @@ -792,7 +831,7 @@ Note that CSPI for node worker-node-3 is not created because:
-### Performance Tunings in cStor Pools +### Tuning cStor Pools Allow users to set available performance tunings in cStor Pools based on their workload. cStor pool(s) can be tuned via CSPC and is the recommended way to do it. Below are the tunings that can be applied:
@@ -1094,7 +1133,7 @@ spec: ```
-### Performance Tunings in cStor Volumes +### Tuning cStor Volumes Similar to tuning of the cStor Pool cluster, there are possible ways for tuning cStor volumes. cStor volumes can be provisioned using different policy configurations. However, cStorVolumePolicy needs to be created first. It must be created prior to creation of StorageClass as CStorVolumePolicy name needs to be specified to provision cStor volume based on configured policy. A sample StorageClass YAML that utilises cstorVolumePolicy is given below for reference:
``` diff --git a/docs/ugcstor.md b/docs/ugcstor.md index 9d8df6666..6a8a0e980 100644 --- a/docs/ugcstor.md +++ b/docs/ugcstor.md @@ -6,7 +6,7 @@ sidebar_label: SPC based cStor guide ------ - :::Deprecation Notice + :::note Deprecation Notice cStor `out-of-tree external` provisioners will be deprecated by Dec 2021 in favor of the corresponding CSI Drivers. The out of tree provisioners for cStor will stop working from Kubernetes 1.22 and forward as the version of the custom resources used by those provisioners will be deprecated. We strongly recommend you plan for migrating your volumes to [cStor CSI](https://github.com/openebs/upgrade/blob/master/docs/migration.md) as early as possible. For detailed instructions on how to get started with new cStor Operators please refer here. From 7963b5c0426ec26182b275c36d216fda8ab2ef9a Mon Sep 17 00:00:00 2001 From: kmova Date: Sat, 17 Jul 2021 11:31:56 +0530 Subject: [PATCH 03/10] feat(cstor): update cstor guide with install steps Signed-off-by: kmova --- docs/ugcstor-csi.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/ugcstor-csi.md b/docs/ugcstor-csi.md index b611a4abb..0fdac7e72 100644 --- a/docs/ugcstor-csi.md +++ b/docs/ugcstor-csi.md @@ -15,20 +15,20 @@ This user guide will help you to configure cStor storage and use cStor Volumes f

Operations Overview

-* Install and Setup +

Install and Setup

- [Pre-requisites](#prerequisites) - [Creating cStor storage pools](#creating-cstor-storage-pool) - [Creating cStor storage classes](#creating-cstor-storage-classes) -* Launch Sample Application +

Launch Sample Application

- [Deploying a sample application](#deploying-a-sample-application) -* Advanced Topics +

Advanced Topics

- [Expanding a cStor volume](#expanding-a-cstor-volume) - [Snapshot and Clone of a cStor Volume](#snapshot-and-clone-of-a-cstor-volume) - [Scaling up cStor pools](#scaling-cstor-pools) - [Block Device Tagging](#block-device-tagging) - [Tuning cStor Pools](#tuning-cstor-pools) - [Tuning cStor Volumes](#tuning-cstor-volumes) -* Clean up +

Clean up

- [Cleaning up a cStor setup](#cstor-cleanup) --- @@ -112,7 +112,7 @@ This user guide will help you to configure cStor storage and use cStor Volumes f ### Creating cStor storage pools -- You will need to create a Kubernetes Custom Resource called CStorPoolCluster, that includes details of the nodes and the devices on those nodes that must be used to setup cStor. You can start by copying the following Sample CSPC yaml into a file named `cspc.yaml`. +You will need to create a Kubernetes custom resource called CStorPoolCluster, specifying the details of the nodes and the devices on those nodes that must be used to setup cStor pools. You can start by copying the following Sample CSPC yaml into a file named `cspc.yaml` and modifying it with details from your cluster. ``` apiVersion: cstor.openebs.io/v1 From bdacbcad20f926bebea2de7e1a66f534ba2b542d Mon Sep 17 00:00:00 2001 From: kmova Date: Sat, 17 Jul 2021 12:26:10 +0530 Subject: [PATCH 04/10] feat(cstor): update cstor guide with install steps Signed-off-by: kmova --- docs/ugcstor-csi.md | 91 ++++++++++++++++++++++++--------------------- 1 file changed, 49 insertions(+), 42 deletions(-) diff --git a/docs/ugcstor-csi.md b/docs/ugcstor-csi.md index 0fdac7e72..97464aecb 100644 --- a/docs/ugcstor-csi.md +++ b/docs/ugcstor-csi.md @@ -16,23 +16,28 @@ This user guide will help you to configure cStor storage and use cStor Volumes f

Operations Overview

Install and Setup

- - [Pre-requisites](#prerequisites) - - [Creating cStor storage pools](#creating-cstor-storage-pool) - - [Creating cStor storage classes](#creating-cstor-storage-classes) +- [Pre-requisites](#prerequisites) +- [Creating cStor storage pools](#creating-cstor-storage-pool) +- [Creating cStor storage classes](#creating-cstor-storage-classes) +

Launch Sample Application

- - [Deploying a sample application](#deploying-a-sample-application) +- [Deploying a sample application](#deploying-a-sample-application) +

Advanced Topics

- - [Expanding a cStor volume](#expanding-a-cstor-volume) - - [Snapshot and Clone of a cStor Volume](#snapshot-and-clone-of-a-cstor-volume) - - [Scaling up cStor pools](#scaling-cstor-pools) - - [Block Device Tagging](#block-device-tagging) - - [Tuning cStor Pools](#tuning-cstor-pools) - - [Tuning cStor Volumes](#tuning-cstor-volumes) +- [Expanding a cStor volume](#expanding-a-cstor-volume) +- [Snapshot and Clone of a cStor Volume](#snapshot-and-clone-of-a-cstor-volume) +- [Scaling up cStor pools](#scaling-cstor-pools) +- [Block Device Tagging](#block-device-tagging) +- [Tuning cStor Pools](#tuning-cstor-pools) +- [Tuning cStor Volumes](#tuning-cstor-volumes) +

Clean up

- - [Cleaning up a cStor setup](#cstor-cleanup) - ---- +- [Cleaning up a cStor setup](#cstor-cleanup) +
+
+
+
### Prerequisites @@ -46,7 +51,7 @@ This user guide will help you to configure cStor storage and use cStor Volumes f ``` helm repo add openebs https://openebs.github.io/charts helm repo update - helm install openebs --namespace openebs openebs/openebs ---set cstor.enabled=true -create-namespace + helm install openebs --namespace openebs openebs/openebs --set cstor.enabled=true --create-namespace ``` The above command will install all the default OpenEBS components along with cStor. @@ -183,28 +188,28 @@ spec: - The dataRaidGroupType: can either be set as stripe or mirror as per your requirement. In the following example it is configured as stripe. -``` We have named the configuration YAML file as cspc.yaml. Execute the following command for CSPC creation, - ``` kubectl apply -f cspc.yaml ``` + To verify the status of created CSPC, execute: ``` kubectl get cspc -n openebs ``` + Sample Output: ``` NAME HEALTHYINSTANCES PROVISIONEDINSTANCES DESIREDINSTANCES AGE cstor-disk-pool 3 3 3 2m2s ``` + Check if the pool instances report their status as ONLINE using the below command: - ``` kubectl get cspi -n openebs ``` + Sample Output: - ``` NAME HOSTNAME ALLOCATED FREE CAPACITY STATUS AGE cstor-disk-pool-vn92 worker-node-1 60k 9900M 9900M ONLINE 2m17s @@ -212,35 +217,37 @@ cstor-disk-pool-al65 worker-node-2 60k 9900M 9900M ONLINE cstor-disk-pool-y7pn worker-node-3 60k 9900M 9900M ONLINE 2m17s ``` Once all the pods are in running state, these pool instances can be used for creation of cStor volumes. +
### Creating cStor storage classes StorageClass definition is an important task in the planning and execution of OpenEBS storage. The real power of CAS architecture is to give an independent or a dedicated storage engine like cStor for each workload, so that granular policies can be applied to that storage engine to tune the behaviour or performance as per the workload's need. - #### Steps to create a cStor StorageClass: - 1. Decide the CStorPoolCluster for which you want to create a Storage Class. - 2. Decide the replicaCount based on your requirement/workloads. OpenEBS doesn't restrict the replica count to set, but a maximum of 5 replicas are allowed. It depends how users configure it, but for the availability of volumes at least (n/2 + 1) replicas should be up and connected to the target, where n is the replicaCount. The Replica Count should be always less than or equal to the number of cStor Pool Instances(CSPIs). The following are some example cases: -
    -
  • If a user configured replica count as 2, then always 2 replicas should be available to perform operations on volume.
  • -
  • If a user configured replica count as 3 it should require at least 2 replicas should be available for volume to be operational.
  • -
  • If a user configured replica count as 5 it should require at least 3 replicas should be available for volume to be operational.
  • -
- 3. Create a YAML spec file cstor-csi-disk.yaml using the template given below. Update the pool, replica count and other policies. By using this sample configuration YAML, a StorageClass will be created with 3 OpenEBS cStor replicas and will configure themselves on the pool instances. + +#### Steps to create a cStor StorageClass: + +1. Decide the CStorPoolCluster for which you want to create a Storage Class. Let us say you pick up `cstor-disk-pool` that you created in the above step. +2. Decide the replicaCount based on your requirement/workloads. OpenEBS doesn't restrict the replica count to set, but a maximum of 5 replicas are allowed. It depends how users configure it, but for the availability of volumes at least (n/2 + 1) replicas should be up and connected to the target, where n is the replicaCount. The Replica Count should be always less than or equal to the number of cStor Pool Instances(CSPIs). The following are some example cases: + - If a user configured replica count as 2, then always 2 replicas should be available to perform operations on volume. + - If a user configured replica count as 3 it should require at least 2 replicas should be available for volume to be operational. + - If a user configured replica count as 5 it should require at least 3 replicas should be available for volume to be operational. +3. Create a YAML spec file cstor-csi-disk.yaml using the template given below. Update the pool, replica count and other policies. By using this sample configuration YAML, a StorageClass will be created with 3 OpenEBS cStor replicas and will configure themselves on the pool instances. - ``` - kind: StorageClass - apiVersion: storage.k8s.io/v1 - metadata: - name: cstor-csi-disk - provisioner: cstor.csi.openebs.io - allowVolumeExpansion: true - parameters: - cas-type: cstor - # cstorPoolCluster should have the name of the CSPC - cstorPoolCluster: cstor-disk-pool - # replicaCount should be <= no. of CSPI - replicaCount: "3" - ``` +``` +kind: StorageClass +apiVersion: storage.k8s.io/v1 +metadata: + name: cstor-csi-disk +provisioner: cstor.csi.openebs.io +allowVolumeExpansion: true +parameters: + cas-type: cstor + # cstorPoolCluster should have the name of the CSPC + cstorPoolCluster: cstor-disk-pool + # replicaCount should be <= no. of CSPI created in the selected CSPC + replicaCount: "3" +``` + To deploy the YAML, execute: ``` kubectl apply -f cstor-csi-disk.yaml @@ -257,7 +264,7 @@ cstor-csi cstor.csi.openebs.io ```
- ### Deploying a sample application +### Deploying a sample application To deploy a sample application using the above created CSPC and StorageClass, a PVC, that utilises the created StorageClass, needs to be deployed. Given below is an example YAML for a PVC which uses the SC created earlier. From 11984373182a8a9482c7e7ea5e2803dd9279c980 Mon Sep 17 00:00:00 2001 From: kmova Date: Sat, 17 Jul 2021 12:49:51 +0530 Subject: [PATCH 05/10] feat(cstor): update cstor guide with install steps Signed-off-by: kmova --- docs/ugcstor-csi.md | 58 ++++++++++++++++++++++++++------------------- 1 file changed, 33 insertions(+), 25 deletions(-) diff --git a/docs/ugcstor-csi.md b/docs/ugcstor-csi.md index 97464aecb..52e38d3f8 100644 --- a/docs/ugcstor-csi.md +++ b/docs/ugcstor-csi.md @@ -13,7 +13,7 @@ This user guide will help you to configure cStor storage and use cStor Volumes f ::: -

Operations Overview

+## Operations Overview

Install and Setup

- [Pre-requisites](#prerequisites) @@ -39,7 +39,7 @@ This user guide will help you to configure cStor storage and use cStor Volumes f

-### Prerequisites +## Prerequisites - cStor uses the raw block devices attached to the Kubernetes worker nodes to create cStor Pools. Applications will connect to cStor volumes using `iSCSI`. This requires you ensure the following: * There are raw (unformatted) block devices attached to the Kubernetes worker nodes. The devices can be either direct attached devices (SSD/HDD) or cloud volumes (GPD, EBS) @@ -69,7 +69,7 @@ This user guide will help you to configure cStor storage and use cStor Volumes f # Note the release name used for OpenEBS # Upgrade the helm by enabling cStor # helm upgrade [helm-release-name] [helm-chart-name] flags - helm upgrade openebs openebs/openebs --set cstor.enabled=true --namespace openebs + helm upgrade openebs openebs/openebs --set cstor.enabled=true --reuse-values --namespace openebs ``` Using kubectl, @@ -115,7 +115,7 @@ This user guide will help you to configure cStor storage and use cStor Volumes f ``` -### Creating cStor storage pools +## Creating cStor storage pools You will need to create a Kubernetes custom resource called CStorPoolCluster, specifying the details of the nodes and the devices on those nodes that must be used to setup cStor pools. You can start by copying the following Sample CSPC yaml into a file named `cspc.yaml` and modifying it with details from your cluster. @@ -220,11 +220,11 @@ Once all the pods are in running state, these pool instances can be used for cre
-### Creating cStor storage classes +## Creating cStor storage classes StorageClass definition is an important task in the planning and execution of OpenEBS storage. The real power of CAS architecture is to give an independent or a dedicated storage engine like cStor for each workload, so that granular policies can be applied to that storage engine to tune the behaviour or performance as per the workload's need. -#### Steps to create a cStor StorageClass: +Steps to create a cStor StorageClass 1. Decide the CStorPoolCluster for which you want to create a Storage Class. Let us say you pick up `cstor-disk-pool` that you created in the above step. 2. Decide the replicaCount based on your requirement/workloads. OpenEBS doesn't restrict the replica count to set, but a maximum of 5 replicas are allowed. It depends how users configure it, but for the availability of volumes at least (n/2 + 1) replicas should be up and connected to the target, where n is the replicaCount. The Replica Count should be always less than or equal to the number of cStor Pool Instances(CSPIs). The following are some example cases: @@ -264,7 +264,7 @@ cstor-csi cstor.csi.openebs.io ```
-### Deploying a sample application +## Deploying a sample application To deploy a sample application using the above created CSPC and StorageClass, a PVC, that utilises the created StorageClass, needs to be deployed. Given below is an example YAML for a PVC which uses the SC created earlier. @@ -337,7 +337,7 @@ Fri May 28 05:00:31 UTC 2021 ```
-### Scaling cStor pools +## Scaling cStor pools Once the cStor storage pools are created you can scale-up your existing cStor pool. To scale-up the pool size, you need to edit the CSPC YAML that was used for creation of CStorPoolCluster. @@ -348,7 +348,7 @@ Scaling up can done by two methods: **Note:** The dataRaidGroupType: can either be set as stripe or mirror as per your requirement. In the following example it is configured as stripe. -#### Adding new nodes(with new disks) to the existing CSPC +### Adding new nodes(with new disks) to the existing CSPC A new node spec needs to be added to previously deployed YAML, ``` @@ -419,7 +419,7 @@ cspc-disk-pool-rt4k worker-node-4 28800M 28800056k false ONLINE ``` As a result of this, we can see that a new pool have been added, increasing the number of pools to 4 -#### Adding new disks to existing nodes +### Adding new disks to existing nodes A new blockDeviceName under blockDevices needs to be added to previously deployed YAML. Execute the following command to edit the CSPC, ``` kubectl edit cspc -n openebs cstor-disk-pool @@ -464,13 +464,14 @@ spec:
-### Snapshot and Clone of a cStor Volume +## Snapshot and Clone of a cStor Volume An OpenEBS snapshot is a set of reference markers for data at a particular point in time. A snapshot act as a detailed table of contents, with accessible copies of data that user can roll back to the required point of instance. Snapshots in OpenEBS are instantaneous and are managed through kubectl. During the installation of OpenEBS, a snapshot-controller and a snapshot-provisioner are setup which assist in taking the snapshots. During the snapshot creation, snapshot-controller creates VolumeSnapshot and VolumeSnapshotData custom resources. A snapshot-provisioner is used to restore a snapshot as a new Persistent Volume(PV) via dynamic provisioning. - #### Creating a cStor volume Snapshot - + +### Creating a cStor volume Snapshot + 1. Before proceeding to create a cStor volume snapshot and use it further for restoration, it is necessary to create a VolumeSnapshotClass. Copy the following YAML specification into a file called snapshot_class.yaml. ``` kind: VolumeSnapshotClass @@ -547,7 +548,8 @@ Status: The SnapshotContentName identifies the VolumeSnapshotContent object which serves this snapshot. The Ready To Use parameter indicates that the Snapshot has been created successfully and can be used to create a new PVC. **Note:** All cStor snapshots should be created in the same namespace of source PVC. -#### Cloning a cStor Snapshot + +### Cloning a cStor Snapshot Once the snapshot is created, you can use it to create a PVC. In order to restore a specific snapshot, you need to create a new PVC that refers to the snapshot. Below is an example of a YAML file that restores and creates a PVC from a snapshot. @@ -581,7 +583,7 @@ restore-cstor-pvc Bound pvc-2f2d65fc-0784-11ea-b887-42010a80006c ```
-### Expanding a cStor volume +## Expanding a cStor volume OpenEBS cStor introduces support for expanding a PersistentVolume using the CSI provisioner. Provided cStor is configured to function as a CSI provisioner, you can expand PVs that have been created by cStor CSI Driver. This feature is supported with Kubernetes versions 1.16 and above.
@@ -737,7 +739,7 @@ pvc-849bd646-6d3f-4a87-909e-2416d4e00904 10Gi RWO Delete ```
-### Block Device Tagging +## Block Device Tagging NDM provides you with an ability to reserve block devices to be used for specific applications via adding tag(s) to your block device(s). This feature can be used by cStor operators to specify the block devices which should be consumed by cStor pools and conversely restrict anyone else from using those block devices. This helps in protecting against manual errors in specifying the block devices in the CSPC yaml by users. 1. Consider the following block devices in a Kubernetes cluster, they will be used to provision a storage pool. List the labels added to these block devices, @@ -838,7 +840,7 @@ Note that CSPI for node worker-node-3 is not created because:
-### Tuning cStor Pools +## Tuning cStor Pools Allow users to set available performance tunings in cStor Pools based on their workload. cStor pool(s) can be tuned via CSPC and is the recommended way to do it. Below are the tunings that can be applied:
@@ -1140,7 +1142,7 @@ spec: ```
-### Tuning cStor Volumes +## Tuning cStor Volumes Similar to tuning of the cStor Pool cluster, there are possible ways for tuning cStor volumes. cStor volumes can be provisioned using different policy configurations. However, cStorVolumePolicy needs to be created first. It must be created prior to creation of StorageClass as CStorVolumePolicy name needs to be specified to provision cStor volume based on configured policy. A sample StorageClass YAML that utilises cstorVolumePolicy is given below for reference:
``` @@ -1255,7 +1257,7 @@ The list of policies that can be configured are as follows: -#### Replica Affinity to create a volume replica on specific pool +### Replica Affinity to create a volume replica on specific pool For StatefulSet applications, to distribute single replica volume on specific cStor pool we can use replicaAffinity enabled scheduling. This feature should be used with delay volume binding i.e. volumeBindingMode: WaitForFirstConsumer in StorageClass. When volumeBindingMode is set to WaitForFirstConsumer the csi-provisioner waits for the scheduler to select a node. The topology of the selected node will then be set as the first entry in preferred list and will be used by the volume controller to create the volume replica on the cstor pool scheduled on preferred node. ``` @@ -1287,7 +1289,7 @@ spec: ``` -#### Volume Target Pod Affinity +### Volume Target Pod Affinity The Stateful workloads access the OpenEBS storage volume by connecting to the Volume Target Pod. Target Pod Affinity policy can be used to co-locate volume target pod on the same node as the workload. This feature makes use of the Kubernetes Pod Affinity feature that is dependent on the Pod labels. For this labels need to be added to both, Application and volume Policy. Given below is a sample YAML of CStorVolumePolicy having target-affinity label using kubernetes.io/hostname as a topologyKey in CStorVolumePolicy: @@ -1325,7 +1327,7 @@ metadata: ``` -#### Volume Tunable +### Volume Tunable Performance tunings based on the workload can be set using Volume Policy. The list of tunings that can be configured are given below: - queueDepth:
@@ -1350,7 +1352,7 @@ spec: ``` **Note:**These Policy tunable configurations can be changed for already provisioned volumes by editing the corresponding volume CStorVolumeConfig resources. -#### Memory and CPU Resources QoS +### Memory and CPU Resources QoS CStorVolumePolicy can also be used to configure the volume Target pod resource requests and limits to ensure QoS. Given below is a sample YAML that configures the target container's resource requests and limits, and auxResources configuration for the sidecar containers. To know more about Resource configuration in Kubernetes, click here. @@ -1405,7 +1407,7 @@ To apply the patch, ``` kubectl patch cvc -n openebs -p "$(cat patch-resources-cvc.yaml)" pvc-0478b13d-b1ef-4cff-813e-8d2d13bcb316 --type merge ``` -#### Toleration for target pod to ensure scheduling of target pods on tainted nodes +### Toleration for target pod to ensure scheduling of target pods on tainted nodes This Kubernetes feature allows users to taint the node. This ensures no pods are be scheduled to it, unless a pod explicitly tolerates the taint. This Kubernetes feature can be used to reserve nodes for specific pods by adding labels to the desired node(s). @@ -1429,7 +1431,7 @@ spec: effect: "NoSchedule" ``` -#### Priority class for volume target deployment +### Priority class for volume target deployment Priority classes can help in controlling the Kubernetes schedulers decisions to favor higher priority pods over lower priority pods. The Kubernetes scheduler can even preempt lower priority pods that are running, so that pending higher priority pods can be scheduled. Setting pod priority also prevents lower priority workloads from impacting critical workloads in the cluster, especially in cases where the cluster starts to reach its resource capacity. To know more about PriorityClasses in Kubernetes, click here. @@ -1448,7 +1450,8 @@ spec: target: priorityClassName: "storage-critical" ``` -### Cleaning up a cStor setup + +## Cleaning up a cStor setup Follow the steps below to cleanup of a cStor setup. On successful cleanup you can reuse the cluster's disks/block devices for other storage engines. @@ -1552,3 +1555,8 @@ Follow the steps below to cleanup of a cStor setup. On successful cleanup you ca blockdevice-3ec130dc1aa932eb4c5af1db4d73ea1b worker-node-2 21474836480 Unclaimed Active 21m12s ``` +## Troubleshooting + +* The volumes remains in `Init` state, even though pool pods are running. This can happen due to the pool pods failing to connect to Kubernetes API server. Check the logs of cstor pool pods. Restarting the pool pod can fix this issue. This is seen to happen in cases where cstor control plane is deleted and re-installed, while the pool pods were running. + + From 9e7abbfae81f7ad443f522c66817307af31078b5 Mon Sep 17 00:00:00 2001 From: kmova Date: Sat, 17 Jul 2021 13:31:15 +0530 Subject: [PATCH 06/10] feat(cstor): update cstor guide with install steps Signed-off-by: kmova --- docs/ugcstor-csi.md | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/docs/ugcstor-csi.md b/docs/ugcstor-csi.md index 52e38d3f8..60dee7d6b 100644 --- a/docs/ugcstor-csi.md +++ b/docs/ugcstor-csi.md @@ -9,20 +9,23 @@ sidebar_label: cStor This user guide will help you to configure cStor storage and use cStor Volumes for running your stateful workloads. :::note - If you are an existing user of cStor and have setup cStor storage using StoragePoolClaim(SPC), we strongly recommend you to migrate to using CStorPoolCluster(CSPC). 
CSPC based cStor uses Kubernetes CSI Driver, provides additional flexibility in how devices are used by cStor and has better resiliency against node failures. For detailed instructions, refer to the cStor SPC to CSPC migration guide.
+ If you are an existing user of cStor and have [setup cStor storage using StoragePoolClaim(SPC)](docs/next/ugcstor.html), we strongly recommend you to migrate to using CStorPoolCluster(CSPC). CSPC based cStor uses Kubernetes CSI Driver, provides additional flexibility in how devices are used by cStor and has better resiliency against node failures. For detailed instructions, refer to the cStor SPC to CSPC migration guide.
::: -## Operations Overview +## Content Overview

Install and Setup

- [Pre-requisites](#prerequisites) -- [Creating cStor storage pools](#creating-cstor-storage-pool) +- [Creating cStor storage pools](#creating-cstor-storage-pools) - [Creating cStor storage classes](#creating-cstor-storage-classes)

Launch Sample Application

- [Deploying a sample application](#deploying-a-sample-application) +

Troubleshooting

+- [Troubleshooting cStor setup](#troubleshooting) +

Advanced Topics

- [Expanding a cStor volume](#expanding-a-cstor-volume) - [Snapshot and Clone of a cStor Volume](#snapshot-and-clone-of-a-cstor-volume) @@ -32,7 +35,7 @@ This user guide will help you to configure cStor storage and use cStor Volumes f - [Tuning cStor Volumes](#tuning-cstor-volumes)

Clean up

-- [Cleaning up a cStor setup](#cstor-cleanup) +- [Cleaning up a cStor setup](#cleaning-up-a-cstor-setup)

@@ -266,7 +269,7 @@ cstor-csi cstor.csi.openebs.io ## Deploying a sample application -To deploy a sample application using the above created CSPC and StorageClass, a PVC, that utilises the created StorageClass, needs to be deployed. Given below is an example YAML for a PVC which uses the SC created earlier. +To deploy a sample application using the above created CSPC and StorageClass, a PVC, that utilises the created StorageClass, needs to be deployed. Given below is an example YAML for a PVC which uses the SC created earlier. ``` kind: PersistentVolumeClaim From 564d07efeb7926619cf253c8973b00dd562b7518 Mon Sep 17 00:00:00 2001 From: kmova Date: Sat, 17 Jul 2021 22:18:29 +0530 Subject: [PATCH 07/10] update 2.11 enhancements Signed-off-by: kmova --- docs/releases.md | 13 ++++++++++++- docs/upgrade.md | 6 +++--- 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/docs/releases.md b/docs/releases.md index d87a5c92c..de5ba53f5 100644 --- a/docs/releases.md +++ b/docs/releases.md @@ -20,11 +20,22 @@ If you have any questions or need help with the migration please reach out to us --- ### Key Improvements -- [kubectl plugin for openebs](https://github.com/openebs/openebsctl/blob/develop/README.md) has been enhanced to provide additional information about OpenEBS storage components like: +- [CLI](https://github.com/openebs/openebsctl/blob/develop/README.md) has been enhanced to provide additional information about OpenEBS storage components like: - Block devices managed by OpenEBS (`kubectl openebs get bd`) - Jiva Volumes - LVM Local PV Volumes - ZFS Local PV Volumes +- [Monitoring helm chart](https://github.com/openebs/monitoring) has been updated to include a workload dashboard for LVM Local PV. In addition new alert rules related to PVC status are supported. +- [LVM Local PV](https://github.com/openebs/lvm-localpv) has been enhanced to allow users to configure the size that should be reserved for LVM snapshots. By default, the size reserved for a snapshot is equal to the size of the volume. In cases, where snapshots are created for backup purposes, the snapshots may not require the entire space. This feature helps in creating snapshots on VGs that don't have enough space to reserve full capacity for snapshots. +- [LVM Local PV](https://github.com/openebs/lvm-localpv) now allows for specifying the custom topology keys via ENV variable, instead of caching the values on restart. The caching can result in driver storing an out-dated topology key, if the administrator misses to restart the driver pods. Specifying the topology key via ENV allows users to know the current key and if there is a change, requires for a ENV modification that will force a restart of all the drivers. +- [NFS Provisioner](https://github.com/openebs/dynamic-nfs-provisioner) has been updated with several new features like: + - Ability to configure the LeaseTime and GraceTime for the NFS server to tune the restart times + - Added a prometheus metrics end point to report volume creation and failure events + - Added a configuration option to allow users to specify the GUID to set on the NFS server to allow non-root applications to access NFS share + - Allow specifying a different namespace than provisioner namespace to create NFS volume related objects. + - Allow specifying the node affinity for NFS server deployment +- [Rawfile Local PV](https://github.com/openebs/rawfile-localpv) has been enhanced to support xfs filesystem. 
+ ### Key Bug Fixes diff --git a/docs/upgrade.md b/docs/upgrade.md index 53a457d92..14e28b359 100644 --- a/docs/upgrade.md +++ b/docs/upgrade.md @@ -6,15 +6,15 @@ sidebar_label: Upgrade ------ -Latest stable version of OpenEBS is 2.10.0. Check the release notes [here](https://github.com/openebs/openebs/releases/tag/v2.10.0). +Latest stable version of OpenEBS is 2.11.0. Check the release notes [here](https://github.com/openebs/openebs/releases/tag/v2.11.0). -Upgrade to the latest OpenEBS 2.10.0 version is supported only from 1.0.0 and later. The steps for upgrading from can be found [here](https://github.com/openebs/openebs/blob/master/k8s/upgrades/README.md). +Upgrade to the latest OpenEBS 2.11.0 version is supported only from 1.0.0 and later. The steps for upgrading from can be found [here](https://github.com/openebs/openebs/blob/master/k8s/upgrades/README.md). :::note -The community e2e pipelines verify upgrade testing only from non-deprecated releases (1.8 and higher) to 2.10. If you are running on release older than 1.8, OpenEBS recommends you upgrade to the latest version as soon as possible. +The community e2e pipelines verify upgrade testing only from non-deprecated releases (1.10 and higher) to 2.10. If you are running on release older than 1.10, OpenEBS recommends you upgrade to the latest version as soon as possible. ::: :::note From 7b0eacbc5c3ec24d6569b489abcf46aa011da921 Mon Sep 17 00:00:00 2001 From: kmova Date: Sun, 18 Jul 2021 09:45:10 +0530 Subject: [PATCH 08/10] update bugfixes from 2.11 Signed-off-by: kmova --- docs/releases.md | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/docs/releases.md b/docs/releases.md index de5ba53f5..6cd50fb8e 100644 --- a/docs/releases.md +++ b/docs/releases.md @@ -20,14 +20,14 @@ If you have any questions or need help with the migration please reach out to us --- ### Key Improvements -- [CLI](https://github.com/openebs/openebsctl/blob/develop/README.md) has been enhanced to provide additional information about OpenEBS storage components like: +- Enhanced [CLI](https://github.com/openebs/openebsctl/blob/develop/README.md) to provide additional information about OpenEBS storage components like: - Block devices managed by OpenEBS (`kubectl openebs get bd`) - Jiva Volumes - LVM Local PV Volumes - ZFS Local PV Volumes -- [Monitoring helm chart](https://github.com/openebs/monitoring) has been updated to include a workload dashboard for LVM Local PV. In addition new alert rules related to PVC status are supported. -- [LVM Local PV](https://github.com/openebs/lvm-localpv) has been enhanced to allow users to configure the size that should be reserved for LVM snapshots. By default, the size reserved for a snapshot is equal to the size of the volume. In cases, where snapshots are created for backup purposes, the snapshots may not require the entire space. This feature helps in creating snapshots on VGs that don't have enough space to reserve full capacity for snapshots. -- [LVM Local PV](https://github.com/openebs/lvm-localpv) now allows for specifying the custom topology keys via ENV variable, instead of caching the values on restart. The caching can result in driver storing an out-dated topology key, if the administrator misses to restart the driver pods. Specifying the topology key via ENV allows users to know the current key and if there is a change, requires for a ENV modification that will force a restart of all the drivers. 
+- Added a new Stateful workload dashboard to [Monitoring helm chart](https://github.com/openebs/monitoring) to display the CPU, RAM and Filesystem stats of a given Pod. This dashboard currently supports fetching details for LVM Local PV. In addition new alert rules related to PVC status are supported. +- Enhanced [LVM Local PV](https://github.com/openebs/lvm-localpv) snapshot support by allowing users to configure the size that should be reserved for LVM snapshots. By default, the size reserved for a snapshot is equal to the size of the volume. In cases, where snapshots are created for backup purposes, the snapshots may not require the entire space. This feature helps in creating snapshots on VGs that don't have enough space to reserve full capacity for snapshots. +- Enhanced the way custom topology keys can be specified for [LVM Local PV](https://github.com/openebs/lvm-localpv). Prior to this enhancement, LVM driver would load the topology keys from node labels and cache them and if someone modified the labels and missed to restart the driver pods, there could be an impact to volume scheduling. This enhancement requires users to specify the topology key via ENV allowing users to know the current key and if there is a change, requires for a ENV modification that will force a restart of all the drivers. - [NFS Provisioner](https://github.com/openebs/dynamic-nfs-provisioner) has been updated with several new features like: - Ability to configure the LeaseTime and GraceTime for the NFS server to tune the restart times - Added a prometheus metrics end point to report volume creation and failure events @@ -35,13 +35,20 @@ If you have any questions or need help with the migration please reach out to us - Allow specifying a different namespace than provisioner namespace to create NFS volume related objects. - Allow specifying the node affinity for NFS server deployment - [Rawfile Local PV](https://github.com/openebs/rawfile-localpv) has been enhanced to support xfs filesystem. - +- Enhanced Jiva and cStor CSI drivers to handle split brain condition that could cause the Kubelet to attach the volume on new node while still mounted on disconnected node. The CSI drivers have been enhanced to allow iSCSI login connection only from one node at any given time. ### Key Bug Fixes +- Fixed an issue in Jiva Volume where issue the jiva replica STS keeps crashing due to change in the cluster domain. +- Fixed an issue in Jiva Volume that was causing log flooding while fetching volume status using Service DNS. Switched to using controller IP. +- Fixed an issue in ZFS Local Volumes that was causing an intermittent crash of controller pod due erroneously accessing a variable. +- Fixed an issue in Device Local PV causing a crash due to a race condition between creating partition and clearing a partition. +- Several usability fixes to documentation and helm charts for various engines. + ### Backward Incompatibilities - Kubernetes 1.18 or higher release is recommended as this release uses features of Kubernetes that will not be compatible with older Kubernetes releases. +- Kubernetes 1.19.12 or higher is recommended for using Rawfile Local PV - OpenEBS has deprecated arch-specific container images in favor of multi-arch container images. For example, images like `cstor-pool-arm64:x.y.x` should be replaced with corresponding multi-arch image `cstor-pool:x.y.x`. 
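The image deprecation called out above only involves swapping the image name; the tag stays the same. A small, hypothetical pod spec illustrating the substitution (real cStor pool pods are created by the operators, so this is only to show the rename; the pod and container names are placeholders):

```
apiVersion: v1
kind: Pod
metadata:
  name: image-rename-example     # placeholder; pool pods are operator-managed
spec:
  containers:
    - name: cstor-pool
      # before: openebs/cstor-pool-arm64:x.y.x   (arch-specific, deprecated)
      # after : the multi-arch image with the same release tag
      # (x.y.x is the release-tag placeholder used in the note above)
      image: openebs/cstor-pool:x.y.x
```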
### Component versions From 855fea2369742c27c13142f1822faa2e0ecedf07 Mon Sep 17 00:00:00 2001 From: kmova Date: Sun, 18 Jul 2021 09:59:56 +0530 Subject: [PATCH 09/10] fix nitpick comments on release notes Signed-off-by: kmova --- docs/releases.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/releases.md b/docs/releases.md index 6cd50fb8e..b753b1c71 100644 --- a/docs/releases.md +++ b/docs/releases.md @@ -32,13 +32,13 @@ If you have any questions or need help with the migration please reach out to us - Ability to configure the LeaseTime and GraceTime for the NFS server to tune the restart times - Added a prometheus metrics end point to report volume creation and failure events - Added a configuration option to allow users to specify the GUID to set on the NFS server to allow non-root applications to access NFS share - - Allow specifying a different namespace than provisioner namespace to create NFS volume related objects. + - Allow specifying a different namespace than provisioner namespace to create NFS volume related objects - Allow specifying the node affinity for NFS server deployment - [Rawfile Local PV](https://github.com/openebs/rawfile-localpv) has been enhanced to support xfs filesystem. - Enhanced Jiva and cStor CSI drivers to handle split brain condition that could cause the Kubelet to attach the volume on new node while still mounted on disconnected node. The CSI drivers have been enhanced to allow iSCSI login connection only from one node at any given time. ### Key Bug Fixes -- Fixed an issue in Jiva Volume where issue the jiva replica STS keeps crashing due to change in the cluster domain. +- Fixed an issue in Jiva Volume Replica STS keeps crashing due to change in the cluster domain and failed attempts to access the controller. - Fixed an issue in Jiva Volume that was causing log flooding while fetching volume status using Service DNS. Switched to using controller IP. - Fixed an issue in ZFS Local Volumes that was causing an intermittent crash of controller pod due erroneously accessing a variable. - Fixed an issue in Device Local PV causing a crash due to a race condition between creating partition and clearing a partition. From 43e498ee65926751426d2f8f8e9a8756b803a13e Mon Sep 17 00:00:00 2001 From: kmova Date: Sun, 18 Jul 2021 11:43:28 +0530 Subject: [PATCH 10/10] update install instructions Signed-off-by: kmova --- docs/installation.md | 97 +++++++++++++++++++++----------------------- 1 file changed, 47 insertions(+), 50 deletions(-) diff --git a/docs/installation.md b/docs/installation.md index 31d593702..4f301a296 100644 --- a/docs/installation.md +++ b/docs/installation.md @@ -6,38 +6,24 @@ sidebar_label: Installation ------
-This guide will help you to customize and install OpenEBS. If this is your first time installing OpenEBS, make sure that your Kubernetes nodes are meet the [required prerequisites](/docs/next/prerequisites.html). +This guide will help you to customize and install OpenEBS. +## Prerequisites +If this is your first time installing OpenEBS, make sure that your Kubernetes nodes meet the [required prerequisites](/docs/next/prerequisites.html). At a high level OpenEBS requires: -## Verify that you have the admin context - -For installation of OpenEBS, cluster-admin user context is a must. OpenEBS installs service accounts and custom resource definitions that are only allowed for cluster administrators. - -Use the `kubectl auth can-i` commands to verify that you have the cluster-admin context. You can use the following commands to verify if you have access: - -``` -kubectl auth can-i 'create' 'namespace' -A -kubectl auth can-i 'create' 'crd' -A -kubectl auth can-i 'create' 'sa' -A -kubectl auth can-i 'create' 'clusterrole' -A -``` - -If you do not have admin permissions to your cluster, please check with your Kubernetes cluster administrator to help with installing OpenEBS or if you are the owner of the cluster, check out the steps to create a new admin context and use it for installing OpenEBS. +- Verify that you have the admin context. If you do not have admin permissions to your cluster, please check with your Kubernetes cluster administrator to help with installing OpenEBS or if you are the owner of the cluster, check out the steps to create a new admin context and use it for installing OpenEBS. +- You have Kubernetes 1.18 version or higher. +- Each storage engine may have few additional requirements like having: + - iSCSI initiator utils installed for Jiva and cStor volumes + - Depending on the managed Kubernetes platform like Rancher or MicroK8s - set up the right bind mounts + - Decide which of the devices on the nodes should be used by OpenEBS or if you need to create LVM Volume Groups or ZFS Pools +- Join [OpenEBS community on Kubernetes slack](docs/next/support.html). ## Installation through helm -Verify helm is installed and helm repo is updated. See helm docs for setting up helm v3. Installed helm version can be obtained by using the following command: - -``` -helm version -``` -Example output: - -
-version.BuildInfo{Version:"v3.6.0", GitCommit:"7f2df6467771a75f5646b7f12afb408590ed1755", GitTreeState:"clean", GoVersion:"go1.16.3"} -
+Verify helm is installed and helm repo is updated. You need helm 3.2 or more. Setup helm repository ``` @@ -45,16 +31,23 @@ helm repo add openebs https://openebs.github.io/charts helm repo update ``` -OpenEBS provides several options that you can customize during install like specifying the directory where hostpath volume data is stored, specifying the nodes on which OpenEBS components should be deployed, and so forth. The default OpenEBS helm chart will only install Local PV hostpath and Jiva data engines. +OpenEBS provides several options that you can customize during install like: +- specifying the directory where hostpath volume data is stored or +- specifying the nodes on which OpenEBS components should be deployed, and so forth. -Please refer to OpenEBS helm chart documentation for full list of customizable options and using cStor and other flavors of OpenEBS data engines by setting the correct helm values. +The default OpenEBS helm chart will only install Local PV hostpath and Jiva data engines. Please refer to OpenEBS helm chart documentation for full list of customizable options and using cStor and other flavors of OpenEBS data engines by setting the correct helm values. Install OpenEBS helm chart with default values. ``` helm install openebs --namespace openebs openebs/openebs --create-namespace ``` -The above commands will install OpenEBS in `openebs` namespace and chart name as `openebs` +The above commands will install OpenEBS Jiva and Local PV components in `openebs` namespace and chart name as `openebs`. To install and enable other engines you can modified the above command as follows: + +- cStor + ``` + helm install openebs --namespace openebs openebs/openebs --create-namespace --set cstor.enabled=true + ``` To view the chart ``` @@ -68,16 +61,25 @@ As a next step [verify](#verifying-openebs-installation) your installation and d OpenEBS provides a list of YAMLs that will allow you to easily customize and run OpenEBS in your Kubernetes cluster. For custom installation, download the **openebs-operator** YAML file, update the configurations and use the customized YAML for installation in the below `kubectl` command. -To continue with **default installation mode**, use the following command to install OpenEBS. OpenEBS is installed in `openebs` namespace. +To continue with default installation mode, use the following command to install OpenEBS. OpenEBS is installed in `openebs` namespace. ``` kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml ``` -The above command installs Jiva and Local PV components. You have to run additional YAMLs to enable other engines as follows: -- cStor (`https://openebs.github.io/charts/cstor-operator.yaml`) -- Local PV ZFS ( `kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml` ) -- Local PV LVM ( `kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml` ) +The above command installs Jiva and Local PV components. 
To install and enable other engines you will need to run additional command like: +- cStor + ``` + kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml + ``` +- Local PV ZFS + ``` + kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml + ``` +- Local PV LVM + ``` + kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml + ``` ## Verifying OpenEBS installation @@ -127,22 +129,6 @@ openebs-device openebs.io/local openebs-hostpath openebs.io/local 64s openebs-jiva-default openebs.io/provisioner-iscsi 64s openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 64s -standard (default) kubernetes.io/gce-pd 6m41s - - - - -**Verify Jiva default pool - default** - - -``` -kubectl get sp -``` - -Following is an example output. - -
NAME AGE -default 2m
@@ -150,7 +136,7 @@ default 2m
-For a testing your OpenEBS installation, you can use the below default storage classes +For testing your OpenEBS installation, you can use the below default storage classes - `openebs-jiva-default` for provisioning Jiva Volume (this uses `default` pool which means the data replicas are created in the /var/openebs/ directory of the Jiva replica pod) @@ -166,6 +152,17 @@ You can follow through the below user guides for each of the engines to use stor ### Set cluster-admin user context +For installation of OpenEBS, cluster-admin user context is a must. OpenEBS installs service accounts and custom resource definitions that are only allowed for cluster administrators. + +Use the `kubectl auth can-i` commands to verify that you have the cluster-admin context. You can use the following commands to verify if you have access: + +``` +kubectl auth can-i 'create' 'namespace' -A +kubectl auth can-i 'create' 'crd' -A +kubectl auth can-i 'create' 'sa' -A +kubectl auth can-i 'create' 'clusterrole' -A +``` + If there is no cluster-admin user context already present, create one and use it. Use the following command to create the new context. ```