This repository has been archived by the owner on Mar 3, 2022. It is now read-only.

chore(build): add spellchecker github action (#928)
Add a GitHub Action that detects spelling errors
and reports them on the PR.

Terms that are not in the standard dictionary can be
added to one of the following files (a short example
follows the list):
- hack/cspell-contributors.txt - company and user names
- hack/cspell-ignore.txt - random text that is part of code output
- hack/cspell-words.txt - technology words
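
A sketch of that workflow, with a placeholder word and commit message (not taken from this change):

```
# Accept a technology term that cspell flags in the docs
echo "kubelet" >> hack/cspell-words.txt
git add hack/cspell-words.txt
git commit -s -m "chore(docs): add kubelet to the spellcheck dictionary"
```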


Signed-off-by: kmova <[email protected]>
kmova committed May 16, 2021
1 parent a7f7239 commit 6281c9c
Showing 39 changed files with 769 additions and 157 deletions.
29 changes: 29 additions & 0 deletions .github/workflows/markdown.yml
@@ -0,0 +1,29 @@
name: Markdown Linter
on: [push, pull_request]
defaults:
  run:
    shell: bash

jobs:
  #TODO: Need to push a baseline commit to fix existing issues
  # linting:
  #   name: "Markdown linting"
  #   runs-on: ubuntu-latest
  #   steps:
  #     - uses: actions/checkout@v2
  #       name: Check out the code
  #     - name: Lint Code Base
  #       uses: docker://avtodev/markdown-lint:v1
  #       with:
  #         args: "**/*.md"
  spellchecking:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        name: Check out the code
      - uses: actions/setup-node@v1
        name: Run spell check
        with:
          node-version: "12"
      - run: npm install -g cspell
      - run: cspell --config ./cSpell.json "**/*.md"
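
The two `run` steps above are plain CLI invocations, so the same check can be reproduced locally before pushing. A minimal sketch, assuming Node.js 12+ and npm are already installed:

```
# Mirror the CI job on a developer machine
npm install -g cspell
cspell --config ./cSpell.json "**/*.md"
# cspell exits non-zero when it finds unknown words, so this also serves as a pre-push gate
```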
4 changes: 2 additions & 2 deletions NOTICE.md
@@ -1,5 +1,5 @@
The source code developed for the OpenEBS Project is licensed under Apache 2.0.

However, the OpenEBS project contains unmodified/modified subcomponents from other Open Source Projects with separate copyright notices and license terms.
However, the OpenEBS project contains unmodified/modified sub components from other Open Source Projects with separate copyright notices and license terms.

Your use of the source code for these subcomponents is subject to the terms and conditions as defined by those source projects.
Your use of the source code for these sub components is subject to the terms and conditions as defined by those source projects.
45 changes: 45 additions & 0 deletions cSpell.json
@@ -0,0 +1,45 @@
{
  "version": "0.2",
  "language": "en-US",
  "flagWords": [
    "hte"
  ],
  "ignorePaths": [
    "*.lock",
    "*.json",
    "*.toml",
    "*.conf",
    "*.py",
    "*.txt",
    "*.sh",
    "website",
    "hack",
    "Dockerfile"
  ],
  "ignoreWords": [],
  "dictionaryDefinitions": [
    {
      "name": "cspell-words",
      "path": "./hack/cspell-words.txt"
    },
    {
      "name": "cspell-contributors",
      "path": "./hack/cspell-contributors.txt"
    },
    {
      "name": "cspell-ignore",
      "path": "./hack/cspell-ignore.txt"
    }
  ],
  "dictionaries": [
    "cspell-words",
    "cspell-contributors",
    "cspell-ignore"
  ],
  "languageSettings": [
    {
      "languageId": "*",
      "dictionaries": []
    }
  ]
}
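
The three files referenced in `dictionaryDefinitions` are plain-text word lists with one accepted term per line. A sketch of how such a list is created (the entries are placeholders, not the project's real lists):

```
# Illustrative only: real entries live under hack/ in the repository
cat > hack/cspell-words.txt <<'EOF'
cstor
jiva
statefulset
EOF
```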
4 changes: 2 additions & 2 deletions docs/architecture.md
@@ -98,7 +98,7 @@ The OpenEBS data plane is responsible for the actual volume IO path. A storage e

### Jiva

Jiva storage engine is developed with Rancher's LongHorn and gotgt as the base. Jiva engine is written in GO language and runs in the user space. LongHorn controller synchronously replicates the incoming IO to the LongHorn replicas. The replica considers a Linux sparse file as the foundation for building the storage features such as thin provisioning, snapshotting, rebuilding etc. More details on Jiva architecture are written [here](/docs/next/jiva.html).
Jiva storage engine is developed with Rancher's LongHorn and gotgt as the base. Jiva engine is written in GO language and runs in the user space. LongHorn controller synchronously replicates the incoming IO to the LongHorn replicas. The replica considers a Linux sparse file as the foundation for building the storage features such as thin provisioning, snapshots, rebuilding etc. More details on Jiva architecture are written [here](/docs/next/jiva.html).

### cStor

@@ -134,7 +134,7 @@ Prometheus is installed as a micro service by the OpenEBS operator during the in

### WeaveScope

WeaveScope is a well-regarded cloud-native visualisation solution in Kubernetes to view metrics, tags and metadata within the context of a process, container, service or host. Node Disk Manager components, volume pods, and other persistent storage structures of Kubernetes have been enabled for WeaveScope integration. With these enhancements, exploration and traversal of these components have become significantly easier.
WeaveScope is a well-regarded cloud-native visualization solution in Kubernetes to view metrics, tags and metadata within the context of a process, container, service or host. Node Disk Manager components, volume pods, and other persistent storage structures of Kubernetes have been enabled for WeaveScope integration. With these enhancements, exploration and traversal of these components have become significantly easier.



4 changes: 2 additions & 2 deletions docs/casengines.md
@@ -19,7 +19,7 @@ Operators or administrators typically choose a storage engine with a specific so

OpenEBS provides three types of storage engines.

1. **Jiva** - Jiva is the first storage engine that was released in 0.1 version of OpenEBS and is the most simple to use. It is built in GoLang and uses LongHorn and gotgt stacks inside. Jiva runs entirely in user space and provides standard block storage capabilities such as synchronous replication. Jiva is suitable for smaller capacity workloads in general and not suitable when extensive snapshotting and cloning features are a major need. Read more details of Jiva [here](/docs/next/jiva.html)
1. **Jiva** - Jiva is the first storage engine that was released in 0.1 version of OpenEBS and is the most simple to use. It is built in GoLang and uses LongHorn and gotgt stacks inside. Jiva runs entirely in user space and provides standard block storage capabilities such as synchronous replication. Jiva is suitable for smaller capacity workloads in general and not suitable when extensive snapshots and cloning features are a major need. Read more details of Jiva [here](/docs/next/jiva.html)

2. **cStor** - cStor is the most recently released storage engine, which became available from 0.7 version of OpenEBS. cStor is very robust, provides data consistency and supports enterprise storage features like snapshots and clones very well. It also comes with a robust storage pool feature for comprehensive storage management both in terms of capacity and performance. Together with NDM (Node Disk Manager), cStor provides complete set of persistent storage features for stateful applications on Kubernetes. Read more details of cStor [here](/docs/next/cstor.html)

@@ -226,7 +226,7 @@ A short summary is provided below.

### [Jiva User Guide](/docs/next/jivaguide.html)

### [Local PV Hospath User Guide](/docs/next/uglocalpv-hostpath.html)
### [Local PV Hostpath User Guide](/docs/next/uglocalpv-hostpath.html)

### [Local PV Device User Guide](/docs/next/uglocalpv-device.html)

2 changes: 1 addition & 1 deletion docs/cassandra.md
@@ -8,7 +8,7 @@ sidebar_label: Cassandra
<img src="/docs/assets/o-cassandra.png" alt="OpenEBS and Cassandra" style="width:400px;">


This tutorial provides detailed instructions to run a Kudo operator based Cassandra StatefulsSets with OpenEBS storage and perform some simple database operations to verify the successful deployment and it's performance benchmark.
This tutorial provides detailed instructions to run a Kudo operator based Cassandra StatefulSets with OpenEBS storage and perform some simple database operations to verify the successful deployment and it's performance benchmark.

## Introduction

6 changes: 3 additions & 3 deletions docs/cstor.md
@@ -104,7 +104,7 @@ cStor Pool is an important component in the storage management. It is fundamenta

**Add a new pool instance** : A new pool instance may need to be added for many different reasons. The steps for expanding a cStor pool to a new node can be found [here](/docs/next/ugcstor.html#expanding-cStor-pool-to-a-new-node). Few example scenarios where need of cStor pool expansion to new nodes are:

- New node is being added to the Kubernetes cluster and the blockedvices in new node needs to be considered for persistent volume storage.
- New node is being added to the Kubernetes cluster and the blockdevices in new node needs to be considered for persistent volume storage.
- An existing pool instance is full in capacity and it cannot be expanded as either local disks or network disks are not available. Hence, a new pool instance may be needed for hosting the new volume replicas.
- An existing pool instance is fully utilized in performance and it cannot be expanded either because CPU is saturated or more local disks are not available or more network disks or not available. A new pool instance may be added and move some of the existing volumes to the new pool instance to free up some disk IOs on this instance.

@@ -304,7 +304,7 @@ Following are most commonly observed areas of troubleshooting

The cause of high memory consumption of Kubelet is seen on Fedora 29 mainly due to the following.

There are 3 modules are involved - `cstor-isgt`, `kubelet` and `iscsiInitiator(iscsiadm)`.
There are 3 modules are involved - `cstor-istgt`, `kubelet` and `iscsiInitiator(iscsiadm)`.
kubelet runs iscsiadm command to do discovery on cstor-istgt. If there is any delay in receiving response of discovery opcode (either due to network or delay in processing on target side), iscsiadm retries few times and gets into infinite loop dumping error messages as below:

iscsiadm: Connection to Discovery Address 127.0.0.1 failed
@@ -339,7 +339,7 @@ This issue is fixed in 0.8.1 version.
| <font size="5">cStor volume features</font> | |
| Expanding the size of a cStor volume using CSI provisioner (Alpha) | 1.2.0 |
| CSI driver support(Alpha) | 1.1.0 |
| Snapshot and Clone of cStor volume provisoned via CSI provisioner(Alpha) | 1.4.0 |
| Snapshot and Clone of cStor volume provisioned via CSI provisioner(Alpha) | 1.4.0 |
| Scaling up of cStor Volume Replica | 1.3.0 |
| Scaling down of cStor Volume Replica | 1.4.0 |

14 changes: 7 additions & 7 deletions docs/faq.md
@@ -24,7 +24,7 @@ sidebar_label: FAQs

[What is the default OpenEBS Reclaim policy?](#default-reclaim-policy)

[Why NDM daemon set required privileged mode?](#why-ndm-priviledged)
[Why NDM daemon set required privileged mode?](#why-ndm-privileged)

[Is OpenShift supported?](#openebs-in-openshift)

@@ -36,7 +36,7 @@ sidebar_label: FAQs

[How backup and restore is working with OpenEBS volumes?](#backup-restore-openebs-volumes)

[Why customized parameters set on default OpenEBS StorageClasses are not getting persisted?](#customized-values-not-peristed-after-reboot)
[Why customized parameters set on default OpenEBS StorageClasses are not getting persisted?](#customized-values-not-persisted-after-reboot)

[Why NDM listens on host network?](#why-ndm-listens-on-host-network)

@@ -76,7 +76,7 @@ sidebar_label: FAQs

[How OpenEBS detects disks for creating cStor Pool?](#how-openebs-detects-disks-for-creating-cstor-pool)

[Can I provision OpenEBS volume if the request in PVC is more than the available physical capacity of the pools in the Storage Nodes?](#provision-pvc-higher-than-physical-sapce)
[Can I provision OpenEBS volume if the request in PVC is more than the available physical capacity of the pools in the Storage Nodes?](#provision-pvc-higher-than-physical-space)

[What is the difference between cStor Pool creation using manual method and auto method?](#what-is-the-difference-between-cstor-pool-creation-using-manual-method-and-auto-method)

@@ -135,7 +135,7 @@ To determine exactly where your data is physically stored, you can run the follo
The output displays the following pods.

```
IO Controller: pvc-ee171da3-07d5-11e8-a5be-42010a8001be-ctrl-6798475d8c-7dcqd
IO Controller: pvc-ee171da3-07d5-11e8-a5be-42010a8001be-ctrl-6798475d8c-7node
Replica 1: pvc-ee171da3-07d5-11e8-a5be-42010a8001be-rep-86f8b8c758-hls6s
Replica 2: pvc-ee171da3-07d5-11e8-a5be-42010a8001be-rep-86f8b8c758-tr28f
```
@@ -191,7 +191,7 @@ For jiva, from 0.8.0 version, the data is deleted via scrub jobs. The completed



<h3><a class="anchor" aria-hidden="true" id="why-ndm-priviledged"></a>Why NDM Daemon set required privileged mode?</h3>
<h3><a class="anchor" aria-hidden="true" id="why-ndm-privileged"></a>Why NDM Daemon set required privileged mode?</h3>


Currently, NDM Daemon set runs in the privileged mode. NDM requires privileged mode because it requires access to `/dev` and `/sys` directories for monitoring the devices attached and also to fetch the details of the attached device using various probes.
@@ -314,7 +314,7 @@ OpenEBS cStor volume is working based on cStor/ZFS snapshot using Velero. For Op



<h3><a class="anchor" aria-hidden="true" id="customized-values-not-peristed-after-reboot"></a>Why customized parameters set on default OpenEBS StorageClasses are not getting persisted?</h3>
<h3><a class="anchor" aria-hidden="true" id="customized-values-not-persisted-after-reboot"></a>Why customized parameters set on default OpenEBS StorageClasses are not getting persisted?</h3>


The customized parameters set on default OpenEBS StorageClasses will not persist after restarting `maya-apiserver` pod or restarting the node where `maya-apiserver` pod is running. StorageClasses created by maya-apiserver are owned by it and it tries to overwrite them upon its creation.
@@ -552,7 +552,7 @@ It is also possible to customize by adding more disk types associated with your



<h3><a class="anchor" aria-hidden="true" id="provision-pvc-higher-than-physical-sapce"></a> Can I provision OpenEBS volume if the request in PVC is more than the available physical capacity of the pools in the Storage Nodes?</h3>
<h3><a class="anchor" aria-hidden="true" id="provision-pvc-higher-than-physical-space"></a> Can I provision OpenEBS volume if the request in PVC is more than the available physical capacity of the pools in the Storage Nodes?</h3>


As of 0.8.0, the user is allowed to create PVCs that cross the available capacity of the pools in the Nodes. In the future release, it will validate with an option `overProvisioning=false`, the PVC request should be denied if there is not enough available capacity to provision the volume.
10 changes: 5 additions & 5 deletions docs/features.md
@@ -11,7 +11,7 @@ sidebar_label: Features and Benefits
| [Containerized storage for containers](#containerized-storage-for-containers) | [Granular policies per stateful workload](#granular-policies-per-stateful-workload) |
| [Synchronous replication](#synchronous-replication) | [Avoid Cloud Lock-in](#avoid-cloud-lock-in) |
| [Snapshots and clones](#snapshots-and-clones) | [Reduced storage TCO up to 50%](#reduced-storage-tco-up-to-50) |
| [Backup and restore](#backup-and-restore) | [Native HCI on Kubernetes](#natively-hyperconvergenced-on-kubernetes) |
| [Backup and restore](#backup-and-restore) | [Native HCI on Kubernetes](#natively-hyperconverged-on-kubernetes) |
| [Prometheus metrics and Grafana dashboard](#prometheus-metrics-for-workload-tuning) | [High availability - No Blast Radius](#high-availability) |


@@ -59,7 +59,7 @@ Copy-on-write snapshots are another optional and popular feature of OpenEBS. Whe

<img src="/docs/assets/svg/f-backup.svg" alt="Backup and Restore Icon" style="width:200px;">

The backup and restore of OpenEBS volumes works with Kubernetes backup and restore solutions such as Velero (formerly Heptio Ark) via open source OpenEBS Velero-plugins. Data backup to object storage targets such as AWS S3, GCP Object Storage or MinIO are frequently deployed using the OpenEBS incremental snapshot capability. This storage level snapshotting and backup saves a significant amount of bandwidth and storage space as only incremental data is used for backup.
The backup and restore of OpenEBS volumes works with Kubernetes backup and restore solutions such as Velero (formerly Heptio Ark) via open source OpenEBS Velero-plugins. Data backup to object storage targets such as AWS S3, GCP Object Storage or MinIO are frequently deployed using the OpenEBS incremental snapshot capability. This storage level snapshot and backup saves a significant amount of bandwidth and storage space as only incremental data is used for backup.

<hr>

@@ -98,9 +98,9 @@ OpenEBS is cloud native storage for stateful applications on Kubernetes where "c

### Avoid Cloud Lock-in

<img src="/docs/assets/svg/b-no-lockin.svg" alt="Avoid Cloud Lockin Icon" style="width:200px;">
<img src="/docs/assets/svg/b-no-lockin.svg" alt="Avoid Cloud Lock-in Icon" style="width:200px;">

Even though Kubernetes provides an increasingly ubiquitous control plane, concerns about data gravity resulting in lock-in and otherwise inhibiting the benefits of Kubernetes remain. With OpenEBS, the data can be written to the OpenEBS layer - if cStor, Jiva or Mayastor are used - and if so OpenEBS acts as a data abstraction layer. Using this data abstraction layer, data can be much more easily moved amongst Kubernetes enviroments, whether they are on premise and attached to traditional storage systems or in the cloud and attached to local storage or managed storage services.
Even though Kubernetes provides an increasingly ubiquitous control plane, concerns about data gravity resulting in lock-in and otherwise inhibiting the benefits of Kubernetes remain. With OpenEBS, the data can be written to the OpenEBS layer - if cStor, Jiva or Mayastor are used - and if so OpenEBS acts as a data abstraction layer. Using this data abstraction layer, data can be much more easily moved amongst Kubernetes environments, whether they are on premise and attached to traditional storage systems or in the cloud and attached to local storage or managed storage services.

<hr>

@@ -131,7 +131,7 @@ On most clouds, block storage is charged based on how much is purchased and not

<br>

### Natively Hyperconvergenced on Kubernetes
### Natively Hyperconverged on Kubernetes

<img src="/docs/assets/svg/b-hci.svg" alt="Natively HCI on K8s Icon" style="width:200px;">

10 changes: 5 additions & 5 deletions docs/installation.md
@@ -89,7 +89,7 @@ Installed helm version can be obtained by using the following command:
```
helm version
```
Example ouptut:
Example output:

<div class="co">
Client: &version.Version{SemVer:"v2.16.8", GitCommit:"145206680c1d5c28e3fcf30d6f596f0ba84fcb47", GitTreeState:"clean"}
@@ -112,13 +112,13 @@ See [helm docs](https://helm.sh/docs/intro/install/#from-script) for setting up
```
helm version
```
Example ouptut:
Example output:

<div class="co">
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
</div>

OpenEBS instalaltion with helm v3 can be done by 2 ways:
OpenEBS installation with helm v3 can be done by 2 ways:

**Option 1:** Helm v3 takes the current namespace from the local kube config and use that namespace the next time the user executes helm commands. If it is not present, the default namespace is used. Assign the `openebs` namespace to the current context and run the following commands to install openebs in `openebs` namespace.

@@ -162,7 +162,7 @@ To view the chart
```
helm ls -n openebs
```
The above commans will install OpenEBS in `openebs` namespace and chart name as `openebs`
The above commands will install OpenEBS in `openebs` namespace and chart name as `openebs`
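
As an editorial sketch of what those commands look like (not taken from this diff; the repository URL and chart name are assumed from the upstream OpenEBS chart, and `--create-namespace` needs helm v3.2+):

```
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace
helm ls -n openebs   # confirm the release, as shown above
```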


**Note:**
@@ -676,7 +676,7 @@ metadata:
namespace: openebs
data:
# udev-probe is default or primary probe which should be enabled to run ndm
# filterconfigs contails configs of filters - in their form fo include
# filterconfigs contains configs of filters - in their form fo include
# and exclude comma separated strings
node-disk-manager.config: |
probeconfigs: