This repository has been archived by the owner on Mar 3, 2022. It is now read-only.
refactor(docs): rearrange content in intro section for better readability (#940)

* added an overall architecture diagram
* pushed the examples to use cases
* split the features and benefits into specific pages
* split the community and commercial support pages

Signed-off-by: kmova <[email protected]>
Showing 9 changed files with 296 additions and 271 deletions.
**docs/benefits.md** (new file, +106 lines)
---
id: benefits
title: OpenEBS Benefits
sidebar_label: Benefits
---

------

<br>

- [Open Source Cloud Native storage for Kubernetes](#open-source-cloud-native-storage-for-kubernetes)
- [Granular policies per stateful workload](#granular-policies-per-stateful-workload)
- [Avoid Cloud Lock-in](#avoid-cloud-lock-in)
- [Reduced storage TCO up to 50%](#reduced-storage-tco-up-to-50)
- [Native HCI on Kubernetes](#natively-hyperconverged-on-kubernetes)
- [High availability - No Blast Radius](#high-availability)

<br>

:::tip
For information on how OpenEBS is used in production, visit the [use cases](/docs/next/usecases.html) section, or see examples of users and their stories in the OpenEBS Adopters list [here](https://github.com/openebs/openebs/blob/master/ADOPTERS.md), where you can also share your experience.
:::

### Open Source Cloud Native Storage for Kubernetes

<img src="/docs/assets/svg/b-cn.svg" alt="Cloud Native Storage Icon" style="width:200px;" align="right">

OpenEBS is cloud native storage for stateful applications on Kubernetes, where "cloud native" means following a loosely coupled architecture. As such, the usual benefits of cloud native, loosely coupled architectures apply: for example, developers and DevOps architects can use standard Kubernetes skills and utilities to configure, use, and manage their persistent storage needs.

Some key aspects that make OpenEBS different from traditional storage solutions:

- Built using a _micro-services architecture_, just like the applications it serves. OpenEBS is itself deployed as a set of containers on Kubernetes worker nodes, and it uses Kubernetes itself to orchestrate and manage its components.
- Built completely in userspace, making it highly portable across _any OS/platform_.
- Completely intent-driven, inheriting the same principles that drive the _ease of use_ of Kubernetes.
- Supports a range of storage engines, so developers can deploy the storage technology appropriate to their application design objectives. Distributed applications like Cassandra can use the _LocalPV_ engine for lowest-latency writes. Monolithic applications like MySQL and PostgreSQL can use _Mayastor built using NVMe and SPDK_ or _cStor based on ZFS_ for resilience. Streaming applications like Kafka can use the NVMe engine [Mayastor](https://github.com/openebs/Mayastor) for best performance in edge environments. A minimal sketch of engine selection follows this list.
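Engine selection is declared per StorageClass, so each workload can bind to the engine that suits it. Below is a minimal sketch for the LocalPV hostpath engine; the class name and `BasePath` value are assumptions for illustration:

```yaml
# Sketch: a StorageClass selecting the OpenEBS LocalPV hostpath engine.
# The class name and BasePath below are illustrative assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-example
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

A workload that needs replication instead would simply reference a class backed by cStor, Jiva or Mayastor; the application manifests stay unchanged.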

<br>
<br>
<hr>

### Avoid Cloud Lock-in

<img src="/docs/assets/svg/b-no-lockin.svg" alt="Avoid Cloud Lock-in Icon" style="width:200px;" align="right">

Even though Kubernetes provides an increasingly ubiquitous control plane, concerns remain about data gravity resulting in lock-in and otherwise inhibiting the benefits of Kubernetes. With OpenEBS, data can be written to the OpenEBS layer - when cStor, Jiva or Mayastor is used - in which case OpenEBS acts as a data abstraction layer. Using this abstraction layer, data can be moved much more easily between Kubernetes environments, whether they are on premises and attached to traditional storage systems, or in the cloud and attached to local storage or managed storage services.

<br>
<br>
<hr>

### Granular Policies Per Stateful Workload

<img src="/docs/assets/svg/b-granular.svg" alt="Workload Granular Policies Icon" style="width:200px;" align="right">

One reason for the rise of cloud native, loosely coupled architectures is that they enable loosely coupled teams: small teams that move faster, free of most cross-functional dependencies, thereby unlocking innovation and customer responsiveness. OpenEBS likewise unlocks small teams by letting them retain their autonomy through deploying their own storage system. Practically, this means storage parameters are monitored on a per-workload and per-volume basis, and storage policies and settings are declared to achieve the desired result for a given workload. Policies are tested and tuned with only the particular workload in mind, while other workloads remain unaffected. Workloads - and teams - remain loosely coupled, as sketched below.
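In practice, a team declares its policies on a StorageClass dedicated to its own workload. A hedged sketch using the legacy cStor engine; the class name, pool name and tunable value are illustrative assumptions:

```yaml
# Sketch: per-workload storage policy on a dedicated StorageClass.
# The class name, pool name and QueueDepth value are illustrative assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo-cstor-tuned
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: QueueDepth        # tuned for this one workload only
        value: "32"
provisioner: openebs.io/provisioner-iscsi
```

Changing these values affects only the volumes provisioned from this class, so other teams' workloads are untouched.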

<br>
<br>
<hr>

### Reduced Storage TCO up to 50%

<img src="/docs/assets/svg/b-lowtco.svg" alt="Reduced Storage TCO Icon" style="width:200px;" align="right">

On most clouds, block storage is charged based on how much is purchased, not on how much is used; capacity is often over-provisioned to achieve higher performance and to remove the risk of disruption when capacity is fully utilized. The thin provisioning capabilities of OpenEBS can pool local or cloud storage and then grow the data volumes of stateful applications as needed. Storage can be added on the fly, without disruption to the volumes exposed to workloads or applications. Some users have reported savings in excess of 60% from the use of OpenEBS thin provisioning. A hedged sketch follows.
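For illustration, a claim can request a large logical capacity against a thin-provisioned pool whose physical capacity is grown later; the claim and class names below are hypothetical:

```yaml
# Sketch: a PVC whose logical size can exceed the current physical pool
# capacity on a thin-provisioned engine. Names are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  storageClassName: openebs-cstor-thin
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi   # logical size; pool disks can be added on the fly
```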

<br>
<br>
<hr>

### Natively Hyperconverged on Kubernetes

<img src="/docs/assets/svg/b-hci.svg" alt="Natively HCI on K8s Icon" style="width:200px;" align="right">

Node Disk Manager (NDM) in OpenEBS enables disk management in a Kubernetes way, using Kubernetes constructs. With NDM and OpenEBS, nodes in a Kubernetes cluster can be scaled horizontally without worrying about managing the persistent storage needs of stateful applications. The storage needs of a cluster (capacity planning, performance planning, and volume management) can be automated using the volume and pool policies of OpenEBS, thanks in part to the role NDM plays in identifying and managing underlying storage resources, including local disks and cloud volumes. A sketch of claiming a discovered device follows.
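NDM represents each discovered device as a BlockDevice custom resource, which can then be reserved through a BlockDeviceClaim. A minimal sketch, assuming OpenEBS runs in the `openebs` namespace; the claim and device names are placeholders:

```yaml
# Sketch: reserving a block device discovered by NDM.
# The claim name and blockDeviceName are illustrative placeholders.
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  name: example-claim
  namespace: openebs
spec:
  blockDeviceName: blockdevice-0123456789abcdef
```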

<br>
<br>
<hr>

### High Availability

<img src="/docs/assets/svg/b-ha.svg" alt="High Availability Icon" style="width:200px;" align="right">

Because OpenEBS follows the CAS architecture, upon node failure the OpenEBS controller is rescheduled by Kubernetes, while the underlying data is protected by one or more replicas. More importantly, because each workload can use its own OpenEBS volumes, there is no risk of a system-wide outage due to the loss of storage. For example, volume metadata is not centralized, where it might be subject to a catastrophic generalized outage as in many shared storage systems; instead, the metadata is kept local to the volume. Losing any node results only in the loss of the volume replicas present on that node. Because the volume data is synchronously replicated on at least two other nodes, the data remains available at the same performance levels in the event of a node failure.

<br>
<br>
<hr>

## See Also:

### [OpenEBS Features](/docs/next/features.html)

### [Object Storage with OpenEBS](/docs/next/minio.html)

### [RWM PVs with OpenEBS](/docs/next/rwm.html)

### [Local storage for Prometheus](/docs/next/prometheus.html)

<br>
<hr>
<br>
**docs/commercial.md** (new file, +31 lines)
---
id: commercial
title: OpenEBS Commercial Support
sidebar_label: Commercial Support
---

------

OpenEBS is an independent open source project which does not endorse any company.

This is a list of third-party companies and individuals who provide products or services related to OpenEBS. If you are providing commercial support for OpenEBS, please [edit this page](https://github.com/openebs/openebs-docs/edit/staging/docs/commercial.md) to add yourself or your organization to the list.

The list is provided in alphabetical order.

- [Clouds Sky GmbH](https://cloudssky.com/en/)
- [CodeWave](https://codewave.eu/)
- [Gridworkz Cloud Services](https://www.gridworkz.com/)
- [MayaData](https://mayadata.io/)

<hr>
<br>

## See Also:

### [Community Support](/docs/next/support.html)

### [Troubleshooting](/docs/next/troubleshooting.html)

### [FAQs](/docs/next/faq.html)

### [Latest release notes](/docs/next/releases.html)
**docs/features.md** (modified: 172 → 84 lines)
---
id: features
title: OpenEBS Features
sidebar_label: Features
---

------

<br>

- [Containerized storage for containers](#containerized-storage-for-containers)
- [Synchronous replication](#synchronous-replication)
- [Snapshots and clones](#snapshots-and-clones)
- [Backup and restore](#backup-and-restore)
- [Prometheus metrics and Grafana dashboard](#prometheus-metrics-for-workload-tuning)

<br>

:::tip
For information on how OpenEBS is used in production, visit the [use cases](/docs/next/usecases.html) section, or see examples of users and their stories in the OpenEBS Adopters list [here](https://github.com/openebs/openebs/blob/master/ADOPTERS.md), where you can also share your experience.
:::

<br>

### Containerized Storage for Containers

<img src="/docs/assets/svg/f-cas.svg" alt="Containerized Storage Icon" style="width:200px;" align="right">

OpenEBS is an example of Container Attached Storage (CAS). Volumes provisioned through OpenEBS are always containerized, and each volume has a dedicated storage controller that increases the agility and granularity of persistent storage operations for stateful applications. The benefits of, and more details on, the CAS architecture are covered <a href="/docs/next/cas.html" target="_blank">here</a>.

<br>
<br>
<hr>

### Synchronous Replication

<img src="/docs/assets/svg/f-replication.svg" alt="Synchronous Replication Icon" style="width:200px;" align="right">

Synchronous replication is an optional and popular feature of OpenEBS. When used with the Jiva, cStor or Mayastor storage engine, OpenEBS can synchronously replicate data volumes for high availability. Replication across Kubernetes zones results in high availability for cross-AZ setups. This feature is especially useful for building highly available stateful applications using local disks on cloud provider services such as GKE, EKS and AKS. A sketch of how replication is requested follows.

<br>
<br>
<hr>

### Snapshots and Clones

<img src="/docs/assets/svg/f-snapshots.svg" alt="Snapshots and Clones Icon" style="width:200px;" align="right">

Copy-on-write snapshots are another optional and popular feature of OpenEBS. When using the cStor engine, snapshots are created instantaneously, and there is no limit on the number of snapshots. The incremental snapshot capability enhances data migration and portability across Kubernetes clusters, cloud providers and data centers. Operations on snapshots and clones are performed in a completely Kubernetes-native way, using standard kubectl commands. Common use cases include efficient replication for backups and the use of clones for troubleshooting or for development against a read-only copy of data. A snapshot-and-clone sketch follows.
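For example, with a CSI-based engine both operations are plain Kubernetes objects. A minimal sketch; the snapshot class, PVC names and sizes are illustrative assumptions:

```yaml
# Sketch: a Kubernetes-native snapshot of an OpenEBS-backed PVC,
# followed by a clone restored from it. All names are illustrative assumptions.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-data-snap
spec:
  volumeSnapshotClassName: openebs-csi-snapshotclass
  source:
    persistentVolumeClaimName: mysql-data
---
# Clone: a new PVC hydrated from the snapshot (e.g. for dev or debugging).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-clone
spec:
  storageClassName: openebs-csi-example   # assumed class name
  dataSource:
    name: mysql-data-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi   # must be at least the snapshot's source size
```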

<br>
<br>
<hr>

### Backup and Restore

<img src="/docs/assets/svg/f-backup.svg" alt="Backup and Restore Icon" style="width:200px;" align="right">

Backup and restore of OpenEBS volumes works with Kubernetes backup and restore solutions such as Velero (formerly Heptio Ark) via the open source OpenEBS Velero plugins. Backups to object storage targets such as AWS S3, GCP Object Storage or MinIO are frequently performed using the OpenEBS incremental snapshot capability. Storage-level snapshots and backups save a significant amount of bandwidth and storage space, as only incremental data is transferred. A backup sketch follows.
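As a hedged sketch, a Velero backup that includes volume snapshots can be declared as follows, assuming Velero and the OpenEBS Velero plugin are installed; the backup and namespace names are illustrative:

```yaml
# Sketch: a Velero backup covering an application namespace, with volume
# snapshots taken through the installed snapshot plugin.
# The backup and namespace names are illustrative assumptions.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: prod-db-backup
  namespace: velero
spec:
  includedNamespaces:
    - prod-db
  snapshotVolumes: true
  storageLocation: default
```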

<br>
<br>
<hr>

### Prometheus Metrics for Workload Tuning

<img src="/docs/assets/svg/f-prometheus.svg" alt="Prometheus and Tuning Icon" style="width:200px;" align="right">

OpenEBS volumes are instrumented for granular data metrics such as volume IOPS, throughput, latency and data patterns. Because OpenEBS follows the CAS pattern, stateful applications can be tuned for better performance by observing traffic patterns in Prometheus and modifying the storage policy parameters, without worrying about neighboring workloads that use OpenEBS, thereby minimizing "noisy neighbor" issues. A scrape-config sketch follows.
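As a hedged sketch, the volume metric endpoints can be collected with a standard Prometheus pod-discovery job; the annotation-based opt-in below is a common Kubernetes convention and an assumption here, not an OpenEBS-specific requirement:

```yaml
# Sketch: scraping volume exporters via standard prometheus.io/* pod
# annotations (an assumed convention; exporter ports may vary by engine).
scrape_configs:
  - job_name: openebs-volume-metrics
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opt in to scraping.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        regex: "true"
        action: keep
      # Respect a per-pod metrics port, if annotated.
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        regex: '([^:]+)(?::\d+)?;(\d+)'
        replacement: '$1:$2'
        target_label: __address__
```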

<br>
<br>
<hr>

## See Also:

### [OpenEBS Benefits](/docs/next/benefits.html)

### [Object Storage with OpenEBS](/docs/next/minio.html)

### [Read Write Many (RWX) PVs with OpenEBS](/docs/next/rwm.html)

### [Local storage for Prometheus](/docs/next/prometheus.html)

<br>
<hr>
<br>