From 9ac94ea14d3d5dad341fd29905bf031f21d6811d Mon Sep 17 00:00:00 2001 From: kmova Date: Thu, 22 Jul 2021 18:01:13 +0530 Subject: [PATCH 1/4] fix(mayastor): move mayastor content to gitbook Mayastor documentation is being updated with the latest changes at https://mayastor.gitbook.io/introduction/. This commit removes the duplicated (and outdated) content from the openebs docs. The links to the mayastor user guide are redirected to mayastor gitbook. Signed-off-by: kmova --- docs/architecture.md | 2 +- docs/casengines.md | 4 +- docs/mayastor-concept.md | 171 ----------------- docs/mayastor.md | 396 ++------------------------------------- docs/overview.md | 2 +- docs/quickstart.md | 2 +- docs/releases.md | 6 +- docs/ugmayastor.md | 11 ++ website/sidebars.json | 1 - 9 files changed, 38 insertions(+), 557 deletions(-) delete mode 100644 docs/mayastor-concept.md create mode 100644 docs/ugmayastor.md diff --git a/docs/architecture.md b/docs/architecture.md index cbd664f0e..4e4676b46 100644 --- a/docs/architecture.md +++ b/docs/architecture.md @@ -167,7 +167,7 @@ In addition, OpenEBS also has released as alpha version `kubectl plugin` to help ## See Also: ### [Understanding Data Engines](/docs/next/casengines.html) -### [Understanding Mayastor](/docs/next/mayastor.html) +### [Understanding Mayastor](https://mayastor.gitbook.io/introduction/) ### [Understanding Local PV](/docs/next/localpv.html) ### [Understanding cStor](/docs/next/cstor.html) ### [Understanding Jiva](/docs/next/jiva.html) diff --git a/docs/casengines.md b/docs/casengines.md index 70b2f0994..d9e8b385a 100644 --- a/docs/casengines.md +++ b/docs/casengines.md @@ -132,7 +132,7 @@ Replicated Volumes as the name suggests, are those that can synchronously replic Depending on the type of storage attached to your Kubernetes worker nodes and application performance requirements, you can select from [Jiva](/docs/next/jiva.html), [cStor](/docs/next/cstor.html) or [Mayastor](/docs/next/mayastor.html). -- [Mayastor](/docs/next/ugmayastor.html) +- [Mayastor](https://mayastor.gitbook.io/introduction/) - [cStor](https://github.com/openebs/cstor-operators/blob/master/docs/quick.md) - [Jiva](https://github.com/openebs/jiva-operator) @@ -230,7 +230,7 @@ A short summary is provided below. ## See Also: -### [Mayastor User Guide](/docs/next/ugmayastor.html) +### [Mayastor User Guide](https://mayastor.gitbook.io/introduction/) ### [cStor User Guide](/docs/next/ugcstor-csi.html) diff --git a/docs/mayastor-concept.md b/docs/mayastor-concept.md deleted file mode 100644 index e90142f8e..000000000 --- a/docs/mayastor-concept.md +++ /dev/null @@ -1,171 +0,0 @@ ---- -id: mayastor -title: Mayastor -sidebar_label: Mayastor ---- ----- - -## What is Mayastor? - -**Mayastor** is currently under development as a sub-project of the Open Source CNCF project [**OpenEBS**](https://openebs.io/). OpenEBS is a "Container Attached Storage" or CAS solution which extends Kubernetes with a declarative data plane, providing flexible, persistent storage for stateful applications. - -Design goals for Mayastor include: - -* Highly available, durable persistence of data. -* To be readily deployable and easily managed by autonomous SRE or development teams. -* To be a low-overhead abstraction. - -OpenEBS Mayastor incorporates Intel's [Storage Performance Development Kit](https://spdk.io/). 
It has been designed from the ground up to leverage the protocol and compute efficiency of NVMe-oF semantics, and the performance capabilities of the latest generation of solid-state storage devices, in order to deliver a storage abstraction with performance overhead measured to be within the range of single-digit percentages.
-
-By comparison, most pre-CAS shared everything storage systems are widely thought to impart an overhead of at least 40% and sometimes as much as 80% or more than the capabilities of the underlying devices or cloud volumes. Additionally, pre-CAS shared storage scales in an unpredictable manner as I/O from many workloads interact and compete for the capabilities of the shared storage system.
-
-While Mayastor utilizes NVMe-oF, it does not require NVMe devices or cloud volumes to operate, as is explained below.
-
->**Mayastor is beta software**. It is considered largely, if not entirely, feature complete and substantially without major known defects. Minor and unknown defects can be expected; **please deploy accordingly**.
-
-### Basic Architecture
-
-The objective of this section is to provide the user and evaluator of Mayastor with a topological view of the gross anatomy of a Mayastor deployment. A description will be made of the expected pod inventory of a correctly deployed cluster, the roles and functions of the constituent pods and related Kubernetes resource types, and the high-level interactions between them and the orchestration thereof.
-
-More detailed guides to Mayastor's components, their design and internal structure, and instructions for building Mayastor from source, are maintained within the [project's GitHub repository](https://github.com/openebs/Mayastor).
-
-### Topology
-
-### Dramatis Personae
-
-| Name | Resource Type | Function | Occurrence in Cluster |
-| :--- | :--- | :--- | :--- |
-| **moac** | Pod | Hosts control plane containers | Single |
-| moac | Service | Exposes MOAC REST service end point | Single |
-| moac | Deployment | Declares desired state for the MOAC pod | Single |
-| | | | |
-| **mayastor-csi** | Pod | Hosts CSI Driver node plugin containers | All worker nodes |
-| mayastor-csi | DaemonSet | Declares desired state for mayastor-csi pods | Single |
-| | | | |
-| **mayastor** | Pod | Hosts Mayastor I/O engine container | User-selected nodes |
-| mayastor | DaemonSet | Declares desired state for Mayastor pods | Single |
-| | | | |
-| **nats** | Pod | Hosts NATS Server container | Single |
-| nats | Deployment | Declares desired state for NATS pod | Single |
-| nats | Service | Exposes NATS message bus end point | Single |
-| | | | |
-| **mayastornodes** | CRD | Inventories and reflects the state of Mayastor pods | One per Mayastor Pod |
-| **mayastorpools** | CRD | Declares a Mayastor pool's desired state and reflects its current state | User-defined, zero to many |
-| **mayastorvolumes** | CRD | Inventories and reflects the state of Mayastor-provisioned Persistent Volumes | User-defined, zero to many |
-
-### Component Roles
-
-#### MOAC
-
-A Mayastor deployment features a single MOAC pod, declared via a Deployment resource of the same name, with its API's gRPC endpoint exposed via a cluster Service, also of the same name. The MOAC pod is the principal control plane actor and encapsulates containers for both the Mayastor CSI Driver's controller implementation \(and its external-attacher sidecar\) and the MOAC component itself.
-
-The MOAC component implements the bulk of the Mayastor-specific control plane.
It is called by the CSI Driver controller in response to dynamic volume provisioning events, to orchestrate the creation of a nexus at the Mayastor pod of an appropriate node, and also the creation of any additional data replicas on other nodes as may be required to satisfy the desired configuration state of the PVC \(i.e. replication factor > 1\). MOAC is also responsible for creating and reporting Storage Pools, for which purpose it implements a watch on the Kubernetes API server for relevant custom resource objects \(mayastorpools.openebs.io\).
-
-MOAC exposes a REST API endpoint on the cluster using a Kubernetes Service of the same name. This is currently used to support the export of volume metrics to Prometheus/Grafana, although this mechanism will change in later releases.
-
-#### Mayastor
-
-The Mayastor pods of a deployment are its principal data plane actors, encapsulating the Mayastor containers which implement the I/O path from the block devices at the persistence layer, up to the relevant initiators on the worker nodes mounting volume claims.
-
-The instance of the `mayastor` binary running inside the container performs four major classes of functions:
-
-* Present a gRPC interface to the MOAC control plane component, to allow the latter to orchestrate creation, configuration and deletion of Mayastor managed objects hosted by that instance
-* Create and manage storage pools hosted on that node
-* Create, export and manage nexus objects \(and by extension, volumes\) hosted on that node
-* Create and share "replicas" from storage pools hosted on that node
-  * Local replica -> loopback -> Local Nexus
-  * Local replica -> NVMe-oF TCP -> Remote Nexus \(hosted by a Mayastor container on another node\)
-  * Remote replicas are employed by a Nexus as synchronous data copies, where replication is in use
-
-When a Mayastor pod starts running, an init container attempts to verify connectivity to the NATS message bus in the Mayastor namespace. If a connection can be established, the Mayastor container is started, and the Mayastor instance performs registration with MOAC over the message bus. In this way, MOAC maintains a registry of nodes \(specifically, running Mayastor instances\) and their current state. For each registered Mayastor container/instance, MOAC creates a MayastorNode custom resource within the Kubernetes API of the cluster.
-
-The scheduling of Mayastor pods is determined declaratively by using a DaemonSet specification. By default, a `nodeSelector` field is used within the pod spec to select all worker nodes to which the user has attached the label `openebs.io/engine=mayastor` as recipients of a Mayastor pod. It is in this way that the MayastorNode count and location are set appropriately for the hardware configuration of the worker nodes \(i.e. which nodes host the block storage devices to be used\), and the capacity and performance demands of the cluster.
-
-#### Mayastor-CSI
-
-The mayastor-csi pods within a cluster implement the node plugin component of Mayastor's CSI driver. As such, their function is to orchestrate the mounting of Mayastor-provisioned volumes on worker nodes on which application pods consuming those volumes are scheduled. By default, a mayastor-csi pod is scheduled on every node in the target cluster, as determined by a DaemonSet resource of the same name.
These pods each encapsulate two containers, `mayastor-csi` and `csi-driver-registrar`.
-
-It is not necessary for the node plugin to run on every worker node within a cluster, and this behaviour can be modified, if so desired, through the application of appropriate node labelling and the addition of a corresponding `nodeSelector` entry within the pod spec of the mayastor-csi DaemonSet. It should be noted that if a node does not host a plugin pod, then it will not be possible to schedule on it any pod which is configured to mount Mayastor volumes.
-
-Further detail regarding the implementation of CSI driver components and their function can be found within the Kubernetes CSI Developer Documentation.
-
-#### NATS
-
-NATS is a high-performance open source messaging system. It is used within Mayastor as the transport mechanism for registration messages passed between the Mayastor I/O engine pods running in the cluster and the MOAC component, which maintains an inventory of active Mayastor nodes and reflects this via CRUD actions on MayastorNode custom resources.
-
-In future releases of Mayastor, the control plane will transition towards a more microservice-like architecture following the saga pattern, whereupon a highly available NATS deployment within the Mayastor namespace will be employed as an event bus.
-
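To see the registration flow described above on a live cluster, the MayastorNode custom resources that MOAC maintains can be listed directly; a minimal check, assuming Mayastor is installed in the `mayastor` namespace as in the deployment steps later in this guide:

```
# MayastorNode custom resources, one per registered Mayastor instance
kubectl -n mayastor get msn

# The NATS pod carrying the registration messages
kubectl -n mayastor get pods --selector=app=nats
```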
----
-## Scope
-
-This quickstart guide describes the actions necessary to perform a basic installation of Mayastor on an existing Kubernetes cluster, sufficient for evaluation purposes. It assumes that the target cluster will pull the Mayastor container images directly from MayaData public container repositories. Where preferred, it is also possible to [build Mayastor locally from source](https://github.com/openebs/Mayastor/blob/develop/doc/build.md) and deploy the resultant images, but this is outside of the scope of this quickstart document.
-
-Deploying and operating Mayastor within production contexts requires a foundational knowledge of Mayastor internals and best practices, found elsewhere within this documentation resource. Some application use cases may require specific configuration and, where this is so, it is called out in the Use Cases section.
-
->**Mayastor is beta software**. It is considered largely, if not entirely, feature complete. Beta software "will generally have many more bugs in it than completed software and speed or performance issues, and may still cause crashes or data loss".
-----
-
-## Known Issues
-
-### Live Issue Tracker
-
-Mayastor is currently considered to be beta software.
-
-> "(it) will generally have many more bugs in it than completed software and speed or performance issues, and may still cause crashes or data loss."
-
-The project's maintainers operate a live issue tracking dashboard for defects which they have under active triage and investigation. It can be accessed [here](https://mayadata.atlassian.net/secure/Dashboard.jspa?selectPageId=10015). You are strongly encouraged to familiarize yourself with the issues identified there before using Mayastor, and to check them when raising issue reports, so as to avoid redundant reporting as far as possible.
-
-### How is Mayastor Tested?
-
-Mayastor's maintainers perform integration and end-to-end testing on nightly builds and named releases. Clusters used to perform this testing are composed of worker nodes running Ubuntu 20.04.2 LTS, using the docker runtime 20.10.5 under Kubernetes version 1.19.8. Other testing efforts are underway, including soak testing and failure injection testing.
-
-We periodically access the labs of partners and community members for scale and performance testing and would welcome offers of any similar or other testing assistance.
-
-### Common Installation Issues
-
-#### A Mayastor pod restarts unexpectedly with exit code 132 whilst mounting a PVC
-
-The Mayastor process has been sent the SIGILL signal as the result of attempting to execute an illegal instruction. This indicates that the host node's CPU does not satisfy the prerequisite instruction set level for Mayastor \(SSE4.2 on x86-64\).
-
-#### Deploying Mayastor on RKE & Fedora CoreOS
-
-In addition to ensuring that the general prerequisites for installation are met, it is necessary to add the following directory mapping to the `services_kubelet->extra_binds` section of the cluster's `cluster.yml` file:
-
-```text
-/opt/rke/var/lib/kubelet/plugins:/var/lib/kubelet/plugins
-```
-
-If this is not done, CSI socket paths won't match expected values and the Mayastor CSI driver registration process will fail, resulting in the inability to provision Mayastor volumes on the cluster.
-
-### Other Issues
-
-#### Lengthy worker node reboot times
-
-Rebooting a node that runs applications mounting Mayastor volumes can take tens of minutes. The reason is the long default NVMe controller timeout \(`ctrl_loss_tmo`\).
The solution is to follow Kubernetes best practice and cordon and drain the node, ensuring that no application pods are running on it, before the reboot.
-
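As a sketch of that practice, the sequence below cordons and drains a node before the reboot; `<node-name>` is a placeholder, and the exact drain flags (for example `--ignore-daemonsets`, which is required on nodes running the Mayastor DaemonSets) depend on the workloads present:

```
kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets
# reboot the node, then make it schedulable again
kubectl uncordon <node-name>
```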
----
-
-## Known Limitations
-
-### Volume and Pool Capacity Expansion
-
-Once provisioned, neither Mayastor Disk Pools nor Mayastor Volumes can be re-sized. A Mayastor Pool can have only a single block device as a member. Mayastor Volumes are exclusively thick-provisioned.
-
-### Snapshots and Clones
-
-Mayastor has no snapshot or cloning capabilities.
-
-### Volumes are "Highly Durable" but without multipathing are not "Highly Available"
-
-Mayastor Volumes can be configured \(or subsequently re-configured\) to be composed of 2 or more "children" or "replicas"; causing synchronously mirrored copies of the volume's data to be maintained on more than one worker node and Disk Pool. This contributes additional "durability" at the persistence layer, ensuring that viable copies of a volume's data remain even if a Disk Pool device is lost.
-
-However, a Mayastor volume is currently accessible to an application only via a single target instance \(NVMe-oF, or iSCSI\) of a single Mayastor pod. If that pod terminates \(through the loss of the worker node on which it's scheduled, execution failure, pod eviction etc.\) then there will be no viable I/O path to any remaining healthy replicas and access to data on the volume cannot be maintained.
-
-Initial discovery work on supporting and testing multipath connectivity to Mayastor pods has been completed. The work of developing and supporting multipath connectivity for production usage is currently scheduled to complete after general availability.
diff --git a/docs/mayastor.md b/docs/mayastor.md
index d71922eca..d6badccf7 100644
--- a/docs/mayastor.md
+++ b/docs/mayastor.md
@@ -1,395 +1,37 @@
 ---
-id: ugmayastor
-title: Mayastor 
+id: mayastor
+title: Mayastor 
 sidebar_label: Mayastor
 ---
-------
-## Prerequisites
-
-### General
-
-Each Mayastor Node \(MSN\), that is each cluster worker node which will host an instance of a Mayastor pod, must have these resources _free and available_ for use by Mayastor:
-
-* **Two x86-64 CPU cores with SSE4.2 instruction support**:
-  * Intel Nehalem processor \(march=nehalem\) and newer
-  * AMD Bulldozer processor and newer
-* **4GiB of RAM**
-* HugePage support
-  * A minimum of **1GiB of** **2MiB-sized** **huge pages**
-
-> Instances of the Mayastor pod must be run in privileged mode
-
-### Node Count
-
-As long as the resource prerequisites are met, Mayastor can be deployed to a cluster with just a single worker node. However, note that in order to evaluate the synchronous replication feature \(N+1 mirroring\), the number of worker nodes to which Mayastor is deployed should be no less than the desired replication factor. E.g. three-way mirroring of Persistent Volumes \(PV\) would require Mayastor to be deployed to a minimum of three worker nodes.
-
-### Transport Protocols
-
-Mayastor supports the export and mounting of a Persistent Volume over either NVMe-oF TCP or iSCSI \(configured as a parameter of the PV's underlying StorageClass\). Worker node\(s\) on which a PV is to be mounted must have the requisite initiator support installed and configured for the protocol in use.
-
-#### iSCSI
-
-The iSCSI client should be installed and correctly configured as per [this guide](https://docs.openebs.io/docs/next/prerequisites.html).
-
-#### NVMe-oF
-
-In order to reliably mount application PVs over NVMe-oF TCP, a worker node's kernel version must be 5.3 or later. Verify that the `nvme-tcp` module is loaded and, if necessary, load it.
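A minimal sketch of that check, assuming a typical systemd-based distribution (the persistence step may differ elsewhere):

```
# Check whether the initiator module is loaded (it appears as nvme_tcp)
lsmod | grep nvme_tcp

# Load it for the current boot
sudo modprobe nvme-tcp

# Load it automatically on subsequent boots
echo nvme-tcp | sudo tee /etc/modules-load.d/nvme-tcp.conf
```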
----
-## Preparing the Cluster
-
-### Configure Mayastor Nodes \(MSNs\)
-
-Within the context of the Mayastor application, a "Mayastor Node" is a Kubernetes worker node on which an instance of a Mayastor data plane pod is scheduled, and which is thus capable of hosting a Storage Pool and exporting Persistent Volumes \(PV\). An MSN makes use of block storage device\(s\) attached to it to contribute storage capacity to its pool\(s\), which supply backing storage for the Persistent Volumes provisioned on the parent cluster by Mayastor.
-
-Kubernetes worker nodes are not required to be MSNs in order to be able to mount Mayastor-provisioned Persistent Volumes for the application pods scheduled on them. New MSN nodes can be provisioned within the cluster at any time after the initial deployment, as aggregate demands for capacity, performance and availability levels increase.
-
-#### Verify / Enable Huge Page Support
-
-_2MiB-sized_ Huge Pages must be supported and enabled on an MSN. A minimum number of 512 such pages \(i.e. 1GiB total\) must be available on each node, which should be verified thus:
-
-```
-grep HugePages /proc/meminfo
-```
-Sample output:
-
-```
-AnonHugePages:         0 kB
-ShmemHugePages:        0 kB
-HugePages_Total:    1024
-HugePages_Free:      671
-HugePages_Rsvd:        0
-HugePages_Surp:        0
-```
-
-If fewer than 512 pages are available, then the page count should be reconfigured as required, accounting for any other workloads which may be co-resident on the worker node and which also require them. For example:
-
-```text
-echo 512 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-```
-
-This change should also be made persistent across reboots by adding the required value to the file `/etc/sysctl.conf`, like so:
-
-```text
-echo vm.nr_hugepages = 512 | sudo tee -a /etc/sysctl.conf
-```
-
->If you modify the Huge Page configuration of a node, you _must_ either restart kubelet or reboot the node. Mayastor will not deploy correctly if the available Huge Page count, as reported by the node's kubelet instance, does not satisfy the minimum requirements.
-
-#### Label Mayastor Node Candidates
-
-All worker nodes in the cluster which are intended to operate as MSNs should be labelled with the OpenEBS engine type "mayastor". This label is used as a selector by the Mayastor DaemonSet, which will be deployed during the next stage of the installation. Here we demonstrate the correct labelling of a worker node named "node1":
-
-```text
-kubectl label node node1 openebs.io/engine=mayastor
-```
-
-----
-
-## Deploy Mayastor
-
-### Overview
-
-In this quickstart guide we demonstrate deploying Mayastor by using the Kubernetes manifest files provided within the `deploy` folder of the [Mayastor project's GitHub repository](https://github.com/openebs/Mayastor). The repository is configured for the GitFlow release pattern, wherein the master branch contains official releases. By extension, the head of the master branch represents the latest official release.
-
-The steps and commands which follow are intended only for use with, and tested against, the latest release. Earlier releases or development versions may require a modified or different installation process.
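Before creating the application resources below, it can be worth confirming that the labels applied during the "Preparing the Cluster" stage select the worker nodes you expect; a quick check using the label shown earlier:

```
kubectl get nodes --selector=openebs.io/engine=mayastor
```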
-
-### Create Mayastor Application Resources
-
-#### Namespace
-
-To create a new namespace, execute:
-```
-kubectl create namespace mayastor
-```
-
-#### RBAC Resources
-
-To create the RBAC resources, execute:
-```
-kubectl create -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/moac-rbac.yaml
-```
-
-Sample output:
-```
-serviceaccount/moac created
-clusterrole.rbac.authorization.k8s.io/moac created
-clusterrolebinding.rbac.authorization.k8s.io/moac created
-```
-
-#### Custom Resource Definitions
-
-To create the MayastorPool custom resource definition, execute:
-```
-kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/csi/moac/crds/mayastorpool.yaml
-```
-
-### Deploy Mayastor Dependencies
-
-#### NATS
-
-Mayastor uses [NATS](https://nats.io/), an Open Source messaging system, as an event bus for some aspects of control plane operations, such as registering Mayastor nodes with MOAC \(Mayastor's primary control plane component\).
-
-To deploy NATS, execute:
-```
-kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/nats-deployment.yaml
-```
-
-Verify that the deployment of the NATS application to the cluster was successful. Within the mayastor namespace there should be a single pod having a name starting with "nats-", and with a reported status of Running.
-
-```text
-kubectl -n mayastor get pods --selector=app=nats
-```
-
-Sample output:
-```
-NAME                   READY   STATUS    RESTARTS   AGE
-nats-b4cbb6c96-nbp75   1/1     Running   0          28s
-```
-
-### Deploy Mayastor Components
-
-#### CSI Node Plugin
-
-To deploy the CSI node plugin DaemonSet, execute:
-```
-kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/csi-daemonset.yaml
-```
-
-Verify that the CSI Node Plugin DaemonSet has been correctly deployed to all worker nodes in the cluster.
-
-```
-kubectl -n mayastor get daemonset mayastor-csi
-```
-
-Sample output:
-```
-NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
-mayastor-csi   3         3         3       3            3           kubernetes.io/arch=amd64   26m
-```
-
-#### Control Plane
-
-To deploy the control plane components, execute:
-```
-kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/moac-deployment.yaml
-```
-
-To verify that the MOAC control plane pod is running, execute:
-```text
-kubectl get pods -n mayastor --selector=app=moac
-```
-
-Sample output:
-```
-NAME                    READY   STATUS    RESTARTS   AGE
-moac-7d487fd5b5-9hj62   3/3     Running   0          8m4s
-```
-
-#### Data Plane
-
-To deploy the data plane components, execute:
-```
-kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/mayastor-daemonset.yaml
-```
-
-Verify that the Mayastor DaemonSet has been correctly deployed. The reported Desired, Ready and Available instance counts should be equal, and should match the count of worker nodes which carry the label `openebs.io/engine=mayastor` \(as applied earlier in the "[Preparing the Cluster](preparing-the-cluster.md#label-the-storage-nodes)" stage\).
-
-To list the deployed DaemonSet, execute:
-```
-kubectl -n mayastor get daemonset mayastor
-```
-
-Sample output:
-```
-NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                          AGE
-mayastor   3         3         3       3            3           kubernetes.io/arch=amd64,openebs.io/engine=mayastor   108s
-```
-
-For each resulting Mayastor pod instance, a Mayastor Node \(MSN\) custom resource should be created.
List these resources and verify that the count meets the expected number and that all nodes are reporting their State as `online`.
-
-To obtain the list of MSNs, execute:
-```
-kubectl -n mayastor get msn
-```
-
-Sample output:
-```
-NAME                       STATE    AGE
-aks-agentpool-12194210-0   online   8m18s
-aks-agentpool-12194210-1   online   8m19s
-aks-agentpool-12194210-2   online   8m15s
-```
-----
-
-## Configure Mayastor
-
-### Create Mayastor Pool\(s\)
-
-#### What is a Mayastor Pool \(MSP\)?
-
-When a Mayastor Node \(MSN\) allocates storage capacity for a Persistent Volume \(PV\), it does so from a construct named a Mayastor Pool \(MSP\). Each MSN may create and manage zero, one, or more such pools. The ownership of a pool by an MSN is exclusive. In the current version of Mayastor, a pool may have only a single block device as a member, which constitutes the entire data persistence layer for that pool and thus determines its maximum capacity.
-
-A pool is defined declaratively, through the creation of a corresponding `MayastorPool` custom resource \(CR\) on the cluster. User-configurable parameters of this CR type include a unique name for the pool, the host name of the MSN on which it is hosted, and a reference to a disk device which is accessible from that node \(for inclusion within the pool\). The pool definition allows the reference to its member block device to adhere to one of a number of possible schemes, each associated with a specific access mechanism/transport/device type and differentiated by corresponding performance and/or attachment locality.
-
-##### Permissible Schemes for the MSP CRD field `disks`
-
-| Type | Format | Example |
-| :--- | :--- | :--- |
-| Attached Disk Device | Device File | /dev/sdx |
-| NVMe | NQN | nvme://nqn.2014-08.com.vendor:nvme:nvm-subsystem-sn-d78432 |
-| iSCSI | IQN | iscsi://iqn.2000-08.com.datacore.com:cloudvm41-2 |
-| Async. Disk I/O \(AIO\) | Device File | aio:///dev/sdx |
-| io\_uring | Device File | io\_uring:///dev/sdx |
-| RAM drive | Custom | malloc:///malloc0?size\_mb=1024 |
-
-Once a Mayastor node has created a pool, it is assumed that it henceforth has exclusive use of the associated block device; it should not be partitioned, formatted, or shared with another application or process. Any existing data on the device will be destroyed.
-
-#### Configure Pool\(s\) for Use with this Quickstart
-
-To continue with this quickstart exercise, a minimum of one pool is necessary, created and hosted by one of the MSNs in the cluster. However, the number of pools available limits the extent to which the synchronous n-way mirroring feature \("replication"\) of Persistent Volumes can be configured for testing and evaluation; the number of pools configured should be no lower than the desired maximum replication factor of the PVs to be created. Note also that when placing data replicas, to provide appropriate redundancy, Mayastor's control plane will avoid locating more than one replica of a PV on the same MSN. Therefore, for example, the minimum viable configuration for a Mayastor deployment which is intended to test 3-way mirrored PVs must have three Mayastor Nodes, each having one Mayastor Pool, with each of those pools having one unique block device allocated to it.
-
-Using one or more of the following examples as templates, create the required type and number of pools.
-
-
-Example MSP custom resource \(device file\):
-```text
-cat <<EOF | kubectl create -f -
-apiVersion: "openebs.io/v1alpha1"
-kind: MayastorPool
-metadata:
-  name: pool-on-node-1
-  namespace: mayastor
-spec:
-  node: node1
-  disks: ["/dev/sdx"]
-EOF
-```
-
->When following the examples in order to create your own Mayastor Pool\(s\), remember to replace the values for the fields "name", "node" and "disks" as appropriate to your cluster's intended configuration. Note that whilst the "disks" parameter accepts an array of scheme values, the current version of Mayastor supports only one disk device per pool.
-
-#### Verify Pool Creation and Status
-
-The status of Mayastor Pools may be determined by reference to their cluster CRs. Available, healthy pools should report their State as `online`. Verify that the expected number of pools have been created and that they are online.
-
-To verify, execute:
-```text
-kubectl -n mayastor get msp
-```
+## What is Mayastor?
+**Mayastor** is currently under development as a sub-project of the Open Source CNCF project [**OpenEBS**](https://openebs.io/). OpenEBS is a "Container Attached Storage" or CAS solution which extends Kubernetes with a declarative data plane, providing flexible, persistent storage for stateful applications.
-Sample output:
-```
-NAME             NODE                       STATE    AGE
-pool-on-node-0   aks-agentpool-12194210-0   online   127m
-pool-on-node-1   aks-agentpool-12194210-1   online   27s
-pool-on-node-2   aks-agentpool-12194210-2   online   4s
-```
+Design goals for Mayastor include:
+* Highly available, durable persistence of data.
+* To be readily deployable and easily managed by autonomous SRE or development teams.
+* To be a low-overhead abstraction.
-### Create Mayastor StorageClass\(es\)
+OpenEBS Mayastor incorporates Intel's [Storage Performance Development Kit](https://spdk.io/). It has been designed from the ground up to leverage the protocol and compute efficiency of NVMe-oF semantics, and the performance capabilities of the latest generation of solid-state storage devices, in order to deliver a storage abstraction with performance overhead measured to be within the range of single-digit percentages.
-Mayastor dynamically provisions Persistent Volumes \(PV\) based on custom StorageClass definitions declared by the user. Parameters of the StorageClass resource definition are used to set the characteristics and behavior of its associated PVs. In the current version of Mayastor, StorageClass definitions are used to control both which transport protocol is used to mount the PV to the worker node hosting the consuming application pod \(iSCSI, or NVMe-oF TCP\) and the level of data protection afforded to it \(that is, the number of synchronous data replicas which are maintained, for purposes of redundancy\). It is possible to create any number of custom StorageClass definitions to span this range of permutations.
+By comparison, most pre-CAS shared everything storage systems are widely thought to impart an overhead of at least 40% and sometimes as much as 80% or more than the capabilities of the underlying devices or cloud volumes. Additionally, pre-CAS shared storage scales in an unpredictable manner as I/O from many workloads interact and compete for the capabilities of the shared storage system.
-We illustrate this quickstart guide with two examples of possible use cases; one which uses iSCSI and offers no data protection \(i.e. a single data replica\), and another using NVMe-oF TCP transport and having three data replicas. You may modify these as required to match your own desired test cases, within the limitations of the cluster under test.
+While Mayastor utilizes NVMe-oF, it does not require NVMe devices or cloud volumes to operate, as is explained in the [Mayastor documentation](https://mayastor.gitbook.io/introduction/).
+
+>**Mayastor is beta software** and is under active development.
-
-iSCSI Example:
-```text
-cat <<EOF | kubectl create -f -
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
-  name: mayastor-iscsi
-parameters:
-  repl: '1'
-  protocol: 'iscsi'
-provisioner: io.openebs.csi-mayastor
-EOF
-```
-
-NVMe-oF Example:
-```text
-cat <<EOF | kubectl create -f -
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
-  name: mayastor-nvmf-3
-parameters:
-  repl: '3'
-  protocol: 'nvmf'
-provisioner: io.openebs.csi-mayastor
-EOF
-```
-
->Note: Permissible values for the field "protocol" are either "iscsi", or "nvmf".
+
+## Community Support via Slack
+OpenEBS has a vibrant community that can help you get started. If you have further questions and want to learn more about OpenEBS and/or Mayastor, please join the [OpenEBS community on Kubernetes Slack](https://kubernetes.slack.com). If you are already signed up, head to our discussions in the [#openebs](https://kubernetes.slack.com/messages/openebs/) channel.
-
-**Action: Create the StorageClass\(es\) appropriate to your intended testing scenario\(s\).**
+
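Once created, the classes can be confirmed with a quick check; the class names shown are those of the examples above and will differ if you changed them:

```
kubectl get sc

# Inspect a specific class, e.g. the iSCSI example
kubectl describe sc mayastor-iscsi
```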
+
diff --git a/docs/overview.md b/docs/overview.md index 37915ee1f..8aa6725dd 100644 --- a/docs/overview.md +++ b/docs/overview.md @@ -84,7 +84,7 @@ Depending on the type of storage attached to your Kubernetes worker nodes and ap Installing OpenEBS in your cluster is as simple as running a few `kubectl` or `helm` commands. Here are the list of our Quickstart guides with detailed instructions for each storage engine. -- [Mayastor](/docs/next/ugmayastor.html) +- [Mayastor](https://mayastor.gitbook.io/introduction/) - [cStor](https://github.com/openebs/cstor-operators/blob/master/docs/quick.md) - [Jiva](https://github.com/openebs/jiva-operator) diff --git a/docs/quickstart.md b/docs/quickstart.md index 4ca34d6b9..67a04ff83 100644 --- a/docs/quickstart.md +++ b/docs/quickstart.md @@ -85,7 +85,7 @@ As a Platform SRE / Cluster Administrator, you can customize several things abou - [Local PV Rawfile](https://github.com/openebs/rawfile-localpv) - [Replicated PV Jiva](https://github.com/openebs/jiva-operator) - [Replicated PV cStor](https://github.com/openebs/cstor-operators/blob/master/docs/quick.md) -- [Replicated PV Mayastor](/docs/next/ugmayastor.html) +- [Replicated PV Mayastor](https://mayastor.gitbook.io/introduction/) ### 3. Deploy Stateful Workloads diff --git a/docs/releases.md b/docs/releases.md index b753b1c71..f65e7af3d 100644 --- a/docs/releases.md +++ b/docs/releases.md @@ -56,7 +56,7 @@ If you have any questions or need help with the migration please reach out to us OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. The status of the various components as of v2.11.0 are as follows: - CSI Drivers - - [Mayastor](https://docs.openebs.io/docs/next/ugmayastor.html) 0.8.1 (beta) + - [Mayastor](https://docs.openebs.io/docs/next/mayastor.html) 0.8.1 (beta) - [cStor](https://github.com/openebs/cstor-operators) 2.11.0 (beta) - [Jiva](https://github.com/openebs/jiva-operator) 2.11.0 (beta) - [Local PV ZFS](https://github.com/openebs/zfs-localpv) 1.9.0 (stable) @@ -125,7 +125,7 @@ A very special thanks to @cncf and 2021 LFX Mentees @ParthS007, @rahul799 for co OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. The status of the various components as of v2.10.0 are as follows: - CSI Drivers - - [Mayastor](https://docs.openebs.io/docs/next/ugmayastor.html) 0.8.0 (beta) + - [Mayastor](https://docs.openebs.io/docs/next/mayastor.html) 0.8.0 (beta) - [cStor](https://github.com/openebs/cstor-operators) 2.10.0 (beta) - [Jiva](https://github.com/openebs/jiva-operator) 2.10.0 (beta) - [Local PV ZFS](https://github.com/openebs/zfs-localpv) 1.8.0 (stable) @@ -174,7 +174,7 @@ OpenEBS v2.9 is another maintenance release before moving towards 3.0 primarily OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners. 
The status of the various components as of v2.9.0 are as follows: - CSI Drivers - - [Mayastor](https://docs.openebs.io/docs/next/ugmayastor.html) 0.8.0 (beta) + - [Mayastor](https://docs.openebs.io/docs/next/mayastor.html) 0.8.0 (beta) - [cStor](https://github.com/openebs/cstor-operators) 2.9.0 (beta) - [Jiva](https://github.com/openebs/jiva-operator) 2.9.0 (beta) - [Local PV ZFS](https://github.com/openebs/zfs-localpv) 1.7.0 (stable) diff --git a/docs/ugmayastor.md b/docs/ugmayastor.md new file mode 100644 index 000000000..b5f776404 --- /dev/null +++ b/docs/ugmayastor.md @@ -0,0 +1,11 @@ +--- +id: ugmayastor +title: Mayastor User Guide +sidebar_label: Mayastor +--- +------ + + :::note + This page has moved. Mayastor documentation is hosted and actively maintained at https://mayastor.gitbook.io/introduction/ + ::: + diff --git a/website/sidebars.json b/website/sidebars.json index 45a74b373..3c5f7ee82 100644 --- a/website/sidebars.json +++ b/website/sidebars.json @@ -28,7 +28,6 @@ "ugndm", "ugcstor-csi", "jivaguide", - "ugmayastor", "uglocalpv-hostpath", "uglocalpv-device", "mayactl", From 30becd02fcc7c8e6594158e75b090dbb9b181c07 Mon Sep 17 00:00:00 2001 From: kmova Date: Thu, 22 Jul 2021 18:08:55 +0530 Subject: [PATCH 2/4] setup gitbook as valid word Signed-off-by: kmova --- hack/cspell-words.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/hack/cspell-words.txt b/hack/cspell-words.txt index 23d664880..015d948d5 100644 --- a/hack/cspell-words.txt +++ b/hack/cspell-words.txt @@ -217,6 +217,7 @@ gcpbucket gettime gemfiles gimsuy +gitbook githubusercontent gitlab gotgt From 5cbbc25882424f701521402f7add03146d0626af Mon Sep 17 00:00:00 2001 From: kmova Date: Thu, 22 Jul 2021 18:10:45 +0530 Subject: [PATCH 3/4] bring back the user guide with note Signed-off-by: kmova --- website/sidebars.json | 1 + 1 file changed, 1 insertion(+) diff --git a/website/sidebars.json b/website/sidebars.json index 3c5f7ee82..adb182bb1 100644 --- a/website/sidebars.json +++ b/website/sidebars.json @@ -28,6 +28,7 @@ "ugndm", "ugcstor-csi", "jivaguide", + "ugmayastor", "uglocalpv-hostpath", "uglocalpv-device", "mayactl", From 00a499f7040271c57fb659885ac8d1db53c0a86c Mon Sep 17 00:00:00 2001 From: kmova Date: Thu, 22 Jul 2021 18:39:25 +0530 Subject: [PATCH 4/4] redirect mayastor troubleshooting doc Signed-off-by: kmova --- docs/t-mayastor.md | 315 +-------------------------------------------- 1 file changed, 3 insertions(+), 312 deletions(-) diff --git a/docs/t-mayastor.md b/docs/t-mayastor.md index 2c7545392..1a69431ad 100644 --- a/docs/t-mayastor.md +++ b/docs/t-mayastor.md @@ -6,316 +6,7 @@ sidebar_label: Mayastor ------ # Troubleshooting -## Logs + :::note + This page has moved to https://mayastor.gitbook.io/introduction/quickstart/troubleshooting. + ::: -The right log file to collect depends on the nature of the problem. If unsure, -then the best thing is to collect log files for all Mayastor containers. 
-
-To list all Mayastor pods, execute:
-```
-kubectl -n mayastor get pods -o wide
-```
-
-Sample output:
-```
-NAME                    READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
-mayastor-csi-7pg82      2/2     Running   0          15m   10.0.84.131    worker-2   <none>           <none>
-mayastor-csi-node5      2/2     Running   0          15m   10.0.239.174   worker-1   <none>           <none>
-mayastor-csi-xrmxx      2/2     Running   0          15m   10.0.85.71     worker-0   <none>           <none>
-mayastor-node6          1/1     Running   0          14m   10.0.85.71     worker-0   <none>           <none>
-mayastor-qr84q          1/1     Running   0          14m   10.0.239.174   worker-1   <none>           <none>
-mayastor-node1          1/1     Running   0          14m   10.0.84.131    worker-2   <none>           <none>
-moac-b8f4446b5-r5gwk    3/3     Running   0          15m   10.244.2.2     worker-2   <none>           <none>
-nats-6fdd6dfb4f-node2   1/1     Running   0          16m   10.244.3.2     worker-0   <none>           <none>
-```
-
-### moac's log file
-
-`moac` is the control plane of Mayastor. There is only one `moac` container running in the cluster. It is generally useful, as it captures all high-level operations related to mayastor volumes in the cluster, so it is a good idea to always inspect this log file.
-
-To obtain moac's log, execute:
-```
-kubectl -n mayastor logs $(kubectl -n mayastor get pod -l app=moac -o jsonpath="{.items[0].metadata.name}") moac
-```
-Sample output:
-```
-Mar 09 10:44:47.560 info [csi]: CSI server listens at /var/lib/csi/sockets/pluginproxy/csi.sock
-Mar 09 10:44:47.565 debug [nats]: Connecting to NATS at "nats" ...
-Mar 09 10:44:47.574 info [node-operator]: Initializing node operator
-Mar 09 10:44:47.602 info [nats]: Connected to NATS message bus at "nats"
-Mar 09 10:44:47.631 info [node-operator]: Created CRD mayastornode
-Mar 09 10:44:47.709 debug [watcher]: mayastornode watcher with 0 objects was started
-Mar 09 10:44:47.710 trace: Initial content of the "mayastornode" cache:
-Mar 09 10:44:47.711 info [pool-operator]: Initializing pool operator
-Mar 09 10:44:47.729 info [pool-operator]: Created CRD mayastorpool
-Mar 09 10:44:47.787 debug [watcher]: mayastorpool watcher with 0 objects was started
-Mar 09 10:44:47.788 trace: Initial content of the "mayastorpool" cache:
-Mar 09 10:44:47.788 info: Warming up will take 7 seconds ...
-Mar 09 10:44:53.335 debug [csi]: probe request (ready=false)
-Mar 09 10:44:54.201 debug [csi]: probe request (ready=false)
-Mar 09 10:44:54.788 info [volume-operator]: Initializing volume operator
-Mar 09 10:44:54.803 info [volume-operator]: Created CRD mayastorvolume
-Mar 09 10:44:56.339 debug [csi]: probe request (ready=false)
-Mar 09 10:44:57.340 debug [csi]: probe request (ready=false)
-Mar 09 10:44:57.861 debug [watcher]: mayastorvolume watcher with 0 objects was started
-Mar 09 10:44:57.861 trace: Initial content of the "mayastorvolume" cache:
-Mar 09 10:44:57.866 info [api]: API server listening on port 4000
-Mar 09 10:44:57.866 info: MOAC is warmed up and ready to 🚀
-Mar 09 10:44:58.206 debug [csi]: probe request (ready=true)
-```
-
-### mayastor's log file
-
-mayastor containers form the data plane of Mayastor. A cluster should schedule as many container instances as there are configured storage nodes. This log file is most useful when troubleshooting I/O errors. Management operations could also fail because of a problem on the storage node.
-
-To obtain mayastor's log, execute:
-```
-kubectl -n mayastor logs mayastor-node6 mayastor
-```
-
-### mayastor CSI agent's log file
-
-When having a problem with \(un\)mounting a volume on an application node, this log file can be useful. Generally, all nodes in the cluster run the mayastor CSI agent, so it's good to know which node is having the problem and inspect the log file only on that node.
-
-
-To obtain the mayastor CSI driver's log, execute:
-```
-kubectl -n mayastor logs mayastor-csi-7pg82 mayastor-csi
-```
-
-### CSI sidecars
-
-These containers implement the CSI spec for Kubernetes and run in the same pods with the moac and mayastor-csi containers. Although they are not part of Mayastor, they can contain useful information when the Mayastor CSI control/node plugin fails to register with the k8s cluster.
-
-To obtain the CSI control containers' logs, execute:
-```
-kubectl -n mayastor logs $(kubectl -n mayastor get pod -l app=moac -o jsonpath="{.items[0].metadata.name}") csi-attacher
-kubectl -n mayastor logs $(kubectl -n mayastor get pod -l app=moac -o jsonpath="{.items[0].metadata.name}") csi-provisioner
-```
-
-To obtain the CSI node container log, execute:
-```
-kubectl -n mayastor logs mayastor-csi-7pg82 csi-driver-registrar
-```
-
-## Coredumps
-
-A coredump is a snapshot of a process' memory, together with auxiliary information \(PID, state of registers, etc.\), saved to a file. It is used for post-mortem analysis and is generated by the operating system in case of a severe error \(i.e. memory corruption\). Using a coredump for problem analysis requires deep knowledge of program internals and is usually done only by developers. However, there is a very useful piece of information that even users can retrieve, and this information alone can often identify the root cause of the problem: the stack \(backtrace\), that is, the thing that the program was doing at the time it crashed. Here we describe how to get it. The steps as shown apply specifically to Ubuntu; other Linux distros might employ variations.
-
-We rely on systemd-coredump, which saves and manages coredumps on the system, the `coredumpctl` utility that is part of the same package, and finally the `gdb` debugger.
-
-To install systemd-coredump and gdb, execute:
-```
-sudo apt-get install -y systemd-coredump gdb lz4
-```
-
-If installed correctly, then the global core pattern will be set so that all generated coredumps are piped to the `systemd-coredump` binary.
-
-Next, verify the coredump configuration:
-```
-cat /proc/sys/kernel/core_pattern
-```
-
-Sample output:
-```
-|/lib/systemd/systemd-coredump %P %u %g %s %t 9223372036854775808 %h
-```
-
-To list coredumps, execute:
-```
-coredumpctl list
-```
-
-Sample output:
-```
-TIME                          PID      UID   GID   SIG   COREFILE   EXE
-Tue 2021-03-09 17:43:46 UTC   206366   0     0     6     present    /bin/mayastor
-```
-
-If there is a new coredump from the mayastor container, the coredump alone won't be that useful. GDB needs access to the binary of the crashed process in order to be able to print at least some information in the backtrace. For that, we need to copy the contents of the container's filesystem to the host.
-
-To get the ID of the mayastor container, execute:
-```
-docker ps | grep mayadata/mayastor
-```
-Sample output:
-```
-b3db4615d5e1   mayadata/mayastor       "sleep 100000"           27 minutes ago   Up 27 minutes   k8s_mayastor_mayastor-n682s_mayastor_51d26ee0-1a96-44c7-85ba-6e50767cd5ce_0
-d72afea5bcc2   mayadata/mayastor-csi   "/bin/mayastor-csi -…"   7 hours ago      Up 7 hours      k8s_mayastor-csi_mayastor-csi-xrmxx_mayastor_d24017f2-5268-44a0-9fcd-84a593d7acb2_0
-```
-
-Next, you need to copy the relevant parts of the container's filesystem. To copy, execute:
-```
-mkdir -p /tmp/rootdir
-docker cp b3db4615d5e1:/bin /tmp/rootdir
-docker cp b3db4615d5e1:/nix /tmp/rootdir
-```
-
-Now we can start GDB. *Don't use* the `coredumpctl` command for starting the debugger.
It invokes GDB with an invalid path to the debugged binary, and stack unwinding then fails for Rust functions. First, we extract the compressed coredump.
-
-To find the location of the compressed coredump, execute:
-```
-coredumpctl info | grep Storage | awk '{ print $2 }'
-```
-
-Sample output:
-```
-/var/lib/systemd/coredump/core.mayastor.0.6a5e550e77ee4e77a19bd67436ce7a98.64074.1615374302000000000000.lz4
-```
-
-To extract the coredump, execute:
-```
-sudo lz4cat /var/lib/systemd/coredump/core.mayastor.0.6a5e550e77ee4e77a19bd67436ce7a98.64074.1615374302000000000000.lz4 >core
-```
-
-Next, open the coredump in GDB:
-```
-gdb -c core /tmp/rootdir/bin/mayastor
-```
-
-Sample output:
-```
-GNU gdb (Ubuntu 9.2-0ubuntu1~20.04) 9.2
-Copyright (C) 2020 Free Software Foundation, Inc.
-License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
-This is free software: you are free to change and redistribute it.
-There is NO WARRANTY, to the extent permitted by law.
-Type "show copying" and "show warranty" for details.
-This GDB was configured as "x86_64-linux-gnu".
-Type "show configuration" for configuration details.
-For bug reporting instructions, please see:
-<http://www.gnu.org/software/gdb/bugs/>.
-Find the GDB manual and other documentation resources online at:
-    <http://www.gnu.org/software/gdb/documentation/>.
-
-For help, type "help".
-Type "apropos word" to search for commands related to "word"...
-[New LWP 13]
-[New LWP 17]
-[New LWP 14]
-[New LWP 16]
-[New LWP 18]
-Core was generated by `/bin/mayastor -l0 -n nats'.
-Program terminated with signal SIGABRT, Aborted.
-#0  0x00007ffdad99fb37 in clock_gettime ()
-[Current thread is 1 (LWP 13)]
-```
-
-Once in GDB, we need to set a sysroot so that GDB knows where to find the binary for the debugged program.
-
-To set the sysroot in GDB, execute:
-```
-set auto-load safe-path /tmp/rootdir
-set sysroot /tmp/rootdir
-```
-Sample output:
-```
-Reading symbols from /tmp/rootdir/nix/store/f1gzfqq10dlha1qw10sqvgil34qh30af-systemd-246.6/lib/libudev.so.1...
-(No debugging symbols found in /tmp/rootdir/nix/store/f1gzfqq10dlha1qw10sqvgil34qh30af-systemd-246.6/lib/libudev.so.1)
-Reading symbols from /tmp/rootdir/nix/store/0kdiav729rrcdwbxws653zxz5kngx8aa-libspdk-dev-21.01/lib/libspdk.so...
-Reading symbols from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libdl.so.2...
-(No debugging symbols found in /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libdl.so.2)
-Reading symbols from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libgcc_s.so.1...
-(No debugging symbols found in /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libgcc_s.so.1)
-Reading symbols from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libpthread.so.0...
-...
-```
-
-After that we can print backtrace\(s\).
- - -To obtain backtraces for all threads in GDB, execute: -``` -thread apply all bt -``` - -Sample output: -``` -Thread 5 (Thread 0x7f78248bb640 (LWP 59)): -#0 0x00007f7825ac0582 in __lll_lock_wait () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libpthread.so.0 -#1 0x00007f7825ab90c1 in pthread_mutex_lock () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libpthread.so.0 -#2 0x00005633ca2e287e in async_io::driver::main_loop () -#3 0x00005633ca2e27d9 in async_io::driver::UNPARKER::{{closure}}::{{closure}} () -#4 0x00005633ca2e27c9 in std::sys_common::backtrace::__rust_begin_short_backtrace () -#5 0x00005633ca2e27b9 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} () -#6 0x00005633ca2e27a9 in as core::ops::function::FnOnce<()>>::call_once () -#7 0x00005633ca2e26b4 in core::ops::function::FnOnce::call_once{{vtable-shim}} () -#8 0x00005633ca723cda in as core::ops::function::FnOnce>::call_once () at /rustc/d1206f950ffb76c76e1b74a19ae33c2b7d949454/library/alloc/src/boxed.rs:1546 -#9 as core::ops::function::FnOnce>::call_once () at /rustc/d1206f950ffb76c76e1b74a19ae33c2b7d949454/library/alloc/src/boxed.rs:1546 -#10 std::sys::unix::thread::Thread::new::thread_start () at /rustc/d1206f950ffb76c76e1b74a19ae33c2b7d949454//library/std/src/sys/unix/thread.rs:71 -#11 0x00007f7825ab6e9e in start_thread () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libpthread.so.0 -#12 0x00007f78259e566f in clone () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libc.so.6 - -Thread 4 (Thread 0x7f7824cbd640 (LWP 57)): -#0 0x00007f78259e598f in epoll_wait () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libc.so.6 -#1 0x00005633ca2e414b in async_io::reactor::ReactorLock::react () -#2 0x00005633ca583c11 in async_io::driver::block_on () -#3 0x00005633ca5810dd in std::sys_common::backtrace::__rust_begin_short_backtrace () -#4 0x00005633ca580e5c in core::ops::function::FnOnce::call_once{{vtable-shim}} () -#5 0x00005633ca723cda in as core::ops::function::FnOnce>::call_once () at /rustc/d1206f950ffb76c76e1b74a19ae33c2b7d949454/library/alloc/src/boxed.rs:1546 -#6 as core::ops::function::FnOnce>::call_once () at /rustc/d1206f950ffb76c76e1b74a19ae33c2b7d949454/library/alloc/src/boxed.rs:1546 -#7 std::sys::unix::thread::Thread::new::thread_start () at /rustc/d1206f950ffb76c76e1b74a19ae33c2b7d949454//library/std/src/sys/unix/thread.rs:71 -#8 0x00007f7825ab6e9e in start_thread () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libpthread.so.0 -#9 0x00007f78259e566f in clone () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libc.so.6 - -Thread 3 (Thread 0x7f78177fe640 (LWP 61)): -#0 0x00007f7825ac08b7 in accept () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libpthread.so.0 -#1 0x00007f7825c930bb in socket_listener () from /tmp/rootdir/nix/store/0kdiav729rrcdwbxws653zxz5kngx8aa-libspdk-dev-21.01/lib/libspdk.so -#2 0x00007f7825ab6e9e in start_thread () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libpthread.so.0 -#3 0x00007f78259e566f in clone () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libc.so.6 - -Thread 2 (Thread 0x7f7817fff640 (LWP 60)): -#0 0x00007f78259e598f in epoll_wait () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libc.so.6 -#1 0x00007f7825c7f174 in eal_intr_thread_main () from 
/tmp/rootdir/nix/store/0kdiav729rrcdwbxws653zxz5kngx8aa-libspdk-dev-21.01/lib/libspdk.so -#2 0x00007f7825ab6e9e in start_thread () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libpthread.so.0 -#3 0x00007f78259e566f in clone () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libc.so.6 - -Thread 1 (Thread 0x7f782559f040 (LWP 56)): -#0 0x00007fff849bcb37 in clock_gettime () -#1 0x00007f78259af1d0 in clock_gettime@GLIBC_2.2.5 () from /tmp/rootdir/nix/store/a6rnjp15qgp8a699dlffqj94hzy1nldg-glibc-2.32/lib/libc.so.6 -#2 0x00005633ca23ebc5 in as tokio::park::Park>::park () -#3 0x00005633ca2c86dd in mayastor::main () -#4 0x00005633ca2000d6 in std::sys_common::backtrace::__rust_begin_short_backtrace () -#5 0x00005633ca2cad5f in main () -```
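When attaching backtraces to an issue report, the whole GDB sequence above can also be run non-interactively; a sketch using the same paths as the preceding examples:

```
gdb -c core /tmp/rootdir/bin/mayastor -batch \
    -ex 'set auto-load safe-path /tmp/rootdir' \
    -ex 'set sysroot /tmp/rootdir' \
    -ex 'thread apply all bt' > mayastor-backtrace.txt
```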