
Commit

comment fixes (#2252)
abby-cyber authored Sep 7, 2023
1 parent 0110543 commit 72a2dbb
Showing 23 changed files with 111 additions and 129 deletions.
3 changes: 1 addition & 2 deletions docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md
@@ -19,9 +19,8 @@ The BR has the following features. It supports:
- Supports full backup, but not incremental backup.
- Currently, NebulaGraph Listener and full-text indexes do not support backup.
- If you back up data to the local disk, the backup files will be saved in the local path of each server. You can also mount the NFS on your host to restore the backup data to a different host.
- During the backup process, both DDL and DML statements in the specified graph spaces are blocked. We recommend that you perform the backup during off-peak hours, for example, from 2:00 AM to 5:00 AM.
- A backed-up graph space can be restored to the original cluster only; cross-cluster restoration is not supported. Make sure the number of hosts in the cluster is not changed. Restoring a specified graph space deletes all other graph spaces in the cluster.
- Restoration requires that the target cluster have the same number of Storage servers as the original cluster and that the Storage server IP addresses be identical. Restoring a specified graph space clears all other graph spaces in the cluster.
- During the backup process, both DDL and DML statements in the specified graph spaces are blocked. We recommend that you perform the backup during off-peak hours, for example, from 2:00 AM to 5:00 AM.
- During the restoration process, NebulaGraph stops running for a period of time.
- Using BR in a container-based NebulaGraph cluster is not supported.
<!---When backing up or restoring the data deployed in Docker, network configuration should be done, such as IP and port mapping. -->
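As a rough sketch of the NFS option mentioned above (not part of this commit), mounting the same NFS export on every host lets a backup written to a "local" path be restored from a different machine. The server address and both paths below are placeholders.

```
# Sketch: mount a shared NFS export at the path used for local backups.
# <nfs_server_ip> and both paths are placeholder values.
sudo mkdir -p /mnt/nebula-backup
sudo mount -t nfs <nfs_server_ip>:/export/nebula-backup /mnt/nebula-backup

# Confirm the mount before pointing a backup at this path.
df -h /mnt/nebula-backup
```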
2 changes: 1 addition & 1 deletion docs-2.0/backup-and-restore/nebula-br/2.compile-br.md
@@ -109,7 +109,7 @@ In **each machine**, follow these steps:
Before starting Agent, make sure that the Meta service has been started and Agent has read and write access to the corresponding NebulaGraph cluster directory and backup directory.
```
sudo nohup ./agent --agent="<agent_node_ip>:8888" --meta="<metad_node_ip>:9559" > nebula_agent.log 2>&1 &
sudo nohup ./agent --agent="<agent_node_ip>:8888" --meta="<metad_node_ip>:9559" --ratelimit=<file_size_bt> > nebula_agent.log 2>&1 &
```
- `--agent`: The IP address and port number of Agent.
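A filled-in variant of the Agent command may help; the addresses and the rate-limit value below are illustrative placeholders, not values from this commit.

```
# Sketch: start Agent on one host, assuming metad listens on 192.168.8.100:9559
# and Agent advertises 192.168.8.129:8888. The ratelimit value is illustrative.
sudo nohup ./agent --agent="192.168.8.129:8888" \
                   --meta="192.168.8.100:9559" \
                   --ratelimit=1048576 > nebula_agent.log 2>&1 &

# Check that the process is up and the log is clean.
ps -ef | grep "./agent"
tail -n 20 nebula_agent.log
```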
6 changes: 1 addition & 5 deletions docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md
@@ -12,11 +12,7 @@ If you use the BR to back up data, you can use it to restore the data to NebulaG

## Prerequisites

To restore data with the BR, check the following:

- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster.

- Download [nebula-agent](https://github.com/vesoft-inc/nebula-agent) and start the agent service on each host in the cluster (including metad, storaged, and graphd hosts).
- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster.

- No application is connected to the target NebulaGraph cluster.

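Before running a restore, a quick check like the following can confirm the prerequisites on each host. This is a sketch that assumes the Agent setup from the earlier section and its example port 8888.

```
# Sketch: run on every metad/storaged/graphd host before restoring.
pgrep -af agent || echo "Agent is not running on $(hostname)"
ss -tlnp | grep 8888 || echo "Nothing is listening on Agent port 8888"
```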
2 changes: 1 addition & 1 deletion docs-2.0/nebula-console.md
@@ -4,7 +4,7 @@ NebulaGraph Console is a native CLI client for NebulaGraph. It can be used to co

## Compatibility with NebulaGraph

See [github](https://github.com/vesoft-inc/nebula-console/tree/{{console.branch}}).
See [github](https://github.com/vesoft-inc/nebula-console/tree/{{console.branch}}#compatibility-matrix).

## Obtain NebulaGraph Console

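For completeness, a hedged sketch of obtaining a release binary and connecting to the database; the version string and the graphd address are placeholders, and the asset name may differ per release.

```
# Sketch: download a Linux amd64 release binary and connect to graphd.
wget https://github.com/vesoft-inc/nebula-console/releases/download/v<version>/nebula-console-linux-amd64-v<version>
mv nebula-console-linux-amd64-v<version> nebula-console && chmod +x nebula-console

# -addr/-port point at a graphd process; -u/-p are the NebulaGraph account.
./nebula-console -addr 192.168.8.100 -port 9669 -u root -p nebula
```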
5 changes: 0 additions & 5 deletions docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md
@@ -2,11 +2,6 @@

This topic describes some of the limitations of using Exchange 3.x.


JAR packages are available in two ways: [compile them yourself](../ex-ug-compile.md) or download them from the Maven repository.

If you are using NebulaGraph 1.x, use [NebulaGraph Exchange 1.x](https://github.com/vesoft-inc/nebula-java/tree/v1.0/tools "Click to go to GitHub").

## Environment

Exchange 3.x supports the following operating systems:
2 changes: 1 addition & 1 deletion docs-2.0/nebula-exchange/ex-ug-compile.md
@@ -6,7 +6,7 @@ This topic introduces how to get the JAR file of NebulaGraph Exchange.

The JAR file of Exchange Community Edition can be [downloaded](https://github.com/vesoft-inc/nebula-exchange/releases) directly.

To download Exchange Enterprise Edition, [get NebulaGraph Enterprise Edition Package](https://nebula-graph.io/pricing/) first.
To download Exchange Enterprise Edition, [contact us](https://www.nebula-graph.io/contact).

## Get the JAR file by compiling the source code

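The build steps themselves are collapsed in this diff; as a rough sketch, the Community Edition JAR is built from source with Maven along these lines. The branch placeholder and the skip flags are assumptions, not taken from this commit.

```
# Sketch: clone the Exchange source and package the JAR with Maven.
# <branch> is a placeholder for the release branch you need.
git clone -b <branch> https://github.com/vesoft-inc/nebula-exchange.git
cd nebula-exchange
mvn clean package -Dmaven.test.skip=true -Dgpg.skip -Dmaven.javadoc.skip=true

# Locate the packaged JAR.
find . -name "nebula-exchange*.jar"
```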
@@ -2,8 +2,6 @@

This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local CSV files.

To import a local CSV file to NebulaGraph, see [NebulaGraph Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub").

## Data set

This topic takes the [basketballplayer dataset](https://docs-cdn.nebula-graph.com.cn/dataset/dataset.zip) as an example.
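Once the CSV source and the NebulaGraph sink are described in a configuration file (those steps are collapsed here), the import is typically launched through spark-submit. The class name, JAR name, and config path below are assumptions for illustration only.

```
# Sketch: submit the Exchange job that reads the CSV source defined in
# csv_application.conf. The JAR and config names are placeholders.
spark-submit --master "local" \
    --class com.vesoft.nebula.exchange.Exchange \
    nebula-exchange.jar \
    -c csv_application.conf
```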
@@ -2,8 +2,6 @@

This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local ORC files.

To import a local ORC file to NebulaGraph, see [NebulaGraph Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub").

## Data set

This topic takes the [basketballplayer dataset](https://docs-cdn.nebula-graph.com.cn/dataset/dataset.zip) as an example.
@@ -2,8 +2,6 @@

This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local Parquet files.

To import a local Parquet file to NebulaGraph, see [NebulaGraph Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub").

## Data set

This topic takes the [basketballplayer dataset](https://docs-cdn.nebula-graph.com.cn/dataset/dataset.zip) as an example.
@@ -481,7 +481,7 @@ Connect to the NebulaGraph database using the client tool and import the SST fil

- If there is a problem with the import and re-importing is required, re-execute `SUBMIT JOB INGEST;`.

### Step 6: (optional) Validate data
### Step 6: (Optional) Validate data

Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, NebulaGraph Studio). For example:

Expand All @@ -491,6 +491,6 @@ LOOKUP ON player YIELD id(vertex);

Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics.

### Step 7: (optional) Rebuild indexes in NebulaGraph
### Step 7: (Conditional) Rebuild indexes in NebulaGraph

With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md).
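As a rough illustration of Step 7, assuming the basketballplayer schema used in the validation step, an index on the player tag could be created and rebuilt from the shell via nebula-console; the index name, property length, address, and credentials are placeholders.

```
# Sketch: create a tag index and rebuild it so existing data is indexed.
./nebula-console -addr 192.168.8.100 -port 9669 -u root -p nebula \
  -e 'USE basketballplayer; CREATE TAG INDEX IF NOT EXISTS player_index ON player(name(20)); REBUILD TAG INDEX player_index; SHOW JOBS;'
```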
6 changes: 3 additions & 3 deletions docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md
@@ -2,7 +2,7 @@

## Concept

NebulaGraph Operator is a tool to automate the deployment, operation, and maintenance of [NebulaGraph](https://github.com/vesoft-inc/nebula) clusters on [Kubernetes](https://kubernetes.io). Building upon the excellent scalability mechanism of Kubernetes, NebulaGraph introduced its operation and maintenance knowledge into the Kubernetes system, which makes NebulaGraph a real [cloud-native graph database](https://www.nebula-cloud.io/).
NebulaGraph Operator is a tool to automate the deployment, operation, and maintenance of [NebulaGraph](https://github.com/vesoft-inc/nebula) clusters on [Kubernetes](https://kubernetes.io). Building upon the excellent scalability mechanism of Kubernetes, NebulaGraph introduced its operation and maintenance knowledge into the Kubernetes system, which makes NebulaGraph a real cloud-native graph database.

![operator_map](https://docs-cdn.nebula-graph.com.cn/figures/operator_map_2022-09-08_18-55-18.png)

@@ -16,11 +16,11 @@ NebulaGraph Operator abstracts the deployment management of NebulaGraph clusters

The following features are already available in NebulaGraph Operator:

- **Deploy and uninstall clusters**: NebulaGraph Operator simplifies the process of deploying and uninstalling clusters for users. NebulaGraph Operator allows you to quickly create, update, or delete a NebulaGraph cluster by simply providing the corresponding CR file. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).
- **Cluster deployment and deletion**: NebulaGraph Operator simplifies the process of deploying and uninstalling clusters for users. NebulaGraph Operator allows you to quickly create, update, or delete a NebulaGraph cluster by simply providing the corresponding CR file. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).

{{ent.ent_begin}}

- **Scale clusters**: NebulaGraph Operator calls NebulaGraph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).
- **Cluster scaling**: NebulaGraph Operator calls NebulaGraph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).

- **Backup and Recovery**: NebulaGraph supports data backup and recovery. Users can use NebulaGraph Operator to back up the data of the NebulaGraph cluster to storage services compatible with the S3 protocol, and can also restore data to the cluster from the storage service. For details, see [Backup and restore using NebulaGraph Operator](10.backup-restore-using-operator.md).

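A compressed sketch of the deploy-and-uninstall flow described above, assuming a CR manifest named nebula_cluster.yaml and a nebulaclusters custom resource; both names are assumptions here, and the linked deployment topics remain the authoritative steps.

```
# Sketch: apply a NebulaCluster CR, watch the cluster come up, then remove it.
kubectl create -f nebula_cluster.yaml

kubectl get nebulaclusters   # resource plural is an assumption; adjust if it differs
kubectl get pods             # graphd/metad/storaged pods should appear

# Deleting the same manifest uninstalls the cluster.
kubectl delete -f nebula_cluster.yaml
```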
9 changes: 4 additions & 5 deletions docs-2.0/nebula-operator/2.deploy-nebula-operator.md
@@ -20,7 +20,7 @@ Before installing NebulaGraph Operator, you need to install the following softwa

- If using a role-based access control policy, you need to enable [RBAC](https://kubernetes.io/docs/admin/authorization/rbac) (optional).

- [CoreDNS](https://coredns.io/) is a flexible and scalable DNS server that is [installed](https://github.com/coredns/deployment/tree/master/kubernetes) for Pods in NebulaGraph clusters.
- [CoreDNS](https://coredns.io/) is a flexible and scalable DNS server that is [installed](https://github.com/coredns/helm) for Pods in NebulaGraph clusters.

## Steps

@@ -52,8 +52,7 @@ Before installing NebulaGraph Operator, you need to install the following softwa
kubectl create namespace nebula-operator-system
```

- All the resources of NebulaGraph Operator are deployed in this namespace.
- You can also use a different name.
All the resources of NebulaGraph Operator are deployed in this namespace.

4. Install NebulaGraph Operator.

@@ -138,11 +137,11 @@ Part of the above parameters are described as follows:
| `imagePullSecrets` | - | The image pull secret in Kubernetes. |
| `kubernetesClusterDomain` | `cluster.local` | The cluster domain. |
| `controllerManager.create` | `true` | Whether to enable the controller-manager component. |
| `controllerManager.replicas` | `2` | The numeric value of controller-manager replicas. |
| `controllerManager.replicas` | `2` | The number of controller-manager replicas. |
| `admissionWebhook.create` | `false` | Whether to enable Admission Webhook. This option is disabled. To enable it, set the value to `true` and you will need to install [cert-manager](https://cert-manager.io/docs/installation/helm/). |
| `shceduler.create` | `true` | Whether to enable Scheduler. |
| `shceduler.schedulerName` | `nebula-scheduler` | The Scheduler name. |
| `shceduler.replicas` | `2` | The numeric value of nebula-scheduler replicas. |
| `shceduler.replicas` | `2` | The number of nebula-scheduler replicas. |

You can run `helm install [NAME] [CHART] [flags]` to specify chart configurations when installing a chart. For more information, see [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing).

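To make the `helm install [NAME] [CHART] [flags]` note concrete, here is a hedged sketch that overrides two of the values listed above; the chart repository URL and chart name are assumptions based on the public nebula-operator project.

```
# Sketch: install the operator chart into the namespace created earlier,
# overriding two documented values. Repo URL and chart name are assumptions.
helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts
helm repo update

helm install nebula-operator nebula-operator/nebula-operator \
    --namespace nebula-operator-system \
    --set controllerManager.replicas=2 \
    --set admissionWebhook.create=false
```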
@@ -57,17 +57,17 @@ The following example shows how to create a NebulaGraph cluster by creating a cl
| Parameter | Default value | Description |
| :---- | :--- | :--- |
| `metadata.name` | - | The name of the created NebulaGraph cluster. |
| `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. |
| `spec.graphd.replicas` | `1` | The number of replicas of the Graphd service. |
| `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. |
| `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. |
| `spec.graphd.service` | - | The Service configurations for the Graphd service. |
| `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. |
| `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. |
| `spec.metad.replicas` | `1` | The number of replicas of the Metad service. |
| `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. |
| `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. |
| `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. |
| `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.|
| `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. |
| `spec.storaged.replicas` | `3` | The number of replicas of the Storaged service. |
| `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. |
| `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. |
| `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.|
@@ -209,7 +209,7 @@ The following shows how to scale out a NebulaGraph cluster by changing the numbe

### Scale in clusters

The principle of scaling in a cluster is the same as scaling out a cluster. You scale in a cluster if the numeric value of the `replicas` in `apps_v1alpha1_nebulacluster.yaml` is changed smaller than the current number. For more information, see the **Scale out clusters** section above.
The principle of scaling in a cluster is the same as scaling out a cluster. You scale in a cluster by setting the `replicas` value in `apps_v1alpha1_nebulacluster.yaml` to a number smaller than the current one. For more information, see the **Scale out clusters** section above.

!!! caution

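The scaling described above comes down to changing `replicas` and re-applying the manifest; the sketch below follows that route and, as an alternative not spelled out in this doc, patches the CR in place. The resource plural and the cluster name are assumptions.

```
# Sketch: scale storaged by changing spec.storaged.replicas.
# Route 1 (as described above): edit the manifest, then re-apply it.
#   e.g. set spec.storaged.replicas from 3 to 5 in apps_v1alpha1_nebulacluster.yaml
kubectl apply -f apps_v1alpha1_nebulacluster.yaml

# Route 2 (alternative): patch the custom resource in place.
kubectl patch nebulaclusters nebula --type merge \
    -p '{"spec": {"storaged": {"replicas": 5}}}'

# Watch the new storaged pods come up.
kubectl get pods -w
```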