diff --git a/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md b/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md index 81a5af8c482..270b75cbd6f 100644 --- a/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md +++ b/docs-2.0/backup-and-restore/nebula-br/1.what-is-br.md @@ -19,9 +19,8 @@ The BR has the following features. It supports: - Supports full backup, but not incremental backup. - Currently, NebulaGraph Listener and full-text indexes do not support backup. - If you back up data to the local disk, the backup files will be saved in the local path of each server. You can also mount the NFS on your host to restore the backup data to a different host. -- During the backup process, both DDL and DML statements in any specified graph spaces are blocked. We recommend that you do the operation within the low peak period of the business, for example, from 2:00 AM to 5:00 AM. -- The backup graph space can be restored to the original cluster only. Cross clusters restoration is not supported. Make sure the number of hosts in the cluster is not changed. Restoring a specified graph space will delete all other graph spaces in the cluster. - Restoration requires that the number of the storage servers in the original cluster is the same as that of the storage servers in the target cluster and storage server IPs must be the same. Restoring the specified space will clear all the remaining spaces in the cluster. +- During the backup process, both DDL and DML statements in any specified graph spaces are blocked. We recommend that you do the operation within the low peak period of the business, for example, from 2:00 AM to 5:00 AM. - During the restoration process, there is a time when NebulaGraph stops running. - Using BR in a container-based NebulaGraph cluster is not supported. 
diff --git a/docs-2.0/backup-and-restore/nebula-br/2.compile-br.md b/docs-2.0/backup-and-restore/nebula-br/2.compile-br.md index 3b093d4624d..12a2a608a31 100644 --- a/docs-2.0/backup-and-restore/nebula-br/2.compile-br.md +++ b/docs-2.0/backup-and-restore/nebula-br/2.compile-br.md @@ -109,7 +109,7 @@ In **each machine**, follow these steps: Before starting Agent, make sure that the Meta service has been started and Agent has read and write access to the corresponding NebulaGraph cluster directory and backup directory. ``` - sudo nohup ./agent --agent=":8888" --meta=":9559" > nebula_agent.log 2>&1 & + sudo nohup ./agent --agent=":8888" --meta=":9559" --ratelimit= > nebula_agent.log 2>&1 & ``` - `--agent`: The IP address and port number of Agent. diff --git a/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md b/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md index 86e40e8bc8f..20824f60f07 100644 --- a/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md +++ b/docs-2.0/backup-and-restore/nebula-br/4.br-restore-data.md @@ -12,11 +12,7 @@ If you use the BR to back up data, you can use it to restore the data to NebulaG ## Prerequisites -To restore data with the BR, do a check of these: - -- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster. - -- Download [nebula-agent](https://github.com/vesoft-inc/nebula-agent) and start the agent service in each cluster(including metad, storaged, graphd) host. +- [Install BR and Agent](2.compile-br.md) and run Agent on each host in the cluster. - No application is connected to the target NebulaGraph cluster. diff --git a/docs-2.0/nebula-console.md b/docs-2.0/nebula-console.md index 0f9bc46290b..48324a6d6f3 100644 --- a/docs-2.0/nebula-console.md +++ b/docs-2.0/nebula-console.md @@ -4,7 +4,7 @@ NebulaGraph Console is a native CLI client for NebulaGraph. 
It can be used to co ## Compatibility with NebulaGraph -See [github](https://github.com/vesoft-inc/nebula-console/tree/{{console.branch}}). +See [github](https://github.com/vesoft-inc/nebula-console/tree/{{console.branch}}#compatibility-matrix). ## Obtain NebulaGraph Console diff --git a/docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md b/docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md index 89b1997330d..24804b5f2f5 100644 --- a/docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md +++ b/docs-2.0/nebula-exchange/about-exchange/ex-ug-limitations.md @@ -2,11 +2,6 @@ This topic describes some of the limitations of using Exchange 3.x. - -JAR packages are available in two ways: [compile them yourself](../ex-ug-compile.md) or download them from the Maven repository. - -If you are using NebulaGraph 1.x, use [NebulaGraph Exchange 1.x](https://github.com/vesoft-inc/nebula-java/tree/v1.0/tools "Click to go to GitHub"). - ## Environment Exchange 3.x supports the following operating systems: diff --git a/docs-2.0/nebula-exchange/ex-ug-compile.md b/docs-2.0/nebula-exchange/ex-ug-compile.md index a4634396a4d..3760f4e77c7 100644 --- a/docs-2.0/nebula-exchange/ex-ug-compile.md +++ b/docs-2.0/nebula-exchange/ex-ug-compile.md @@ -6,7 +6,7 @@ This topic introduces how to get the JAR file of NebulaGraph Exchange. The JAR file of Exchange Community Edition can be [downloaded](https://github.com/vesoft-inc/nebula-exchange/releases) directly. -To download Exchange Enterprise Edition, [get NebulaGraph Enterprise Edition Package](https://nebula-graph.io/pricing/) first. +To download Exchange Enterprise Edition, [contact us](https://www.nebula-graph.io/contact). 
## Get the JAR file by compiling the source code diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-csv.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-csv.md index 0d6d15e212f..7fcb128afc0 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-csv.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-csv.md @@ -2,8 +2,6 @@ This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local CSV files. -To import a local CSV file to NebulaGraph, see [NebulaGraph Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). - ## Data set This topic takes the [basketballplayer dataset](https://docs-cdn.nebula-graph.com.cn/dataset/dataset.zip) as an example. diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-orc.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-orc.md index bc4d0798647..3b2ad1c3ed0 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-orc.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-orc.md @@ -2,8 +2,6 @@ This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local ORC files. -To import a local ORC file to NebulaGraph, see [NebulaGraph Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). - ## Data set This topic takes the [basketballplayer dataset](https://docs-cdn.nebula-graph.com.cn/dataset/dataset.zip) as an example. diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md index 58cbc3fe979..53641bf4e43 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-parquet.md @@ -2,8 +2,6 @@ This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local Parquet files. 
-To import a local Parquet file to NebulaGraph, see [NebulaGraph Importer](https://github.com/vesoft-inc/nebula-importer "Click to go to GitHub"). - ## Data set This topic takes the [basketballplayer dataset](https://docs-cdn.nebula-graph.com.cn/dataset/dataset.zip) as an example. diff --git a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-sst.md b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-sst.md index d89d6a2b407..e2a9a807529 100644 --- a/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-sst.md +++ b/docs-2.0/nebula-exchange/use-exchange/ex-ug-import-from-sst.md @@ -481,7 +481,7 @@ Connect to the NebulaGraph database using the client tool and import the SST fil - If there is a problem with the import and re-importing is required, re-execute `SUBMIT JOB INGEST;`. -### Step 6: (optional) Validate data +### Step 6: (Optional) Validate data Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, NebulaGraph Studio). For example: @@ -491,6 +491,6 @@ LOOKUP ON player YIELD id(vertex); Users can also run the [`SHOW STATS`](../../3.ngql-guide/7.general-query-statements/6.show/14.show-stats.md) command to view statistics. -### Step 7: (optional) Rebuild indexes in NebulaGraph +### Step 7: (Conditional) Rebuild indexes in NebulaGraph With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see [Index overview](../../3.ngql-guide/14.native-index-statements/README.md). 
diff --git a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md index 0f8cd626734..a88857c4afb 100644 --- a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md +++ b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md @@ -2,7 +2,7 @@ ## Concept -NebulaGraph Operator is a tool to automate the deployment, operation, and maintenance of [NebulaGraph](https://github.com/vesoft-inc/nebula) clusters on [Kubernetes](https://kubernetes.io). Building upon the excellent scalability mechanism of Kubernetes, NebulaGraph introduced its operation and maintenance knowledge into the Kubernetes system, which makes NebulaGraph a real [cloud-native graph database](https://www.nebula-cloud.io/). +NebulaGraph Operator is a tool to automate the deployment, operation, and maintenance of [NebulaGraph](https://github.com/vesoft-inc/nebula) clusters on [Kubernetes](https://kubernetes.io). Building upon the excellent scalability mechanism of Kubernetes, NebulaGraph introduced its operation and maintenance knowledge into the Kubernetes system, which makes NebulaGraph a real cloud-native graph database. ![operator_map](https://docs-cdn.nebula-graph.com.cn/figures/operator_map_2022-09-08_18-55-18.png) @@ -16,11 +16,11 @@ NebulaGraph Operator abstracts the deployment management of NebulaGraph clusters The following features are already available in NebulaGraph Operator: -- **Deploy and uninstall clusters**: NebulaGraph Operator simplifies the process of deploying and uninstalling clusters for users. NebulaGraph Operator allows you to quickly create, update, or delete a NebulaGraph cluster by simply providing the corresponding CR file. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). 
+- **Cluster deployment and deletion**: NebulaGraph Operator simplifies the process of deploying and uninstalling clusters for users. NebulaGraph Operator allows you to quickly create, update, or delete a NebulaGraph cluster by simply providing the corresponding CR file. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). {{ent.ent_begin}} -- **Scale clusters**: NebulaGraph Operator calls NebulaGraph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). +- **Cluster scaling**: NebulaGraph Operator calls NebulaGraph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). - **Backup and Recovery**:NebulaGraph supports data backup and recovery. Users can use NebulaGraph Operator to backup the data of the NebulaGraph cluster to storage services that are compatible with the S3 protocol, and can also restore data to the cluster from the storage service. For details, see [Backup and restore using NebulaGraph Operator](10.backup-restore-using-operator.md). 
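Editor's note: the CR file referenced in the cluster deployment and scaling features above is a YAML manifest of kind `NebulaCluster`. The following is a minimal, hypothetical sketch assembled from the parameter table in the Kubectl deployment doc; the version tag and storage class are placeholder assumptions, and the sample `apps_v1alpha1_nebulacluster.yaml` in the nebula-operator repository remains the authoritative template:

```yaml
# Minimal NebulaCluster CR sketch (illustrative values only; verify against
# the apps_v1alpha1_nebulacluster.yaml sample in the nebula-operator repo).
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula                 # the name of the NebulaGraph cluster to create
spec:
  graphd:
    replicas: 1                # number of Graphd replicas
    image: vesoft/nebula-graphd
    version: v3.4.0            # assumed tag; match your NebulaGraph version
  metad:
    replicas: 1                # number of Metad replicas
    image: vesoft/nebula-metad
    version: v3.4.0
    dataVolumeClaim:
      storageClassName: gp2    # assumed storage class for the Metad data disk
  storaged:
    replicas: 3                # number of Storaged replicas
    image: vesoft/nebula-storaged
    version: v3.4.0
```

Applying such a manifest with `kubectl create -f` deploys the cluster, and editing `replicas` and re-applying it triggers the scaling behavior described above.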
diff --git a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md index 7ef7ed7729d..3fff8124318 100644 --- a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md +++ b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md @@ -20,7 +20,7 @@ Before installing NebulaGraph Operator, you need to install the following softwa - If using a role-based access control policy, you need to enable [RBAC](https://kubernetes.io/docs/admin/authorization/rbac) (optional). - - [CoreDNS](https://coredns.io/) is a flexible and scalable DNS server that is [installed](https://github.com/coredns/deployment/tree/master/kubernetes) for Pods in NebulaGraph clusters. + - [CoreDNS](https://coredns.io/) is a flexible and scalable DNS server that is [installed](https://github.com/coredns/helm) for Pods in NebulaGraph clusters. ## Steps @@ -52,8 +52,7 @@ Before installing NebulaGraph Operator, you need to install the following softwa kubectl create namespace nebula-operator-system ``` - - All the resources of NebulaGraph Operator are deployed in this namespace. - - You can also use a different name. + All the resources of NebulaGraph Operator are deployed in this namespace. 4. Install NebulaGraph Operator. @@ -138,11 +137,11 @@ Part of the above parameters are described as follows: | `imagePullSecrets` | - | The image pull secret in Kubernetes. | | `kubernetesClusterDomain` | `cluster.local` | The cluster domain. | | `controllerManager.create` | `true` | Whether to enable the controller-manager component. | -| `controllerManager.replicas` | `2` | The numeric value of controller-manager replicas. | +| `controllerManager.replicas` | `2` | The number of controller-manager replicas. | | `admissionWebhook.create` | `false` | Whether to enable Admission Webhook. This option is disabled. To enable it, set the value to `true` and you will need to install [cert-manager](https://cert-manager.io/docs/installation/helm/). 
| | `shceduler.create` | `true` | Whether to enable Scheduler. | | `shceduler.schedulerName` | `nebula-scheduler` | The Scheduler name. | -| `shceduler.replicas` | `2` | The numeric value of nebula-scheduler replicas. | +| `shceduler.replicas` | `2` | The number of nebula-scheduler replicas. | You can run `helm install [NAME] [CHART] [flags]` to specify chart configurations when installing a chart. For more information, see [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index cca2ded3c04..0633fcc566d 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -57,17 +57,17 @@ The following example shows how to create a NebulaGraph cluster by creating a cl | Parameter | Default value | Description | | :---- | :--- | :--- | | `metadata.name` | - | The name of the created NebulaGraph cluster. | - | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | + | `spec.graphd.replicas` | `1` | The number of replicas of the Graphd service. | | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | | `spec.graphd.service` | - | The Service configurations for the Graphd service. | | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | - | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | + | `spec.metad.replicas` | `1` | The number of replicas of the Metad service. 
| | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. | | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| - | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | + | `spec.storaged.replicas` | `3` | The number of replicas of the Storaged service. | | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| @@ -209,7 +209,7 @@ The following shows how to scale out a NebulaGraph cluster by changing the numbe ### Scale in clusters -The principle of scaling in a cluster is the same as scaling out a cluster. You scale in a cluster if the numeric value of the `replicas` in `apps_v1alpha1_nebulacluster.yaml` is changed smaller than the current number. For more information, see the **Scale out clusters** section above. +The principle of scaling in a cluster is the same as scaling out a cluster. You scale in a cluster by setting the number of `replicas` in `apps_v1alpha1_nebulacluster.yaml` to a value smaller than the current number. For more information, see the **Scale out clusters** section above. !!! 
caution diff --git a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md index b3ce3015bf9..ec1fef68943 100644 --- a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md +++ b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md @@ -1,4 +1,4 @@ -# Connect to NebulaGraph databases with Nebular Operator +# Connect to NebulaGraph databases with Operator After creating a NebulaGraph cluster with NebulaGraph Operator on Kubernetes, you can connect to NebulaGraph databases from within the cluster and outside the cluster. @@ -6,18 +6,12 @@ After creating a NebulaGraph cluster with NebulaGraph Operator on Kubernetes, yo Create a NebulaGraph cluster with NebulaGraph Operator on Kubernetes. For more information, see [Deploy NebulaGraph clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). -## Connect to NebulaGraph databases from outside a NebulaGraph cluster via `NodePort` - -You can create a `NodePort` type Service to access internal cluster services from outside the cluster using any node IP and the exposed node port. You can also utilize load balancing services provided by cloud vendors (such as Azure, AWS, etc.) by setting the Service type to `LoadBalancer`. This allows external access to internal cluster services through the public IP and port of the load balancer provided by the cloud vendor. - -The Service of type `NodePort` forwards the front-end requests via the label selector `spec.selector` to Graphd pods with labels `app.kubernetes.io/cluster: ` and `app.kubernetes.io/component: graphd`. 
- -After creating a NebulaGraph cluster based on the [example template](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml), where `spec.graphd.service.type=NodePort`, the NebulaGraph Operator will automatically create a NodePort type Service named `-graphd-svc` in the same namespace. You can directly connect to the NebulaGraph database through any node IP and the exposed node port (see step 4 below). You can also create a custom Service according to your needs. - -Steps: +## Connect to NebulaGraph databases from within a NebulaGraph cluster -1. Create a YAML file named `graphd-nodeport-service.yaml`. The file contents are as follows: +You can also create a `ClusterIP` type Service to provide an access point to the NebulaGraph database for other Pods within the cluster. By using the Service's IP and the Graph service's port number (9669), you can connect to the NebulaGraph database. For more information, see [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/). +1. Create a file named `graphd-clusterip-service.yaml`. The file contents are as follows: + ```yaml apiVersion: v1 kind: Service @@ -27,7 +21,7 @@ Steps: app.kubernetes.io/component: graphd app.kubernetes.io/managed-by: nebula-operator app.kubernetes.io/name: nebula-graph - name: nebula-graphd-svc-nodeport + name: nebula-graphd-svc namespace: default spec: externalTrafficPolicy: Local @@ -45,63 +39,73 @@ Steps: app.kubernetes.io/component: graphd app.kubernetes.io/managed-by: nebula-operator app.kubernetes.io/name: nebula-graph - type: NodePort # Set the type to NodePort. + type: ClusterIP # Set the type to ClusterIP. ``` - + - NebulaGraph uses port `9669` by default. `19669` is the HTTP port of the Graph service in a NebulaGraph cluster. - - The value of `targetPort` is the port mapped to the database Pods, which can be customized. + - `targetPort` is the port mapped to the database Pods, which can be customized. 
+ +2. Create a ClusterIP Service. -2. Run the following command to create a NodePort Service. + ```bash + kubectl create -f graphd-clusterip-service.yaml + ``` +3. Check the IP of the Service: + ```bash - kubectl create -f graphd-nodeport-service.yaml + $ kubectl get service -l app.kubernetes.io/cluster= # is the name of your NebulaGraph cluster. + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + nebula-graphd-svc ClusterIP 10.98.213.34 9669/TCP,19669/TCP,19670/TCP 23h + ... ``` -3. Check the port mapped on all of your cluster nodes. +4. Run the following command to connect to the NebulaGraph database using the IP of the `-graphd-svc` Service above: ```bash - kubectl get services -l app.kubernetes.io/cluster= # is the name of your NebulaGraph cluster. + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p ``` - Output: + For example: ```bash - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - nebula-graphd-svc-nodeport NodePort 10.107.153.129 9669:32236/TCP,19669:31674/TCP,19670:31057/TCP 24h - ... - ``` - - As you see, the mapped port of NebulaGraph databases on all cluster nodes is `32236`. + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft -4. Connect to NebulaGraph databases with your node IP and the node port above. - - ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p - ``` + - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. + - ``: The custom Pod name. + - `-addr`: The IP of the `ClusterIP` Service, used to connect to Graphd services. + - `-port`: The port to connect to Graphd services, the default port of which is `9669`. + - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. + - `-p`: The password of your NebulaGraph account. 
Before enabling authentication, you can use any characters as the password. - For example: + A successful connection to the database is indicated if the following is returned: ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 192.168.8.24 -port 32236 -u root -p vesoft If you don't see a command prompt, try pressing enter. (root@nebula) [(none)]> ``` - - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. - - ``: The custom Pod name. The above example uses `nebula-console`. - - `-addr`: The IP of any node in a NebulaGraph cluster. The above example uses `192.168.8.24`. - - `-port`: The mapped port of NebulaGraph databases on all cluster nodes. The above example uses `32236`. - - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. - - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. - +You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. -## Connect to NebulaGraph databases from within a NebulaGraph cluster +```bash +kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p +``` -You can also create a `ClusterIP` type Service to provide an access point to the NebulaGraph database for other Pods within the cluster. By using the Service's IP and the Graph service's port number (9669), you can connect to the NebulaGraph database. For more information, see [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/). +`service_port` is the port to connect to Graphd services, the default port of which is `9669`. 
+ +## Connect to NebulaGraph databases from outside a NebulaGraph cluster via `NodePort` + +You can create a `NodePort` type Service to access internal cluster services from outside the cluster using any node IP and the exposed node port. You can also utilize load balancing services provided by cloud vendors (such as Azure, AWS, etc.) by setting the Service type to `LoadBalancer`. This allows external access to internal cluster services through the public IP and port of the load balancer provided by the cloud vendor. + +The Service of type `NodePort` forwards the front-end requests via the label selector `spec.selector` to Graphd pods with labels `app.kubernetes.io/cluster: ` and `app.kubernetes.io/component: graphd`. + +After creating a NebulaGraph cluster based on the [example template](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml), where `spec.graphd.service.type=NodePort`, the NebulaGraph Operator will automatically create a NodePort type Service named `-graphd-svc` in the same namespace. You can directly connect to the NebulaGraph database through any node IP and the exposed node port (see step 4 below). You can also create a custom Service according to your needs. + +Steps: + +1. Create a YAML file named `graphd-nodeport-service.yaml`. The file contents are as follows: -1. Create a file named `graphd-clusterip-service.yaml`. 
The file contents are as follows: - ```yaml apiVersion: v1 kind: Service @@ -111,7 +115,7 @@ You can also create a `ClusterIP` type Service to provide an access point to the app.kubernetes.io/component: graphd app.kubernetes.io/managed-by: nebula-operator app.kubernetes.io/name: nebula-graph - name: nebula-graphd-svc + name: nebula-graphd-svc-nodeport namespace: default spec: externalTrafficPolicy: Local @@ -129,60 +133,55 @@ You can also create a `ClusterIP` type Service to provide an access point to the app.kubernetes.io/component: graphd app.kubernetes.io/managed-by: nebula-operator app.kubernetes.io/name: nebula-graph - type: ClusterIP # Set the type to ClusterIP. + type: NodePort # Set the type to NodePort. ``` - + - NebulaGraph uses port `9669` by default. `19669` is the HTTP port of the Graph service in a NebulaGraph cluster. - - `targetPort` is the port mapped to the database Pods, which can be customized. - -2. Create a ClusterIP Service. + - The value of `targetPort` is the port mapped to the database Pods, which can be customized. - ```bash - kubectl create -f graphd-clusterip-service.yaml - ``` +2. Run the following command to create a NodePort Service. -3. Check the IP of the Service: - ```bash - $ kubectl get service -l app.kubernetes.io/cluster= # is the name of your NebulaGraph cluster. - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - nebula-graphd-svc ClusterIP 10.98.213.34 9669/TCP,19669/TCP,19670/TCP 23h - ... + kubectl create -f graphd-nodeport-service.yaml ``` -4. Run the following command to connect to the NebulaGraph database using the IP of the `-graphd-svc` Service above: +3. Check the port mapped on all of your cluster nodes. ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p + kubectl get services -l app.kubernetes.io/cluster= # is the name of your NebulaGraph cluster. 
``` - For example: + Output: ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + nebula-graphd-svc-nodeport NodePort 10.107.153.129 9669:32236/TCP,19669:31674/TCP,19670:31057/TCP 24h + ... + ``` - - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. - - ``: The custom Pod name. - - `-addr`: The IP of the `ClusterIP` Service, used to connect to Graphd services. - - `-port`: The port to connect to Graphd services, the default port of which is `9669`. - - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. - - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. + As you see, the mapped port of NebulaGraph databases on all cluster nodes is `32236`. - A successful connection to the database is indicated if the following is returned: +4. Connect to NebulaGraph databases with your node IP and the node port above. + + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -port -u -p + ``` + + For example: ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 192.168.8.24 -port 32236 -u root -p vesoft If you don't see a command prompt, try pressing enter. (root@nebula) [(none)]> ``` -You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. - -```bash -kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p -``` - -`service_port` is the port to connect to Graphd services, the default port of which is `9669`. 
+ - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. + - ``: The custom Pod name. The above example uses `nebula-console`. + - `-addr`: The IP of any node in a NebulaGraph cluster. The above example uses `192.168.8.24`. + - `-port`: The mapped port of NebulaGraph databases on all cluster nodes. The above example uses `32236`. + - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. + - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. ## Connect to NebulaGraph databases from outside a NebulaGraph cluster via Ingress diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md index 177c9578243..5862a9cfa7c 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md @@ -12,7 +12,7 @@ You have created a cluster. For how to create a cluster with Kubectl, see [Creat The following example uses a cluster named `nebula` and the cluster's configuration file named `nebula_cluster.yaml` to show how to set `enablePVReclaim`: -1. Run the following command to access the edit page of the `nebula` cluster. +1. Run the following command to edit the `nebula` cluster's configuration file. ```bash kubectl edit nebulaclusters.apps.nebula-graph.io nebula diff --git a/docs-2.0/nebula-studio/about-studio/st-ug-limitations.md b/docs-2.0/nebula-studio/about-studio/st-ug-limitations.md index ac4ab0225f9..ebec57d204a 100644 --- a/docs-2.0/nebula-studio/about-studio/st-ug-limitations.md +++ b/docs-2.0/nebula-studio/about-studio/st-ug-limitations.md @@ -46,4 +46,4 @@ For more information about the preceding statements, see [User management](../.. 
 ## Browser

-We recommend that you use the latest version of Chrome to get access to Studio.
+We recommend that you use the latest version of Chrome to get access to Studio. Otherwise, some features may not work properly.
diff --git a/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md b/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
index d57075d818a..182008e342f 100644
--- a/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
+++ b/docs-2.0/nebula-studio/about-studio/st-ug-what-is-graph-studio.md
@@ -6,9 +6,9 @@ NebulaGraph Studio (Studio in short) is a browser-based visualization tool to ma

 You can also try some functions [online](https://playground.nebula-graph.io/explorer) in Studio.

-## Released versions
+## Deployment

-In addition to deploying Studio with RPM-based, DEB-based, or Tar-based package, or with Docker. You can also deploy Studio with Helm in the Kubernetes cluster. For more information, see [Deploy Studio](../deploy-connect/st-ug-deploy.md).
+In addition to deploying Studio with RPM-based, DEB-based, or Tar-based packages, or with Docker, you can also deploy Studio with Helm in the Kubernetes cluster. For more information, see [Deploy Studio](../deploy-connect/st-ug-deploy.md).

-If you want to reset NebulaGraph, you can log out and reconfigure the database.
+If you want to reconnect to NebulaGraph, you can log out and reconfigure the database.
 Click the user profile picture in the upper right corner, and choose **Log out**.
\ No newline at end of file
diff --git a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-edge-type.md b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-edge-type.md
index fddbe5cc982..505a755550c 100644
--- a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-edge-type.md
+++ b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-edge-type.md
@@ -1,4 +1,4 @@
-# Operate edge types
+# Manage edge types

 After a graph space is created in NebulaGraph, you can create edge types.
 With Studio, you can choose to use the **Console** page or the **Schema** page to create, retrieve, update, or delete edge types. This topic introduces how to use the **Schema** page to operate edge types in a graph space only.
diff --git a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-index.md b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-index.md
index a4ca58a9c60..b1313f3b2e4 100644
--- a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-index.md
+++ b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-index.md
@@ -1,4 +1,4 @@
-# Operate Indexes
+# Manage indexes

 You can create an index for a Tag and/or an Edge type. An index lets traversal start from vertices or edges with the same property and it can make a query more efficient. With Studio, you can use the **Console** page or the **Schema** page to create, retrieve, and delete indexes. This topic introduces how to use the **Schema** page to operate an index only.
diff --git a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-space.md b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-space.md
index 17ec57db191..9920237cc89 100644
--- a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-space.md
+++ b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-space.md
@@ -1,4 +1,4 @@
-# Operate graph spaces
+# Manage graph spaces

 When Studio is connected to NebulaGraph, you can create or delete a graph space. You can use the **Console** page or the **Schema** page to do these operations. This article only introduces how to use the **Schema** page to operate graph spaces in NebulaGraph.
@@ -17,7 +17,7 @@ To operate a graph space on the **Schema** page of Studio, you must do a check o

 2. In the **Graph Space List** page, click **Create Space**, do these settings:

-   - **Name**: Specify a name to the new graph space. In this example, `basketballplayer` is used. The name must be distinct in the database.
+   - **Name**: Specify a name to the new graph space. In this example, `basketballplayer` is used. The name must be unique in the database.

   - **Vid Type**: The data types of VIDs are restricted to `FIXED_STRING(<N>)` or `INT64`. A graph space can only select one VID type. In this example, `FIXED_STRING(32)` is used. For more information, see [VID](../../1.introduction/3.vid.md).
diff --git a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-tag.md b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-tag.md
index ac65573ec0a..3b865f3bc0a 100644
--- a/docs-2.0/nebula-studio/manage-schema/st-ug-crud-tag.md
+++ b/docs-2.0/nebula-studio/manage-schema/st-ug-crud-tag.md
@@ -1,4 +1,4 @@
-# Operate tags
+# Manage tags

 After a graph space is created in NebulaGraph, you can create tags. With Studio, you can use the **Console** page or the **Schema** page to create, retrieve, update, or delete tags. This topic introduces how to use the **Schema** page to operate tags in a graph space only.
diff --git a/mkdocs.yml b/mkdocs.yml
index 021e1193377..a299497c75c 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -577,11 +577,11 @@ nav:
           - Import data: nebula-studio/quick-start/st-ug-import-data.md
           - Use Console: nebula-studio/quick-start/st-ug-console.md
           - Use Schema:
-            - Operate graph spaces: nebula-studio/manage-schema/st-ug-crud-space.md
-            - Operate Tags: nebula-studio/manage-schema/st-ug-crud-tag.md
-            - Operate Edge types: nebula-studio/manage-schema/st-ug-crud-edge-type.md
-            - Operate Indexes: nebula-studio/manage-schema/st-ug-crud-index.md
-            - View Schema: nebula-studio/manage-schema/st-ug-view-schema.md
+            - Manage graph spaces: nebula-studio/manage-schema/st-ug-crud-space.md
+            - Manage tags: nebula-studio/manage-schema/st-ug-crud-tag.md
+            - Manage edge types: nebula-studio/manage-schema/st-ug-crud-edge-type.md
+            - Manage indexes: nebula-studio/manage-schema/st-ug-crud-index.md
+            - View schema: nebula-studio/manage-schema/st-ug-view-schema.md
           - Schema drafting: nebula-studio/quick-start/draft.md
         - Troubleshooting:
           - Database connection error: nebula-studio/troubleshooting/st-ug-config-server-errors.md
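
---

Reviewer note: the NodePort connection steps rewritten in this patch have the reader pick the mapped node port (`32236` in the example) out of the `PORT(S)` column by eye. As a hypothetical convenience, not part of this patch, the same value can be extracted with plain POSIX shell string operations; the `ports` variable below is an assumption seeded with the example output above, and in a live cluster the value could instead come from something like `kubectl get service nebula-graphd-svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}'`.

```shell
# PORT(S) value as printed by `kubectl get service` in the example above
# (assumed here; fetch it from the cluster in real use).
ports="9669:32236/TCP,19669:31674/TCP,19670:31057/TCP"

# Drop everything up to and including the Graphd port prefix "9669:" ...
node_port="${ports#*9669:}"
# ... then drop the first "/" and everything after it, leaving the node port.
node_port="${node_port%%/*}"

echo "${node_port}"
```

This keeps step 4's `-port` argument scriptable instead of hand-copied, which matters if the NodePort is reallocated when the Service is recreated.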