diff --git a/docs-2.0-en/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0-en/nebula-operator/1.introduction-to-nebula-operator.md
index aa737a098d8..5b177c13c3c 100644
--- a/docs-2.0-en/nebula-operator/1.introduction-to-nebula-operator.md
+++ b/docs-2.0-en/nebula-operator/1.introduction-to-nebula-operator.md
@@ -32,7 +32,7 @@ NebulaGraph Operator does not support the v1.x version of NebulaGraph. NebulaGra
 
 | NebulaGraph   | NebulaGraph Operator |
 | ------------- | -------------------- |
-| 3.5.x ~ 3.6.0 | 1.5.0 ~ 1.7.x |
+| 3.5.x ~ 3.6.0 | 1.5.0 ~ 1.7.x  |
 | 3.0.0 ~ 3.4.1 | 1.3.0, 1.4.0 ~ 1.4.2 |
 | 3.0.0 ~ 3.3.x | 1.0.0, 1.1.0, 1.2.0 |
 | 2.5.x ~ 2.6.x | 0.9.0 |
@@ -43,10 +43,6 @@ NebulaGraph Operator does not support the v1.x version of NebulaGraph. NebulaGra
 
 - The 1.x version NebulaGraph Operator is not compatible with NebulaGraph of versions below v3.x.
 - Starting from NebulaGraph Operator 0.9.0, logs and data are stored separately. Using NebulaGraph Operator 0.9.0 or later versions to manage a NebulaGraph 2.5.x cluster created with Operator 0.8.0 can cause compatibility issues. You can back up the data of the NebulaGraph 2.5.x cluster and then create a 2.6.x cluster with Operator 0.9.0.
 
-### Feature limitations
-
-The NebulaGraph Operator scaling feature is only available for the Enterprise Edition of NebulaGraph clusters and does not support scaling the Community Edition version of NebulaGraph clusters.
-
 ## Release note
 
 [Release](https://github.com/vesoft-inc/nebula-operator/releases/tag/{{operator.tag}})
diff --git a/docs-2.0-en/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0-en/nebula-operator/2.deploy-nebula-operator.md
index 0017f4e072f..36ea5062475 100644
--- a/docs-2.0-en/nebula-operator/2.deploy-nebula-operator.md
+++ b/docs-2.0-en/nebula-operator/2.deploy-nebula-operator.md
@@ -18,7 +18,7 @@ Before installing NebulaGraph Operator, you need to install the following softwa
 
 !!! note
 
-    - If using a role-based access control policy, you need to enable [RBAC](https://kubernetes.io/docs/admin/authorization/rbac) (optional).
+    - If using a role-based access control policy, you need to enable [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (optional).
 
 - [CoreDNS](https://coredns.io/) is a flexible and scalable DNS server that is [installed](https://github.com/coredns/helm) for Pods in NebulaGraph clusters.
diff --git a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md
index 28f3f501689..8ecfda885e7 100644
--- a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md
+++ b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md
@@ -21,41 +21,41 @@ The following example shows how to create a NebulaGraph cluster by creating a cl
    ```
 
 2. Create a file named `apps_v1alpha1_nebulacluster.yaml`.
-
-    - To create a NebulaGraph Community cluster
-
-      See [community cluster configurations](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml).
-
-      ??? Info "Expand to show parameter descriptions of community clusters"
-
-        | Parameter | Default value | Description |
-        | :---- | :--- | :--- |
-        | `metadata.name` | - | The name of the created NebulaGraph cluster. |
-        | `spec.console` | - | Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). |
-        | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. |
-        | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. |
-        | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. |
-        | `spec.graphd.service` | - | The Service configurations for the Graphd service. |
-        | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. |
-        | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. |
-        | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. |
-        | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. |
-        | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. |
-        | `spec.metad.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Metad service. |
-        | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. |
-        | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. |
-        | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. |
-        | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc. |
-        | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. |
-        | `spec.storaged.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Storaged service. |
-        | `spec.storaged.enableAutoBalance` | `true` | Whether to balance data automatically. |
-        | `spec.<graphd/metad/storaged>.securityContext` | `{}` | Defines privilege and access control settings for NebulaGraph service containers. For details, see [SecurityContext](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/doc/user/security_context.md). |
-        | `spec.agent` | `{}` | Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used. |
-        | `spec.reference.name` | - | The name of the dependent controller. |
-        | `spec.schedulerName` | - | The scheduler name. |
-        | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. |
-        | `spec.logRotate` | - | Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md). |
-        | `spec.enablePVReclaim` | `false` | Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md). |
+
+    See [community cluster configurations](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/nebulacluster.yaml).
+
+    A minimal manifest sketch is shown below, and the table after it describes the parameters in the sample configuration file.
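+
+    The following sketch is illustrative rather than the full sample file: the cluster name `nebula`, the storage class `gp2`, and the storage sizes are placeholder assumptions, so replace them with values that exist in your environment.
+
+    ```yaml
+    apiVersion: apps.nebula-graph.io/v1alpha1
+    kind: NebulaCluster
+    metadata:
+      # Placeholder cluster name; pick your own.
+      name: nebula
+    spec:
+      graphd:
+        replicas: 1
+        image: vesoft/nebula-graphd
+        version: v3.6.0
+        logVolumeClaim:
+          resources:
+            requests:
+              storage: 1Gi
+          storageClassName: gp2   # assumed storage class
+      metad:
+        replicas: 1
+        image: vesoft/nebula-metad
+        version: v3.6.0
+        dataVolumeClaim:
+          resources:
+            requests:
+              storage: 2Gi
+          storageClassName: gp2
+      storaged:
+        replicas: 3
+        image: vesoft/nebula-storaged
+        version: v3.6.0
+        # A list: add more entries to mount multiple data disks
+        # (/usr/local/nebula/data1, /usr/local/nebula/data2, ...).
+        dataVolumeClaims:
+        - resources:
+            requests:
+              storage: 10Gi
+          storageClassName: gp2
+      reference:
+        name: statefulsets.apps
+      schedulerName: default-scheduler
+      imagePullPolicy: Always
+    ```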
+
+    | Parameter | Default value | Description |
+    | :--- | :--- | :--- |
+    | `metadata.name` | - | The name of the created NebulaGraph cluster. |
+    | `spec.console` | - | Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). |
+    | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. |
+    | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. |
+    | `spec.graphd.version` | `v3.6.0` | The version of the Graphd service. |
+    | `spec.graphd.service` | - | The Service configurations for the Graphd service. |
+    | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. |
+    | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. |
+    | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. |
+    | `spec.metad.version` | `v3.6.0` | The version of the Metad service. |
+    | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. |
+    | `spec.metad.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Metad service. |
+    | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. |
+    | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. |
+    | `spec.storaged.version` | `v3.6.0` | The version of the Storaged service. |
+    | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage paths are `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, and so on. |
+    | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. |
+    | `spec.storaged.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Storaged service. |
+    | `spec.storaged.enableAutoBalance` | `true` | Whether to balance data automatically. |
+    | `spec.<graphd/metad/storaged>.securityContext` | `{}` | Defines privilege and access control settings for NebulaGraph service containers. For details, see [SecurityContext](https://github.com/vesoft-inc/nebula-operator/blob/release-1.5/doc/user/security_context.md). |
+    | `spec.agent` | `{}` | Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration is used. |
+    | `spec.reference.name` | - | The name of the dependent controller. |
+    | `spec.schedulerName` | - | The scheduler name. |
+    | `spec.imagePullPolicy` | - | The image pull policy in Kubernetes for the NebulaGraph images. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). |
+    | `spec.logRotate` | - | Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md). |
+    | `spec.enablePVReclaim` | `false` | Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md). |
+
 
 3. Create a NebulaGraph cluster.
 
@@ -84,7 +84,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl
 
 ## Scaling clusters
 
-- The cluster scaling feature is for NebulaGraph Enterprise Edition only.
+The cluster scaling feature is for NebulaGraph Enterprise Edition only.
 
 ## Delete clusters
 
diff --git a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md
index 5021f367068..79a7a95aa53 100644
--- a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md
+++ b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md
@@ -65,23 +65,9 @@ kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}"
     ```
 
-    Output:
-
-    ```bash
-    NAME                READY   STATUS    RESTARTS   AGE
-    nebula-graphd-0     1/1     Running   0          5m34s
-    nebula-graphd-1     1/1     Running   0          5m34s
-    nebula-metad-0      1/1     Running   0          5m34s
-    nebula-metad-1      1/1     Running   0          5m34s
-    nebula-metad-2      1/1     Running   0          5m34s
-    nebula-storaged-0   1/1     Running   0          5m34s
-    nebula-storaged-1   1/1     Running   0          5m34s
-    nebula-storaged-2   1/1     Running   0          5m34s
-    ```
-
 ## Scaling clusters
 
-- The cluster scaling feature is for NebulaGraph Enterprise Edition only.
+The cluster scaling feature is for NebulaGraph Enterprise Edition only.
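+
+As a reference for Enterprise Edition users, the sketch below shows one way a scale-out is typically issued with Helm. The value path `nebula.storaged.replicas` is an assumption based on the nebula-cluster chart's values layout; confirm the exact key against the values file of your chart version.
+
+```bash
+# Scale storaged from 3 to 5 replicas by upgrading the Helm release in place;
+# the Operator then reconciles the cluster toward the new desired state.
+helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \
+    --namespace="${NEBULA_CLUSTER_NAMESPACE}" \
+    --set nebula.storaged.replicas=5   # assumed value key; verify in values.yaml
+```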
## Delete clusters diff --git a/docs-2.0-zh/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0-zh/nebula-operator/1.introduction-to-nebula-operator.md index bf958948ae8..c186c7f0be3 100644 --- a/docs-2.0-zh/nebula-operator/1.introduction-to-nebula-operator.md +++ b/docs-2.0-zh/nebula-operator/1.introduction-to-nebula-operator.md @@ -1,7 +1,5 @@ # 什么是 NebulaGraph Operator - - ## 基本概念 NebulaGraph Operator 是用于在 [Kubernetes](https://kubernetes.io) 系统上自动化部署和运维 [NebulaGraph](https://github.com/vesoft-inc/nebula) 集群的工具。依托于 Kubernetes 扩展机制,{{nebula.name}}将其运维领域的知识全面注入至 Kubernetes 系统中,让{{nebula.name}}成为真正的云原生图数据库。 @@ -10,7 +8,7 @@ NebulaGraph Operator 是用于在 [Kubernetes](https://kubernetes.io) 系统上 ## 工作原理 -对于 Kubernetes 系统内不存在的资源类型,用户可以通过添加自定义 API 对象的方式注册,常见的方法是使用 [CustomResourceDefinition(CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) 。 +对于 Kubernetes 系统内不存在的资源类型,用户可以通过添加自定义 API 对象的方式注册,常见的方法是使用 [CustomResourceDefinition(CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)。 NebulaGraph Operator 将{{nebula.name}}集群的部署管理抽象为 CRD。通过结合多个内置的 API 对象,包括 StatefulSet、Service 和 ConfigMap,{{nebula.name}}集群的日常管理和维护被编码为一个控制循环。在 Kubernetes 系统内,每一种内置资源对象,都运行着一个特定的控制循环,将它的实际状态通过事先规定好的编排动作,逐步调整为最终的期望状态。当一个 CR 实例被提交时,NebulaGraph Operator 会根据控制流程驱动数据库集群进入最终状态。 @@ -21,7 +19,6 @@ NebulaGraph Operator 已具备的功能如下: - **集群创建和卸载**:NebulaGraph Operator 简化了用户部署和卸载集群的过程。用户只需提供对应的 CR 文件,NebulaGraph Operator 即可快速创建或者删除一个对应的{{nebula.name}}集群。更多信息参考[使用 Kubectl 部署{{nebula.name}}集群](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md)或者[使用 Helm 部署{{nebula.name}}集群](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md)。 - - **集群升级**:支持升级 {{operator.upgrade_from}} 版的{{nebula.name}}集群至 {{operator.upgrade_to}} 版。 - **故障自愈**:NebulaGraph Operator 调用{{nebula.name}}集群提供的接口,动态地感知服务状态。一旦发现异常,NebulaGraph Operator 自动进行容错处理。更多信息参考[故障自愈](5.operator-failover.md)。 diff --git a/docs-2.0-zh/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0-zh/nebula-operator/2.deploy-nebula-operator.md index 0cdf4ee5cc4..6b93c3f5015 100644 --- a/docs-2.0-zh/nebula-operator/2.deploy-nebula-operator.md +++ b/docs-2.0-zh/nebula-operator/2.deploy-nebula-operator.md @@ -18,7 +18,7 @@ !!! note - - 如果使用基于角色的访问控制的策略,用户需开启 [RBAC](https://kubernetes.io/docs/admin/authorization/rbac)(可选)。 + - 如果使用基于角色的访问控制的策略,用户需开启 [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)(可选)。 - [CoreDNS](https://coredns.io/) 是一个灵活的、可扩展的 DNS 服务器,被[安装](https://github.com/coredns/helm)在集群内作为集群内 Pods 的 DNS 服务器。{{nebula.name}}集群中的每个组件通过 DNS 解析类似`x.default.svc.cluster.local`这样的域名相互通信。 ## 操作步骤 @@ -163,7 +163,7 @@ helm install nebula-operator nebula-operator/nebula-operator --namespace=