custom-conf-parameters&pv-claim&balance data #1176

Merged: 16 commits, Nov 12, 2021
7 changes: 5 additions & 2 deletions docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md
@@ -18,9 +18,11 @@ Nebula Operator currently provides the following features:

- **Cluster scaling**: Nebula Operator wraps the scale-out and scale-in interfaces natively provided by Nebula Graph and calls them in a control loop, so a cluster can be scaled with a simple YAML change while data stability is guaranteed. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md#_3) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md#_2).

- **Cluster upgrades**: Supports upgrading Nebula Graph clusters from version 2.5.x to 2.6.x.

- **Self-healing**: Nebula Operator calls the interfaces provided by the Nebula Graph cluster to dynamically sense the service status. Once an exception is detected, Nebula Operator performs fault tolerance automatically. For more information, see [Self-healing](5.operator-failover.md).

- **Balanced scheduling**: Based on the scheduler extension interface, the scheduler provided by Nebula Operator evenly distributes the Pods of an application across the Nebula Graph cluster.

## Usage limitations

@@ -30,7 +32,8 @@ Nebula Operator does not support Nebula Graph v1.x. Its version correspondence with Nebula Graph is as follows:

| Nebula Operator version | Nebula Graph version |
| ----------------------- | -------------------- |
| {{operator.release}}    | {{nebula.release}}   |
| {{operator.release}}    | 2.5.x ~ 2.6.x        |
| 0.8.0                   | 2.5.x                |

### Feature limitations

19 changes: 12 additions & 7 deletions docs-2.0/nebula-operator/2.deploy-nebula-operator.md
@@ -85,17 +85,17 @@
Example:

```yaml
[abby@master ~]$ helm show values nebula-operator/nebula-operator
image:
  nebulaOperator:
    image: vesoft/nebula-operator:v0.8.0
    imagePullPolicy: IfNotPresent
    image: vesoft/nebula-operator:latest
    imagePullPolicy: Always
  kubeRBACProxy:
    image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
    imagePullPolicy: IfNotPresent
    imagePullPolicy: Always
  kubeScheduler:
    image: k8s.gcr.io/kube-scheduler:v1.18.8
    imagePullPolicy: IfNotPresent
    imagePullPolicy: Always

imagePullSecrets: []
kubernetesClusterDomain: ""
@@ -129,11 +129,11 @@ scheduler:
      memory: 20Mi
```

`values.yaml`中参数描述如下
部分参数描述如下

| Parameter | Default value | Description |
| :------------------------------------- | :------------------------------ | :----------------------------------------- |
| `image.nebulaOperator.image` | `vesoft/nebula-operator:v0.8.0` | The image of Nebula Operator, version v0.8.0. |
| `image.nebulaOperator.image` | `vesoft/nebula-operator:latest` | The image of Nebula Operator, version {{operator.branch}}. |
| `image.nebulaOperator.imagePullPolicy` | `IfNotPresent` | The image pull policy. |
| `imagePullSecrets` | - | The image pull secrets. |
| `kubernetesClusterDomain` | `cluster.local` | The cluster domain. |
@@ -173,6 +173,11 @@ helm install nebula-operator nebula-operator/nebula-operator --namespace=<nebula
```

`<nebula-operator-system>` is the namespace created by the user; the Pods related to nebula-operator run in this namespace.
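If the namespace does not exist yet, it can be created before installing the chart, and any default from `values.yaml` can be overridden with `--set`. A minimal sketch, assuming the repo alias used above; the namespace name and the overridden value here are illustrative only:

```bash
# Create the target namespace (the name is the user's choice).
kubectl create namespace nebula-operator-system

# Install the chart, overriding one default from values.yaml.
helm install nebula-operator nebula-operator/nebula-operator \
  --namespace=nebula-operator-system \
  --set image.nebulaOperator.imagePullPolicy=IfNotPresent
```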

<!-- To be supplemented after the code freeze:
## Upgrade Nebula Operator

-->

### Uninstall Nebula Operator

@@ -28,11 +28,11 @@
memory: "1Gi"
replicas: 1
image: vesoft/nebula-graphd
version: v2.5.1
version: {{nebula.branch}}
service:
type: NodePort
externalTrafficPolicy: Local
storageClaim:
logVolumeClaim:
resources:
requests:
storage: 2Gi
@@ -47,8 +47,13 @@
memory: "1Gi"
replicas: 1
image: vesoft/nebula-metad
version: v2.5.1
storageClaim:
version: {{nebula.branch}}
dataVolumeClaim:
resources:
requests:
storage: 2Gi
storageClassName: gp2
logVolumeClaim:
resources:
requests:
storage: 2Gi
@@ -63,8 +68,13 @@
memory: "1Gi"
replicas: 3
image: vesoft/nebula-storaged
version: v2.5.1
storageClaim:
version: {{nebula.branch}}
dataVolumeClaim:
resources:
requests:
storage: 2Gi
storageClassName: gp2
logVolumeClaim:
resources:
requests:
storage: 2Gi
@@ -73,7 +83,7 @@
    name: statefulsets.apps
    version: v1
  schedulerName: default-scheduler
  imagePullPolicy: IfNotPresent
  imagePullPolicy: Always
```

The parameters are described as follows:
@@ -83,23 +93,25 @@
| `metadata.name` | - | The name of the created Nebula Graph cluster. |
| `spec.graphd.replicas` | `1` | The number of replicas of the Graphd service. |
| `spec.graphd.images` | `vesoft/nebula-graphd` | The container image of the Graphd service. |
| `spec.graphd.version` | `v2.5.1` | The version of the Graphd service. |
| `spec.graphd.version` | `{{nebula.branch}}` | The version of the Graphd service. |
| `spec.graphd.service` | - | The Service configuration of the Graphd service. |
| `spec.graphd.storageClaim` | - | The storage configuration of the Graphd service. |
| `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configuration of the Graphd service. |
| `spec.metad.replicas` | `1` | The number of replicas of the Metad service. |
| `spec.metad.images` | `vesoft/nebula-metad` | The container image of the Metad service. |
| `spec.metad.version` | `v2.5.1` | The version of the Metad service. |
| `spec.metad.storageClaim` | - | The storage configuration of the Metad service. |
| `spec.metad.version` | `{{nebula.branch}}` | The version of the Metad service. |
| `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configuration of the Metad service. |
| `spec.metad.logVolumeClaim.storageClassName` | - | The log disk storage configuration of the Metad service. |
| `spec.storaged.replicas` | `3` | The number of replicas of the Storaged service. |
| `spec.storaged.images` | `vesoft/nebula-storaged` | The container image of the Storaged service. |
| `spec.storaged.version` | `v2.5.1` | The version of the Storaged service. |
| `spec.storaged.storageClaim` | - | The storage configuration of the Storaged service. |
| `spec.storaged.version` | `{{nebula.branch}}` | The version of the Storaged service. |
| `spec.storaged.dataVolumeClaim.storageClassName` | - | The data disk storage configuration of the Storaged service. |
| `spec.storaged.logVolumeClaim.storageClassName` | - | The log disk storage configuration of the Storaged service. |
| `spec.reference.name` | - | The name of the dependent controller. |
| `spec.schedulerName` | - | The scheduler name. |
| `spec.imagePullPolicy` | - | The pull policy for Nebula Graph images. For details about pull policies, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). |


2. Create a Nebula Graph cluster.
1. Create a Nebula Graph cluster.

```bash
kubectl create -f apps_v1alpha1_nebulacluster.yaml
@@ -120,8 +132,8 @@
Output:

```bash
NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE
nebula-cluster 1 1 1 1 3 3 31h
NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE
nebula 1 1 1 1 3 3 86s
```
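To drill down from these counts to the individual Pods, a plain kubectl query works. A sketch, assuming the cluster runs in the `default` namespace and that Pod names are prefixed with the cluster name `nebula`, as in the output above:

```bash
# List the Pods of the cluster; Pod names start with the cluster name.
kubectl get pods -n default | grep ^nebula
```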

## Scale clusters
@@ -138,19 +150,28 @@
  storaged:
    resources:
      requests:
        cpu: "1"
        memory: "1Gi"
        cpu: "500m"
        memory: "500Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
    replicas: 5
    image: vesoft/nebula-storaged
    version: v2.5.1
    storageClaim:
    version: {{nebula.branch}}
    dataVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: gp2
    logVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: fast-disks
      storageClassName: gp2
  reference:
    name: statefulsets.apps
    version: v1
  schedulerName: default-scheduler
```

2. Run the following command to sync the above updates to the CR of the Nebula Graph cluster.
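The command block itself is collapsed in this diff view. Based on the manifest file used in the earlier creation step, it is presumably an apply of the same file; a sketch under that assumption:

```bash
# Re-apply the edited manifest so the changes reach the cluster CR (file name assumed from step 1).
kubectl apply -f apps_v1alpha1_nebulacluster.yaml
```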
@@ -99,7 +99,7 @@ helm uninstall "${NEBULA_CLUSTER_NAME}" --namespace="${NEBULA_CLUSTER_NAMESPACE}
| Parameter | Default value | Description |
| :-------------------------- | :----------------------------------------------------------- | :----------------------------------------------------------- |
| `nameOverride` | `nil` | Overrides the name of the cluster chart. |
| `nebula.version` | `v2.5.1` | The version of Nebula Graph. |
| `nebula.version` | `{{nebula.branch}}` | The version of Nebula Graph. |
| `nebula.imagePullPolicy` | `IfNotPresent` | The pull policy for Nebula Graph images. For details about pull policies, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). |
| `nebula.storageClassName` | `nil` | The type of the persistent volume. The default StorageClass name is used by default. |
| `nebula.schedulerName` | `default-scheduler` | The scheduler of the Nebula Graph cluster. |
2 changes: 1 addition & 1 deletion docs-2.0/nebula-operator/7.operator-faq.md
@@ -6,7 +6,7 @@

## Does Nebula Operator support rolling upgrades of Nebula Graph?

暂不支持
只支持升级2.5.x版本的Nebula Graph至2.6.x

## Does using local storage ensure cluster stability?

@@ -0,0 +1,66 @@
# Customize configuration parameters for a Nebula Graph cluster

The Meta, Storage, and Graph services in a Nebula Graph cluster each have their own configuration, which is defined as `config` in the YAML file of the CR instance (the Nebula Graph cluster) created by the user. The settings in `config` are mapped and loaded into the ConfigMap of the corresponding service.

!!! note

    Customizing the configuration parameters of a Nebula Graph cluster through Helm is not supported yet.

The structure of `config` is as follows:

```go
Config map[string]string `json:"config,omitempty"`
```

## Prerequisites

You have created a cluster with K8s. For details, see [Create a Nebula Graph cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md).


## Steps

The following example uses a cluster named `nebula` to show how to set `config` for the Graph service of a cluster in the YAML file:

1. Run the following command to edit the `nebula` cluster.

```bash
kubectl edit nebulaclusters.apps.nebula-graph.io nebula
```

2. Add `enable_authorize` and `auth_type` under the `spec.graphd.config` field in the YAML file.

```yaml
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula
  namespace: default
spec:
  graphd:
    resources:
      requests:
        cpu: "500m"
        memory: "500Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
    replicas: 1
    image: vesoft/nebula-graphd
    version: {{nebula.branch}}
    storageClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: gp2
    config:  # Customize parameters for the Graph service.
      "enable_authorize": "true"
      "auth_type": "password"
  ...
```

After `enable_authorize` and `auth_type` are customized, the configuration in the ConfigMap (`nebula-graphd`) corresponding to the Graph service will be overwritten.
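The rendered result can be inspected directly. A sketch using the ConfigMap name stated above and the `default` namespace from the example manifest:

```bash
# Show the Graph service's ConfigMap after the override has been applied.
kubectl get configmap nebula-graphd -n default -o yaml
```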

## More information

For details about the configuration parameters of the Meta, Storage, and Graph services, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md).

@@ -0,0 +1,98 @@
# Reclaim PVs

Nebula Operator uses Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) to store persistent data. If you accidentally delete a Nebula Graph cluster, the PV and PVC objects and their data can still be retained to ensure data safety.

You can define whether to reclaim PVs with the parameter `enablePVReclaim` in the configuration file of the cluster's CR instance.

If you need to delete a graph space and want to retain the related data, update the Nebula Graph cluster, that is, set `enablePVReclaim` to `true`.
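Besides editing the CR interactively as shown in the steps below, the flag can also be flipped with a one-line patch. A sketch, with the cluster name `nebula` taken from the example that follows:

```bash
# Merge-patch the cluster CR to enable PV reclaiming.
kubectl patch nebulaclusters.apps.nebula-graph.io nebula \
  --type merge --patch '{"spec":{"enablePVReclaim":true}}'
```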

## Prerequisites

You have created a cluster with K8s. For details, see [Create a Nebula Graph cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md).

## Steps

The following example uses a cluster named `nebula` to show how to set `enablePVReclaim`:

1. Run the following command to edit the `nebula` cluster.

```bash
kubectl edit nebulaclusters.apps.nebula-graph.io nebula
```

2. Add `enablePVReclaim` under the `spec` field in the YAML file and set its value to `true`.

```yaml
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula
spec:
  enablePVReclaim: true  # Set the value to true.
  graphd:
    image: vesoft/nebula-graphd
    logVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: fast-disks
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    version: {{nebula.branch}}
  imagePullPolicy: IfNotPresent
  metad:
    dataVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: fast-disks
    image: vesoft/nebula-metad
    logVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: fast-disks
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    version: {{nebula.branch}}
  nodeSelector:
    nebula: cloud
  reference:
    name: statefulsets.apps
    version: v1
  schedulerName: default-scheduler
  storaged:
    dataVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: fast-disks
    image: vesoft/nebula-storaged
    logVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: fast-disks
    replicas: 3
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 500Mi
    version: {{nebula.branch}}
  ...
```
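Whether reclaiming works as intended can be verified after deleting a cluster. A sketch with standard kubectl queries, assuming the cluster lives in the `default` namespace; with `enablePVReclaim` set to `true`, the cluster's PVCs and the PVs bound to them are expected to be removed:

```bash
# After the cluster is deleted, check which claims and volumes remain.
kubectl get pvc -n default
kubectl get pv
```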