Nebula Operator manages NebulaGraph clusters on Kubernetes and automates tasks related to operating a NebulaGraph cluster. It evolved from NebulaGraph Cloud Service and makes NebulaGraph a truly cloud-native database.
See install/uninstall nebula operator for installation instructions.
Create a nebula cluster:

```shell
$ kubectl create -f config/samples/apps_v1alpha1_nebulacluster.yaml
```
A non-HA-mode nebula cluster will be created.
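For reference, the sample manifest defines a NebulaCluster custom resource along these lines. This is an illustrative sketch, not the full sample: the graphd and metad values shown here are assumptions, and only the storaged fields mirror the spec discussed later in this document — consult config/samples/apps_v1alpha1_nebulacluster.yaml for the authoritative version.

```yaml
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula
spec:
  graphd:
    replicas: 1            # assumed value for illustration
    image: vesoft/nebula-graphd
    version: v2.5.0
  metad:
    replicas: 1            # assumed value for illustration
    image: vesoft/nebula-metad
    version: v2.5.0
  storaged:
    replicas: 3            # matches the initial replica count discussed below
    image: vesoft/nebula-storaged
    version: v2.5.0
```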
```shell
$ kubectl get pods -l app.kubernetes.io/instance=nebula
NAME                READY   STATUS    RESTARTS   AGE
nebula-graphd-0     1/1     Running   0          1m
nebula-metad-0      1/1     Running   0          1m
nebula-storaged-0   1/1     Running   0          1m
nebula-storaged-1   1/1     Running   0          1m
nebula-storaged-2   1/1     Running   0          1m
```
See client service for how to access nebula clusters created by the operator.
If you are working with kubeadm locally, create a NodePort service and verify that nebula is responding:
```shell
$ kubectl create -f config/samples/graphd-nodeport-service.yaml
```
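The NodePort manifest referenced above looks roughly like the following sketch. The service name, selector labels, and the `nodePort` value 32236 (chosen to match the port used in the console command) are assumptions; 9669 is graphd's default client port. See config/samples/graphd-nodeport-service.yaml for the actual manifest.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nebula-graphd-svc-nodeport   # assumed name
spec:
  type: NodePort
  selector:                          # assumed labels
    app.kubernetes.io/instance: nebula
    app.kubernetes.io/component: graphd
  ports:
    - name: thrift
      protocol: TCP
      port: 9669
      targetPort: 9669
      nodePort: 32236
```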
```shell
/ # nebula-console -u user -p password --address=192.168.8.26 --port=32236
2021/04/15 16:50:23 [INFO] connection pool is initialized successfully

Welcome to Nebula Graph!

(user@nebula) [(none)]>
```
Destroy the nebula cluster:

```shell
$ kubectl delete -f config/samples/apps_v1alpha1_nebulacluster.yaml
```
Create a nebula cluster:

```shell
$ kubectl create -f config/samples/apps_v1alpha1_nebulacluster.yaml
```
In config/samples/apps_v1alpha1_nebulacluster.yaml, the initial storaged replica count is 3. Modify the file and change replicas from 3 to 5:
```yaml
storaged:
  resources:
    requests:
      cpu: "1"
      memory: "1Gi"
    limits:
      cpu: "1"
      memory: "1Gi"
  replicas: 5
  image: vesoft/nebula-storaged
  version: v2.5.0
  storageClaim:
    resources:
      requests:
        storage: 2Gi
    storageClassName: fast-disks
```
Apply the replicas change to the cluster CR:

```shell
$ kubectl apply -f config/samples/apps_v1alpha1_nebulacluster.yaml
```
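If you prefer not to edit the sample file, the same scale operation can be expressed as a merge patch against the CR. Below is a minimal sketch that builds the patch body with Python's standard library; the cluster name `nebula` in the comment is an assumption about how the CR is named in your cluster.

```python
import json

def storaged_scale_patch(replicas: int) -> str:
    """Build a merge-patch body that only touches spec.storaged.replicas."""
    return json.dumps({"spec": {"storaged": {"replicas": replicas}}})

# Pass the body to kubectl, e.g. (cluster name `nebula` is an assumption):
#   kubectl patch nebulacluster nebula --type merge --patch '<body>'
print(storaged_scale_patch(5))
```

A merge patch keeps the change minimal: only the replica count is sent, so other fields of the spec are left untouched.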
The storaged cluster will scale to 5 members (5 pods):

```shell
$ kubectl get pods -l app.kubernetes.io/instance=nebula
NAME                READY   STATUS    RESTARTS   AGE
nebula-graphd-0     1/1     Running   0          5m
nebula-metad-0      1/1     Running   0          5m
nebula-storaged-0   1/1     Running   0          5m
nebula-storaged-1   1/1     Running   0          5m
nebula-storaged-2   1/1     Running   0          5m
nebula-storaged-3   1/1     Running   0          2m
nebula-storaged-4   1/1     Running   0          2m
```
Similarly, we can decrease the size of the cluster from 5 back to 3 by changing the replicas field again and reapplying the change:
```yaml
storaged:
  resources:
    requests:
      cpu: "1"
      memory: "1Gi"
    limits:
      cpu: "1"
      memory: "1Gi"
  replicas: 3
  image: vesoft/nebula-storaged
  version: v2.5.0
  storageClaim:
    resources:
      requests:
        storage: 2Gi
    storageClassName: fast-disks
```
We should see the storaged cluster eventually shrink back to 3 pods:
```shell
$ kubectl get pods -l app.kubernetes.io/instance=nebula
NAME                READY   STATUS    RESTARTS   AGE
nebula-graphd-0     1/1     Running   0          10m
nebula-metad-0      1/1     Running   0          10m
nebula-storaged-0   1/1     Running   0          10m
nebula-storaged-1   1/1     Running   0          10m
nebula-storaged-2   1/1     Running   0          10m
```
In addition, you can Install Nebula Cluster with helm.
If a minority of nebula components crash, the nebula operator will automatically recover from the failure. Let's walk through this in the following steps.
Create a nebula cluster:
```shell
$ kubectl create -f config/samples/apps_v1alpha1_nebulacluster.yaml
```
Wait until pods are up. Simulate a member failure by deleting a storaged pod:

```shell
$ kubectl delete pod nebula-storaged-2 --now
```
The nebula operator will recover from the failure by creating a new pod nebula-storaged-2:
```shell
$ kubectl get pods -l app.kubernetes.io/instance=nebula
NAME                READY   STATUS    RESTARTS   AGE
nebula-graphd-0     1/1     Running   0          15m
nebula-metad-0      1/1     Running   0          15m
nebula-storaged-0   1/1     Running   0          15m
nebula-storaged-1   1/1     Running   0          15m
nebula-storaged-2   1/1     Running   0          19s
```
For frequently asked questions, please refer to FAQ.md.
Feel free to reach out if you have any questions. The maintainers of this project are reachable via:
- Filing an issue against this repo
Contributions are welcome and greatly appreciated.
- Start by picking up some issues
- Submit Pull Requests to us. Please refer to how-to-contribute.
nebula-operator takes inspiration from tidb-operator; they have made a very good product. We have a similar architecture, and although our product pattern and application scenarios differ, we would like to express our gratitude here.
NebulaGraph is under the Apache 2.0 license. See the LICENSE file for details.