add operator faqs (#2134)
* add operator faqs

* Update 7.operator-faq.md
abby-cyber authored Jun 26, 2023
1 parent 35539a6 commit 45b1dee
Showing 3 changed files with 54 additions and 5 deletions.
@@ -153,7 +153,7 @@ The following shows how to scale out a NebulaGraph cluster by changing the numbe
3. Check the number of Storage services.

```bash
kubectl get pods -l app.kubernetes.io/cluster=nebula
kubectl get pods -l app.kubernetes.io/cluster=nebula
```

Output:
49 changes: 49 additions & 0 deletions docs-2.0/nebula-operator/7.operator-faq.md
@@ -15,3 +15,52 @@ It is suggested to back up data in advance so that you can roll back data in cas
## Is the replica in the Operator docs the same as the replica in the NebulaGraph core docs?

They are different concepts. A replica in the Operator docs indicates a pod replica in K8s, while a replica in the core docs is a replica of a NebulaGraph storage partition.
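
For illustration, the two settings live in different places: the pod replica count is declared in the NebulaCluster manifest, while the partition replica count is set when a graph space is created. A minimal sketch; the field layout follows the NebulaCluster CRD as commonly documented, and the values and the nGQL statement in the comments are examples only:

```yaml
# Kubernetes level: the number of Storage service pods managed by Operator.
spec:
  storaged:
    replicas: 3
# NebulaGraph level: the number of replicas of each storage partition is set
# per graph space instead, for example with nGQL:
# CREATE SPACE basketballplayer (partition_num = 15, replica_factor = 3, vid_type = FIXED_STRING(30));
```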


## How to view the logs of each service in the NebulaGraph cluster?

The logs of the NebulaGraph cluster services are not collected by the K8s cluster, which means they cannot be retrieved with the `kubectl logs` command. To obtain the logs of a service, you need to access its container and view the log files stored inside it. This is currently the only way to view the logs of each service in the NebulaGraph cluster individually.

Steps to view the logs of each service in the NebulaGraph cluster:

```bash
# List the pods in the cluster to find the one whose container you want to access.
# Replace <cluster-name> with the name of the cluster.
kubectl get pods -l app.kubernetes.io/cluster=<cluster-name>

# Access the container in the target pod, for example, the nebula-graphd-0 pod.
kubectl exec -it nebula-graphd-0 -- /bin/bash

# Go to the /usr/local/nebula/logs directory to view the log files.
cd /usr/local/nebula/logs
```
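
If you only need a specific log file, a non-interactive variant of the same approach also works. A minimal sketch, assuming the Graph service pod is named `nebula-graphd-0` and its logs follow the default glog naming (`graphd.INFO`); adjust the names to your deployment:

```bash
# Print the last 100 lines of the Graph service log without opening a shell.
kubectl exec nebula-graphd-0 -- tail -n 100 /usr/local/nebula/logs/graphd.INFO
```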

## How to resolve the `host not found:nebula-<metad|storaged|graphd>-0.nebula.<metad|storaged|graphd>-headless.default.svc.cluster.local` error?

This error is generally caused by a DNS resolution failure. Check whether the cluster domain has been modified; if it has, modify the `kubernetesClusterDomain` field in the NebulaGraph Operator configuration file accordingly. The steps for modifying the Operator configuration file are as follows:

1. View the Operator configuration file.

```yaml
[abby@master ~]$ helm show values nebula-operator/nebula-operator
image:
  nebulaOperator:
    image: vesoft/nebula-operator:{{operator.tag}}
    imagePullPolicy: Always
  kubeRBACProxy:
    image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.0
    imagePullPolicy: Always
  kubeScheduler:
    image: registry.k8s.io/kube-scheduler:v1.24.11
    imagePullPolicy: Always

imagePullSecrets: []
kubernetesClusterDomain: ""  # The cluster domain name. The default is cluster.local.
```
2. Modify the value of the `kubernetesClusterDomain` field to the updated cluster domain name.

```bash
helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=<nebula-operator-system> --version={{operator.release}} --set kubernetesClusterDomain=<cluster-domain>
```
`<nebula-operator-system>` is the namespace where Operator is located, and `<cluster-domain>` is the updated domain name.
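
After the upgrade, you can check whether the headless service now resolves by running a one-off DNS lookup inside the cluster. A minimal sketch; the service FQDN and namespace below are examples and should be replaced with the ones from your error message:

```bash
# Start a temporary busybox pod and resolve the Storage service headless FQDN.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup nebula-storaged-0.nebula-storaged-headless.default.svc.cluster.local
```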
8 changes: 4 additions & 4 deletions mkdocs.yml
@@ -240,11 +240,11 @@ extra:
branch: release-1.2
tag: v1.2.0
operator:
- release: 1.5.0
- tag: v1.5.0
- branch: release-1.5
+ release: 1.4.2
+ tag: v1.4.2
+ branch: release-1.4
  upgrade_from: 3.0.0
- upgrade_to: 3.5.0
+ upgrade_to: 3.4.1
exporter:
release: 3.3.0
branch: release-3.3
