add-scaling-faq (#2116)
abby-cyber authored Jun 1, 2023
1 parent 56325e8 commit db25f24
Showing 2 changed files with 37 additions and 7 deletions.
42 changes: 36 additions & 6 deletions docs-2.0/20.appendix/0.FAQ.md
@@ -338,32 +338,62 @@ $ ./nebula-graphd --version

Run `rpm -qa |grep nebula` to check the version of NebulaGraph.

### "How to scale my cluster up/down or out/in?"

{{ ent.ent_begin }}
### "How to scale out or scale in? (Enterprise Edition only)"
!!! enterpriseonly

- You can scale Graph and Storage services with Dashboard Enterprise Edition. For details, see [Scale](../nebula-dashboard-ent/4.cluster-operator/operator/scale.md).
- You can also use NebulaGraph Operator to scale Graph and Storage services. For details, see [Deploy NebulaGraph clusters with Kubectl](../nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) and [Deploy NebulaGraph clusters with Helm](../nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).
The cluster scaling function has not been officially released in the community edition. The operations involving `SUBMIT JOB BALANCE DATA REMOVE` and `SUBMIT JOB BALANCE DATA` are experimental features in the community edition and the functionality is not stable. Before using it in the community edition, make sure to back up your data first and set `enable_experimental_feature` and `enable_data_balance` to `true` in the [Graph configuration file](../5.configurations-and-logs/1.configurations/3.graph-config.md).
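
The following is a minimal sketch of that configuration step, assuming the default gflags-style `nebula-graphd.conf`; restart the Graph service after changing the flags:

```bash
# nebula-graphd.conf (excerpt): enable the experimental balance commands
# in the community edition. Back up your data before relying on them.
--enable_experimental_feature=true
--enable_data_balance=true
```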

#### Increase or decrease the number of Meta, Graph, or Storage nodes

NebulaGraph {{ nebula.release }} does not provide any commands or tools to support automatic scale out/in. You can refer to the following steps:

1. Scale out and scale in metad: The metad process cannot be scaled out or in, and it cannot be moved to a new machine. You cannot add a new metad process to the service.

!!! note

You can use the [Meta transfer script tool](https://github.com/vesoft-inc/nebula/blob/master/scripts/meta-transfer-tools.sh) to migrate Meta services. Note that the Meta-related settings in the configuration files of Storage and Graph services need to be modified correspondingly.

2. Scale in graphd: Remove the IP of the graphd process from the client code, and then stop this graphd process.

3. Scale out graphd: Prepare the binary and config files of the graphd process on the new host. Modify the config files and add the addresses of all existing metad processes (see the configuration sketch after this list). Then start the new graphd process.

4. Scale in storaged: See [Balance remove command](../8.service-tuning/load-balance.md). After the command is finished, stop this storaged process.

!!! caution

- Before executing this command to migrate the data in the specified Storage node, make sure that the number of other Storage nodes is sufficient to meet the set replication factor. For example, if the replication factor is set to 3, then before executing this command, make sure that the number of other Storage nodes is greater than or equal to 3.

- If there are multiple space partitions in the Storage node to be migrated, execute this command in each space to migrate all space partitions in the Storage node.

5. Scale out storaged: Prepare the binary and config files of the storaged process on the new host. Modify the config files and add the addresses of all existing metad processes. Then register the storaged process with the metad processes and start the new storaged process. For details, see [Register storaged services](../2.quick-start/3.1add-storage-hosts.md).

You also need to run [Balance Data and Balance leader](../8.service-tuning/load-balance.md) after scaling in/out storaged.
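
For steps 3 and 5, the setting that usually needs editing in the new host's config files is the Meta address list. A hedged illustration with hypothetical IP addresses:

```bash
# nebula-graphd.conf / nebula-storaged.conf (excerpt) on the new host
# List the addresses of ALL existing metad processes, separated by commas.
--meta_server_addrs=192.168.8.1:9559,192.168.8.2:9559,192.168.8.3:9559
# The address of the new host itself.
--local_ip=192.168.8.4
```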

{{ent.ent_begin}}
You can scale Graph and Storage services with Dashboard Enterprise Edition. For details, see [Scale](../nebula-dashboard-ent/4.cluster-operator/operator/scale.md).

You can also use NebulaGraph Operator to scale Graph and Storage services. For details, see [Deploy NebulaGraph clusters with Kubectl](../nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) and [Deploy NebulaGraph clusters with Helm](../nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).
{{ent.ent_end}}

#### Add or remove disks in the Storage nodes

Currently, Storage cannot dynamically recognize newly added disks. You can add or remove disks in the Storage nodes of a distributed cluster by following these steps:

1. Execute `SUBMIT JOB BALANCE DATA REMOVE <ip:port>` to migrate the data of the Storage node whose disks are to be added or removed to other Storage nodes.

!!! caution

- Before executing this command to migrate the data in the specified Storage node, make sure that the number of other Storage nodes is sufficient to meet the set replication factor. For example, if the replication factor is set to 3, then before executing this command, make sure that the number of other Storage nodes is greater than or equal to 3.

- If there are multiple space partitions in the Storage node to be migrated, execute this command in each space to migrate all space partitions in the Storage node.

2. Execute `DROP HOSTS <ip:port>` to remove the Storage node with the disk to be added or removed.

3. In the configuration file of all Storage nodes, add or remove the disk's path in `--data_path`. For details, see [Storage configuration file](../5.configurations-and-logs/1.configurations/4.storage-config.md).
4. Execute `ADD HOSTS <ip:port>` to re-add the Storage node with the disk to be added or removed.
5. As needed, execute `SUBMIT JOB BALANCE DATA` to evenly distribute the shards of the current space across all Storage nodes, and execute the `SUBMIT JOB BALANCE LEADER` command to balance the leaders in all spaces. Select a space before running these commands, as illustrated in the example below.
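
A minimal end-to-end sketch of the steps above, assuming a hypothetical Storage node at `192.168.8.100:9779` and a graph space named `basketballplayer`:

```ngql
# Step 1: migrate data off the node; run this in every space that has partitions on it.
nebula> USE basketballplayer;
nebula> SUBMIT JOB BALANCE DATA REMOVE 192.168.8.100:9779;

# Step 2: remove the node from the cluster.
nebula> DROP HOSTS 192.168.8.100:9779;

# Step 3: edit --data_path in nebula-storaged.conf on that node, then restart storaged.

# Step 4: re-add the node.
nebula> ADD HOSTS 192.168.8.100:9779;

# Step 5: rebalance data and leaders in the selected space.
nebula> USE basketballplayer;
nebula> SUBMIT JOB BALANCE DATA;
nebula> SUBMIT JOB BALANCE LEADER;
```

The balance jobs run asynchronously, so check their progress with `SHOW JOBS` before moving on to the next step.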


### "After changing the name of the host, the old one keeps displaying `OFFLINE`. What should I do?"

2 changes: 1 addition & 1 deletion docs-2.0/5.configurations-and-logs/1.configurations/4.storage-config.md

@@ -80,7 +80,7 @@ For all parameters and their current values, see [Configurations](1.configuratio

| Name | Predefined value | Description |Whether supports runtime dynamic modifications|
| :------------------------------- | :--------------- | :------------------------ |:------------------|
| `data_path` | `data/storage` | Specifies the data storage path. Multiple paths are separated with commas. For NebulaGraph of the community edition, one RocksDB instance corresponds to one path. For NebulaGraph of the enterprise edition, one RocksDB instance corresponds to one partition.| No|
| `minimum_reserved_bytes` | `268435456` | Specifies the minimum remaining space of each data storage path. If the remaining space falls below this value, writing data to the cluster may fail. This configuration is measured in bytes. | No|
| `rocksdb_batch_size` | `4096` | Specifies the block cache for a batch operation. The configuration is measured in bytes. | No|
| `rocksdb_block_cache` | `4` | Specifies the block cache for BlockBasedTable. The configuration is measured in megabytes.| No|
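
As an illustration of the comma-separated `data_path` value described in the table, a hypothetical two-disk layout in `nebula-storaged.conf` could look like this:

```bash
# nebula-storaged.conf (excerpt): one data directory per disk, separated by commas
--data_path=/disk1/nebula/data/storage,/disk2/nebula/data/storage
```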