# tiflash: Add instructions to improve replica sync performance (#11398)

Commit 6559a3f, authored by ti-chi-bot on Nov 22, 2022 (1 parent: 40a7934)

1 changed file: `tiflash/create-tiflash-replicas.md` (60 additions, 1 deletion)

To check tables without TiFlash replicas in the database, you can execute the following statement:

```sql
SELECT TABLE_NAME FROM information_schema.tables WHERE TABLE_SCHEMA = "<db_name>" AND TABLE_NAME NOT IN (SELECT TABLE_NAME FROM information_schema.tiflash_replica WHERE TABLE_SCHEMA = "<db_name>");
```

## Speed up TiFlash replication

<CustomContent platform="tidb-cloud">

> **Note:**
>
> This section is not applicable to TiDB Cloud.

</CustomContent>

When TiFlash replicas are added, each TiKV instance performs a full table scan and sends the scanned data to TiFlash as a "snapshot", from which the replicas are created. By default, TiFlash replicas are added slowly and with low resource usage to minimize the impact on the online service. If your TiKV and TiFlash nodes have spare CPU and disk IO resources, you can accelerate TiFlash replication by performing the following steps.
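
Before raising any limits, you may want to check the values currently in effect. The following is a minimal sketch using the `SHOW CONFIG` statement from the same dynamic-config mechanism; the `LIKE` pattern for TiFlash is an assumption, because the name under which the TiFlash proxy reports this configuration can vary by version.

```sql
-- Check the current snapshot write speed limit on TiKV instances.
SHOW CONFIG WHERE type = 'tikv' AND name = 'server.snap-max-write-bytes-per-sec';
-- Check the corresponding limit reported by TiFlash (assumed name pattern).
SHOW CONFIG WHERE type = 'tiflash' AND name LIKE '%snap-max-write-bytes-per-sec%';
```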

1. Temporarily increase the snapshot write speed limit for each TiKV and TiFlash instance by using the [Dynamic Config SQL statement](https://docs.pingcap.com/tidb/stable/dynamic-config):

    ```sql
    -- The default value of both configurations is 100MiB, that is, the maximum disk bandwidth used for writing snapshots is no more than 100MiB/s.
    SET CONFIG tikv `server.snap-max-write-bytes-per-sec` = '300MiB';
    SET CONFIG tiflash `raftstore-proxy.server.snap-max-write-bytes-per-sec` = '300MiB';
    ```

    After executing these SQL statements, the configuration changes take effect immediately, without restarting the cluster. However, because the replication speed is still restricted by a global PD limit, you cannot observe the acceleration yet.

2. Use [PD Control](https://docs.pingcap.com/tidb/stable/pd-control) to progressively ease the new replica speed limit.

    The default limit on adding new replicas is 30, which means that approximately 30 Regions add TiFlash replicas every minute. Executing the following command adjusts the limit to 60 for all TiFlash instances, which doubles the original speed:

    ```shell
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 60 add-peer
    ```

    > In the preceding command, you need to replace `<CLUSTER_VERSION>` with the actual cluster version and `<PD_ADDRESS>:2379` with the address of any PD node. For example:
    >
    > ```shell
    > tiup ctl:v6.1.1 pd -u http://192.168.1.4:2379 store limit all engine tiflash 60 add-peer
    > ```

    Within a few minutes, you will observe a significant increase in CPU and disk IO resource usage on the TiFlash nodes, and TiFlash creates replicas faster. At the same time, CPU and disk IO resource usage on the TiKV nodes increases as well.

    If the TiKV and TiFlash nodes still have spare resources at this point and the latency of your online service does not increase significantly, you can ease the limit further, for example, to triple the original speed:

    ```shell
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 90 add-peer
    ```
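
    To confirm the limit currently in effect, you can list the store limits with pd-ctl. This is a hedged sketch rather than part of the original procedure; the `store limit` subcommand prints the limits of all stores, and the exact output format depends on the cluster version.

    ```shell
    # Show the current add-peer and remove-peer limits of all stores.
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit
    ```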

3. After the TiFlash replication is complete (you can track the progress with the query shown after this list), revert to the default configurations to reduce the impact on online services.

    Execute the following PD Control command to restore the default limit for adding new replicas:

    ```shell
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 30 add-peer
    ```

    Execute the following SQL statements to restore the default snapshot write speed limit:

    ```sql
    SET CONFIG tikv `server.snap-max-write-bytes-per-sec` = '100MiB';
    SET CONFIG tiflash `raftstore-proxy.server.snap-max-write-bytes-per-sec` = '100MiB';
    ```
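
To know when the replication is complete, you can query the per-table progress from the `information_schema.tiflash_replica` system table. This is a small sketch rather than part of the original page: `PROGRESS` is a fraction between 0 and 1, and `AVAILABLE` becomes 1 once a table's replicas are usable.

```sql
-- Replication is complete when every listed table reports
-- PROGRESS = 1 and AVAILABLE = 1.
SELECT TABLE_SCHEMA, TABLE_NAME, PROGRESS, AVAILABLE
FROM information_schema.tiflash_replica;
```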

## Set available zones

<CustomContent platform="tidb-cloud">

...

When configuring replicas, if you need to distribute TiFlash replicas to multiple data centers for disaster recovery, you can configure available zones by following the steps below:

...
3. PD schedules the replicas based on the labels. In this example, PD schedules the two replicas of the table `t` to two different available zones. You can use pd-ctl to view the scheduling:
    ```shell
    tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store
    ...
    "address": "172.16.5.82:23913",
    ...
    ```
