docs: update reject leader docs #6183

Merged: 9 commits merged on Aug 20, 2021
config-templates/geo-redundancy-deployment.yaml (2 changes: 1 addition & 1 deletion)
@@ -29,7 +29,7 @@ server_configs:
pd:
replication.location-labels: ["zone","dc","rack","host"]
replication.max-replicas: 5
label-property:
label-property: # Since TiDB 5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the placement rules.
reject-leader:
- key: "dc"
value: "sha"
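Since `label-property` with `reject-leader` no longer takes effect by default in TiDB 5.2 and later, the intent of the snippet above (keeping Raft leaders out of the `sha` data center) is expressed with placement rules instead. The following is a minimal sketch of one way to do that with pd-ctl; the rule `id`, the `count`, the file name, and the PD address are illustrative assumptions, not values taken from this PR.

```shell
# Sketch only: a placement rule that pins the replica in dc "sha" to the
# follower role, so it is never elected Raft leader (the old reject-leader intent).
cat > rules.json <<'EOF'
[
  {
    "group_id": "pd",
    "id": "dc-sha-follower",
    "role": "follower",
    "count": 1,
    "label_constraints": [
      {"key": "dc", "op": "in", "values": ["sha"]}
    ],
    "location_labels": ["zone", "dc", "rack", "host"]
  }
]
EOF

# Placement rules are enabled by default in v5.0 and later; the set command is
# shown only for completeness.
pd-ctl -u http://10.0.1.11:2379 config set enable-placement-rules true
pd-ctl -u http://10.0.1.11:2379 config placement-rules save --in=rules.json
# Note: the default rule (pd/default) still controls the voter count, so its
# count usually needs lowering to keep the total equal to max-replicas.
```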
geo-distributed-deployment-topology.md (8 changes: 6 additions & 2 deletions)
@@ -84,9 +84,13 @@ This section describes the key parameter configuration of the TiDB geo-distributed
value: "sha"
```

> **Note:**
>
> Since TiDB 5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the [placement rules](/configure-placement-rules.md).

For further information about labels and the number of Raft Group replicas, see [Schedule Replicas by Topology Labels](/schedule-replicas-by-topology-labels.md).

> **Note:**
>
> - You do not need to manually create the `tidb` user in the configuration file. The TiUP cluster component automatically creates the `tidb` user on the target machines. You can customize the user, or keep the user consistent with the control machine.
> - If you configure the deployment directory as a relative path, the cluster will be deployed in the home directory of the user.
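
The note above refers to the `tidb` user that TiUP creates and to relative deployment paths. As a minimal sketch (host settings omitted, paths are placeholder assumptions), this is how those two settings typically appear in the `global` section of a TiUP topology file:

```yaml
# Sketch: TiUP creates the OS user named under `global.user` on every target
# machine; relative deploy_dir/data_dir values are resolved under that user's
# home directory (for example /home/tidb/tidb-deploy).
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "tidb-deploy"
  data_dir: "tidb-data"
```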

[Schedule Replicas by Topology Labels](/schedule-replicas-by-topology-labels.md) further explains the use of labels and the number of Raft Group replicas.
multi-data-centers-in-one-city-deployment.md (4 changes: 4 additions & 0 deletions)
@@ -71,6 +71,10 @@ member leader_priority pdName2 4
member leader_priority pdName3 3
```
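
These `member leader_priority` subcommands run inside pd-ctl. A hedged sketch of a full invocation through TiUP is shown below; the version and PD address are placeholders rather than values from this PR:

```shell
# Sketch: the same commands run through pd-ctl via TiUP
# (higher value = higher priority; names must match `pd-ctl member` output).
tiup ctl:v5.2.1 pd -u http://10.0.1.11:2379 member leader_priority pdName2 4
tiup ctl:v5.2.1 pd -u http://10.0.1.11:2379 member leader_priority pdName3 3
```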

> **Note:**
>
> Since TiDB 5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the [placement rules](/configure-placement-rules.md).

**Disadvantages:**

- Write scenarios are still affected by network latency across DCs. This is because Raft follows the majority protocol and all written data must be replicated to at least two DCs.
three-data-centers-in-two-cities-deployment.md (4 changes: 4 additions & 0 deletions)
@@ -192,6 +192,10 @@ In the deployment of three DCs in two cities, to optimize performance, you need
```yaml
config set label-property reject-leader dc 3
```

> **Note:**
>
> Since TiDB 5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the [placement rules](/configure-placement-rules.md).
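
A hedged sketch of the replacement workflow: because `config set label-property reject-leader` is ignored by default from TiDB 5.2 on, the current placement rules can be inspected and exported with pd-ctl and then extended with a follower-role rule constrained to the `dc` label value `3`, similar to the rule sketched earlier in this diff (the PD address and file name below are placeholders):

```shell
# Sketch: view the placement rules that replace label-property, then export
# them to a file for editing; a follower-role rule constrained to dc "3"
# reproduces the old reject-leader behavior for that data center.
pd-ctl -u http://10.0.1.11:2379 config placement-rules show
pd-ctl -u http://10.0.1.11:2379 config placement-rules load --out=current-rules.json
```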

- Configure the priority of PD. To avoid the situation where the PD leader is in another city (IDC3), you can increase the priority of local PD (in Seattle) and decrease the priority of PD in another city (San Francisco). The larger the number, the higher the priority.
