---
title: tiup cluster scale-in
summary: The `tiup cluster scale-in` command is used to scale in the cluster by taking specified nodes offline, removing them from the cluster, and deleting the remaining files. Components like TiKV and TiFlash are taken offline asynchronously and require additional steps to check and clean up. The command also includes options for node specification, forceful removal, transfer timeout, and help information.
---

# tiup cluster scale-in

The `tiup cluster scale-in` command is used to scale in the cluster. It takes the services of the specified nodes offline, removes the specified nodes from the cluster, and deletes the remaining files from those nodes.

## Particular handling of components' offline process

Because the TiKV and TiFlash components are taken offline asynchronously (TiUP must first remove the node through the API) and the stopping process takes a long time (TiUP must continuously check whether the node is successfully taken offline), the TiKV and TiFlash components are handled as follows:

- For TiKV and TiFlash components:

    1. TiUP Cluster takes the node offline through the API and exits directly, without waiting for the process to complete.
    2. To check the status of the nodes being scaled in, execute the `tiup cluster display` command and wait for the status to become `Tombstone`.
    3. To clean up the nodes in the `Tombstone` status, execute the `tiup cluster prune` command. The `tiup cluster prune` command performs the following operations:

        - Stops the services of the nodes that have been taken offline.
        - Cleans up the data files of the nodes that have been taken offline.
        - Updates the cluster topology and removes the nodes that have been taken offline.
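For example, the three steps above map to the following commands. The cluster name `prod-cluster` and the node ID `10.0.1.4:20160` are placeholders; substitute your own values:

```shell
# Step 1: take a TiKV node offline; the command returns without waiting
tiup cluster scale-in prod-cluster -N 10.0.1.4:20160

# Step 2: check the node status repeatedly until it becomes Tombstone
tiup cluster display prod-cluster

# Step 3: clean up the nodes that have reached the Tombstone status
tiup cluster prune prod-cluster
```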

For other components:

- When taking a PD node offline, TiUP Cluster quickly deletes the specified node from the cluster through the API, stops the service of the specified PD node, and then deletes the related data files from the node.
- When taking other components offline, TiUP Cluster directly stops the node services and deletes the related data files from the specified nodes.

## Syntax

```shell
tiup cluster scale-in <cluster-name> [flags]
```

`<cluster-name>` is the name of the cluster to scale in. If you forget the cluster name, you can check it using the `tiup cluster list` command.

## Options

### -N, --node

- Specifies the nodes to take offline. If there are multiple nodes, separate them with commas.
- Data type: `STRING`
- There is no default value. This option is mandatory and the value must not be null.
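For example, the following command takes two TiKV nodes offline in one operation. The cluster name and node IDs are placeholders:

```shell
tiup cluster scale-in prod-cluster -N 10.0.1.4:20160,10.0.1.5:20160
```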

### --force

- Controls whether to forcibly remove the specified nodes from the cluster. Sometimes, the host of a node to be taken offline is down, which makes it impossible to connect to the node via SSH for operations; in this case, you can forcibly remove the node from the cluster using the `--force` option.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

> **Warning:**
>
> When you use this option to forcibly remove TiKV or TiFlash nodes that are in service or are pending offline, these nodes will be deleted immediately without waiting for data to be migrated. This imposes a very high risk of data loss. If data loss occurs in the region where the metadata is located, the entire cluster will be unavailable and unrecoverable.
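As a sketch (the cluster name and node ID are placeholders), forcibly removing a node whose host is unreachable looks like this. Given the warning above, only do this when the data on the node is safe to lose or has already been fully migrated:

```shell
tiup cluster scale-in prod-cluster -N 10.0.1.6:20160 --force
```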

### --transfer-timeout

- When a PD or TiKV node is to be removed, the Region leaders on that node are first transferred to other nodes. Because the transfer process takes some time, you can set the maximum waiting time (in seconds) by configuring `--transfer-timeout`. After the timeout, the `tiup cluster scale-in` command skips the waiting and starts the scale-in directly.
- Data type: `UINT`
- Default: `600` (seconds).

> **Note:**
>
> If a PD or TiKV node is taken offline directly without waiting for the leader transfer to be completed, the service performance might jitter.
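For example, to wait up to 30 minutes for the leader transfer before the scale-in proceeds (the cluster name and node ID are placeholders):

```shell
tiup cluster scale-in prod-cluster -N 10.0.1.4:20160 --transfer-timeout 1800
```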

### -h, --help

- Prints the help information.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

## Output

Shows the logs of the scaling-in process.

<< Back to the previous page - TiUP Cluster command list