diff --git a/v22.1/node-shutdown.md b/v22.1/node-shutdown.md
index b95aaac164b..fa015c6e273 100644
--- a/v22.1/node-shutdown.md
+++ b/v22.1/node-shutdown.md
@@ -45,7 +45,7 @@ When a node is permanently removed, the following stages occur in sequence:
 
 An operator [initiates the decommissioning process](#decommission-the-node) on the node.
 
-The node's [`is_decommissioning`](cockroach-node.html#node-status) field is set to `true` and its `membership` status is set to `decommissioning`, which causes its replicas to be rebalanced to other nodes.
+The node's [`is_decommissioning`](cockroach-node.html#node-status) field is set to `true` and its `membership` status is set to `decommissioning`, which causes its replicas to be rebalanced to other nodes. If the rebalancing stalls, the replicas that have yet to move are printed to the SQL shell and written to the `cockroach.log` log file.
 
 The node's [`/health?ready=1` endpoint](monitoring-and-alerting.html#health-ready-1) continues to consider the node "ready" so that the node can function as a gateway to route SQL client connections to relevant data.
 
@@ -345,6 +345,17 @@ Although [draining automatically follows decommissioning](#draining), we recomme
 
 Run [`cockroach node decommission`](cockroach-node.html) to decommission the node and rebalance its range replicas. For specific instructions and additional guidelines, see the [example](#remove-nodes).
 
+If the decommissioning process stalls, the range replicas that have failed to move off the decommissioning node are printed to the SQL shell and written to the `cockroach.log` file by default:
+
+~~~ shell
+possible decommission stall detected
+n5 still has replica id 6 for range r1
+n5 still has replica id 8 for range r2
+n5 still has replica id 6 for range r3
+n5 still has replica id 2 for range r4
+n5 still has replica id 3 for range r5
+~~~
+
 {{site.data.alerts.callout_danger}}
 Do **not** terminate the node process, delete the storage volume, or remove the VM before a `decommissioning` node has [changed its membership status](#status-change) to `decommissioned`. Prematurely terminating the process will prevent the node from rebalancing all of its range replicas onto other nodes gracefully, cause transient query errors in client applications, and leave the remaining ranges under-replicated and vulnerable to loss of [quorum](architecture/replication-layer.html#overview) if another node goes down.
 {{site.data.alerts.end}}
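
For reviewers, a minimal sketch of the workflow the new text documents, showing where the stall output above would appear. The node ID `5` (matching the sample output), the `certs` directory, and the `--host` value are illustrative placeholders, not values taken from the patch:

~~~ shell
# Minimal sketch: decommission node 5 and watch its output for the
# "possible decommission stall detected" messages documented above.
# Node ID, certs directory, and host address are placeholders.
cockroach node decommission 5 --certs-dir=certs --host=<address of any live node>

# Optionally, monitor rebalancing progress and membership status from another terminal.
cockroach node status --decommission --certs-dir=certs --host=<address of any live node>
~~~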