kv: issue replication lag probe before lease transfer #96304
Labels: A-kv (Anything in KV that doesn't belong in a more specific category), C-enhancement (Solution expected to add code/behavior + preserve backward-compat; pg compat issues are the exception), T-kv (KV Team)
Problem
In #81561, we outlined the risk of range unavailability due to lease transfers sent to lagging followers. Recall that a follower replica must apply a lease transfer through its Raft log before it can take over as the leaseholder, which means it must first catch up on its log if it is behind. While the incoming leaseholder is catching up on its log, the range is unavailable.
In that issue, we focused on followers that needed a Raft snapshot to catch up on their log. Those situations result in high and unpredictable replication lag, because snapshots require a bulk transfer of data that is queued and paced.
However, there is a less severe form of the same issue. Lease transfers may still cause unavailability if the incoming leaseholder is connected to the leader's log (i.e. does not need a snapshot), but is trailing by many log entries. In general, any meaningful replication lag is a problem for lease transfers.
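For concreteness, the lag in this non-snapshot case can be thought of as the gap between the leader's last log index and the follower's match index. The sketch below illustrates that computation using simplified stand-in types; these are not the real CockroachDB or etcd/raft APIs.

```go
package kvlease

// Simplified stand-ins for Raft progress state, for illustration only.

// raftProgress records the highest log index known to be replicated to a
// given follower (its "match" index).
type raftProgress struct {
	Match uint64
}

// raftStatus is a leader-side view of the log and follower progress.
type raftStatus struct {
	LastIndex uint64                  // last index appended to the leader's log
	Progress  map[uint64]raftProgress // keyed by replica ID
}

// replicationLag returns how many log entries the follower trails the
// leader by, or false if the leader is not tracking the follower's
// progress (for example, because the follower needs a snapshot).
func replicationLag(s raftStatus, replicaID uint64) (uint64, bool) {
	pr, ok := s.Progress[replicaID]
	if !ok || pr.Match > s.LastIndex {
		return 0, false
	}
	return s.LastIndex - pr.Match, true
}
```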
Proposed solution
To place a soft bound on the period of unavailability, we could measure the replication lag of the prospective leaseholder before transferring it the lease. If the lag is too large, we could reject the lease transfer. One way to accomplish this would be to measure how long a fake write, pushed through Raft, takes to apply on the prospective leaseholder. We have most of the infrastructure needed to support this; see WaitForApplication.

Note that we cannot require that the follower is fully caught up on its log. That would prevent followers with a small but persistent lag, caused by network latency and steady traffic, from ever qualifying for the lease, which is what we saw in #38065.
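A rough sketch of what such a probe could look like, assuming a hypothetical interface for proposing a no-op write and waiting for the target replica to apply it. The names probeTarget, ProposeNoop, checkTargetCaughtUp, and the 500ms threshold are illustrative assumptions, not existing APIs; the real change would plug into the existing WaitForApplication machinery.

```go
package kvlease

import (
	"context"
	"errors"
	"time"
)

// maxProbeLag is a hypothetical threshold: if the prospective leaseholder
// cannot apply a probe write within this duration, the transfer is rejected.
const maxProbeLag = 500 * time.Millisecond

// probeTarget abstracts the two operations the probe needs. The method
// names are illustrative only.
type probeTarget interface {
	// ProposeNoop proposes an empty command through Raft and returns the
	// log index at which it was proposed.
	ProposeNoop(ctx context.Context) (index uint64, err error)
	// WaitForApplication blocks until the prospective leaseholder has
	// applied the given index, or the context is done.
	WaitForApplication(ctx context.Context, index uint64) error
}

// errLaggingTarget indicates the prospective leaseholder is too far behind
// to safely receive the lease.
var errLaggingTarget = errors.New("prospective leaseholder is lagging; rejecting lease transfer")

// checkTargetCaughtUp measures how long the prospective leaseholder takes
// to apply a probe write. If it does not apply within maxProbeLag, the
// lease transfer should be rejected rather than risking a long
// unavailability window.
func checkTargetCaughtUp(ctx context.Context, t probeTarget) error {
	index, err := t.ProposeNoop(ctx)
	if err != nil {
		return err
	}
	ctx, cancel := context.WithTimeout(ctx, maxProbeLag)
	defer cancel()
	if err := t.WaitForApplication(ctx, index); err != nil {
		if errors.Is(err, context.DeadlineExceeded) {
			return errLaggingTarget
		}
		return err
	}
	return nil
}
```

Using a timed probe rather than an exact "fully caught up" check keeps the gate compatible with followers that carry a small but persistent lag, per the #38065 note above.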
Jira issue: CRDB-24054
Epic: CRDB-39898