kv: move range lease checks and transfers below latching
Needed for cockroachdb#57688.

This commit reworks interactions between range leases and requests, pulling the
consultation of a replica's lease down below the level of latching while keeping
heavy-weight operations like lease acquisitions above the level of latching.
Doing so comes with several benefits, some related specifically to non-blocking
transactions and some more general.

Background

Before discussing the change here, let's discuss how lease checks, lease
acquisitions, lease redirection, and lease transfers currently work. Today,
requests consult a replica's range lease before acquiring latches. If the lease
is good to go, the request proceeds to acquire latches. If the lease is not
currently held by any replica, the lease is acquired (again, above latches)
through a coalesced `RequestLeaseRequest`. If the lease is currently held by a
different replica, the request is redirected to that replica using a
`NotLeaseHolderError`. Finally, if the lease check notices a lease transfer in
progress, the request is optimistically redirected to the prospective new
leaseholder.

This all works, but only because it's been around for so long. Due to the lease
check above latching, we're forced to go to great lengths to get the
synchronization with in-flight requests right, which leads to very subtle logic.
This is most apparent with lease transfers, which properly synchronize with
ongoing requests through a delicate dance with the HLC clock and some serious
"spooky action at a distance". Every request bumps the local HLC clock in
`Store.Send`, then grabs the replica mutex, checks for an ongoing lease
transfer, drops the replica mutex, then evaluates. Lease transfers grab the
replica mutex, grab a clock reading from the local HLC clock, bump the
minLeaseProposedTS to stop using the current lease, drop the replica mutex,
then propose a new lease using this clock reading as its start time. This works
only because each request bumps the HLC clock _before_ checking the lease, so
the HLC clock can serve as an upper bound on every request that has made it
through the lease check by the time the lease transfer begins.

This structure is inflexible, subtle, and falls over as soon as we try to extend
it.
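
To make the old dance concrete, here is a toy Go sketch of the invariant it relies on (all types and names here are illustrative stand-ins, not CockroachDB code): because every request forwards the HLC clock before its lease check, a single clock reading taken by the transfer upper-bounds the timestamp of every request that has passed the check.

```go
package main

import (
	"fmt"
	"sync"
)

// clock is a toy HLC: a monotonically increasing timestamp.
type clock struct {
	mu  sync.Mutex
	now int64
}

// update forwards the clock to ts, as Store.Send does for every request.
func (c *clock) update(ts int64) {
	c.mu.Lock()
	if ts > c.now {
		c.now = ts
	}
	c.mu.Unlock()
}

func (c *clock) read() int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.now
}

func main() {
	c := &clock{}
	// Requests bump the clock BEFORE checking the lease.
	reqTimestamps := []int64{5, 9, 3}
	for _, ts := range reqTimestamps {
		c.update(ts) // Store.Send
		// ...lease check happens here, above latching...
	}
	// A lease transfer grabs a clock reading as the new lease's start time.
	// Because every admitted request bumped the clock first, this reading
	// upper-bounds all of their timestamps.
	transferStart := c.read()
	for _, ts := range reqTimestamps {
		if ts > transferStart {
			panic("request served above new lease start time")
		}
	}
	fmt.Println("transfer start", transferStart, "covers all requests")
}
```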

Motivation

The primary motivation for pulling lease checks and transfers below latching is
that the interaction between requests and lease transfers is incompatible with
future-time operations, a key part of the non-blocking transaction project. This
is because the structure relies on the HLC clock providing an upper bound on the
time of any request served by an outgoing leaseholder, which is attached to
lease transfers to ensure that the new leaseholder does not violate any request
served on the old leaseholder. But this is quickly violated once we start
serving future-time operations, which don't bump the HLC clock.

So we need to look elsewhere for this information. The obvious place to look is
the timestamp cache, which records the upper-bound read time of each key span
in a range, even if this upper bound time is
synthetic. If we could scan the timestamp cache and attach the maximum read time
to a lease transfer (through a new field, not as the lease start time), we'd be
good. But this runs into a problem, because if we just read the timestamp cache
under the lease transfer's lock, we can't be sure we didn't miss any in-progress
operations that had passed the lease check previously but had not yet bumped the
timestamp cache. Maybe they are still reading? So the custom locking quickly
runs into problems (I said it was inflexible!).
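
The race described above can be forced deterministically in a toy Go sketch (channels stage the bad interleaving; every name here is illustrative, not CockroachDB code): a read passes the lease check, the transfer snapshots the timestamp cache, and only then does the read bump the cache, so the snapshot misses it.

```go
package main

import "fmt"

func main() {
	var tsCacheMax int64 // high-water mark of served read timestamps

	passedLeaseCheck := make(chan struct{})
	snapshot := make(chan int64)
	done := make(chan struct{})

	// An in-flight read at ts=10: it has passed the lease check (above
	// latching) but has not yet bumped the timestamp cache.
	go func() {
		passedLeaseCheck <- struct{}{}
		bound := <-snapshot // the transfer snapshots the cache first...
		tsCacheMax = 10     // ...and only then does the read bump it.
		fmt.Println("transfer's bound:", bound, "actual read served at:", tsCacheMax)
		close(done)
	}()

	<-passedLeaseCheck
	// Without latching, the transfer has no way to flush out the in-flight
	// read, so its snapshot of the timestamp cache misses it.
	snapshot <- tsCacheMax // snapshot = 0
	<-done
}
```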

Solution

The solution here is to stop relying on custom locking for lease transfers by
pulling the lease check below latching and by pulling the determination of the
transfer's start time below latching. This ensures that during a lease transfer,
we not only block new requests, but also flush out in-flight requests. This
means that by the time we look at the timestamp cache during the evaluation of a
lease transfer, we know it has already been updated by any request that will be
served under the current lease.

This commit doesn't make the switch from consulting the HLC clock to consulting
the timestamp cache during TransferLease request evaluation, but a future commit
will.
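
Under the new ordering, the flush falls out of latching itself. A minimal sketch, with an `RWMutex` standing in for the latch manager (illustrative only, not the real latching code): requests bump the timestamp cache while holding read latches, and the transfer's write latch over the whole range guarantees it sees every served timestamp before picking a start time.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var latches sync.RWMutex // stand-in for the range's latch manager
	var mu sync.Mutex
	maxServed := int64(0) // stand-in for the timestamp cache's high-water mark

	var wg sync.WaitGroup
	for ts := int64(1); ts <= 10; ts++ {
		wg.Add(1)
		go func(ts int64) {
			defer wg.Done()
			latches.RLock() // requests latch non-exclusively
			defer latches.RUnlock()
			mu.Lock()
			if ts > maxServed {
				maxServed = ts // each request bumps the timestamp cache
			}
			mu.Unlock()
		}(ts)
	}
	wg.Wait() // in the real system, the write latch below provides this flush

	// The lease transfer declares latches over the whole range: it blocks new
	// requests and flushes out in-flight ones before evaluating, so the
	// timestamp cache it reads is complete.
	latches.Lock()
	start := maxServed + 1 // a start time above everything already served
	latches.Unlock()
	fmt.Println("new lease start:", start)
}
```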

Other benefits

Besides this primary change, a number of other benefits fall out of this
restructuring.

1. we avoid relying on custom synchronization around leases, instead relying
   on the more general latching mechanism.
2. we more closely align `TransferLeaseRequest` and `SubsumeRequest`, which now
   both grab clock readings during evaluation and will both need to forward
   their clock reading by the upper-bound of a range's portion of the timestamp
   cache. It makes sense that these two requests would be very similar, as both
   are responsible for renouncing the current leaseholder's powers and passing
   them elsewhere.
3. we more closely align the lease acquisition handling with the handling of
   `MergeInProgressError` by classifying a new `InvalidLeaseError` as a
   "concurrencyRetryError" (see isConcurrencyRetryError). This fits the existing
   structure of: grab latches, check range state, drop latches and wait if
   necessary, retry.
4. in doing so, we fuse the critical section of lease checks and the rest of
   the checks in `checkExecutionCanProceed`. So we grab the replica read lock
   one fewer time in the request path.
5. we move one step closer to a world where we can "ship a portion of the
   timestamp cache" during lease transfers (and range merges) to avoid retry
   errors / transaction aborts on the new leaseholder. This commit will be
   followed up by one that ships a very basic summary of a leaseholder's
   timestamp cache during lease transfers. However, this would now be trivial to
   extend with higher resolution information, given some size limit. Perhaps we
   prioritize the local portion of the timestamp cache to avoid txn aborts?
6. now that leases are checked below latching, we no longer have the potential
   for an arbitrary delay due to latching and waiting on locks between when the
   lease is checked and when a request evaluates, so we no longer need checks
   like [this](https://github.com/cockroachdb/cockroach/blob/7bcb2cef794da56f6993f1b27d5b6a036016242b/pkg/kv/kvserver/replica_write.go#L119).
7. we pull observed timestamp handling a layer down, which will be useful to
   address plumbing comments on cockroachdb#57077.

Other behavioral changes

There are two auxiliary behavioral changes made by this commit that deserve
attention.

The first is that during a lease transfer, operations now block on the outgoing
leaseholder instead of immediately redirecting to the expected next leaseholder.
This has trade-offs. On one hand, this delays redirection, which may make lease
transfers more disruptive to ongoing traffic. On the other, we've seen in the
past that the optimistic redirection is not an absolute win. In many cases, it
can lead to thrashing and lots of wasted work, as the outgoing leaseholder and
the incoming leaseholder both point at each other and requests ping-pong between
them. We've seen this cause serious issues like cockroachdb#22837 and cockroachdb#32367, which we
addressed by adding exponential backoff in the client in 89d349a. So while this
change may make average-case latency during lease transfers slightly worse, it
will keep things much more orderly, avoid wasted work, and reduce worst-case
latency during lease transfers.

The other behavioral change made by this commit is that observed timestamps are
no longer applied to a request to reduce its MaxOffset until after latching and
locking, instead of before. This sounds concerning, but it's actually not for
two reasons. First, as of cockroachdb#57136, a transaction's uncertainty interval is no
longer considered by the lock table because locks in a transaction's uncertainty
interval are no longer considered write-read conflicts. Instead, those locks'
provisional values are considered at evaluation time to be uncertain. Second,
the fact that the observed timestamp-limited MaxOffset was being used for
latching is no longer correct in a world with synthetic timestamps (see cockroachdb#57077),
so we would have had to make this change anyway. So put together, this
behavioral change isn't meaningful.
nvanbenschoten committed Jan 17, 2021
1 parent 62f165f commit 35a8b4b
Showing 30 changed files with 1,279 additions and 684 deletions.
64 changes: 64 additions & 0 deletions pkg/kv/kvserver/batcheval/cmd_lease_test.go
@@ -17,7 +17,10 @@ import (

"github.com/cockroachdb/cockroach/pkg/base"
"github.com/cockroachdb/cockroach/pkg/roachpb"
"github.com/cockroachdb/cockroach/pkg/storage"
"github.com/cockroachdb/cockroach/pkg/testutils"
"github.com/cockroachdb/cockroach/pkg/testutils/serverutils"
"github.com/cockroachdb/cockroach/pkg/util/hlc"
"github.com/cockroachdb/cockroach/pkg/util/leaktest"
"github.com/cockroachdb/cockroach/pkg/util/log"
"github.com/stretchr/testify/require"
@@ -120,10 +123,13 @@ func TestLeaseCommandLearnerReplica(t *testing.T) {
}
desc := roachpb.RangeDescriptor{}
desc.SetReplicas(roachpb.MakeReplicaSet(replicas))
manual := hlc.NewManualClock(123)
clock := hlc.NewClock(manual.UnixNano, time.Nanosecond)
cArgs := CommandArgs{
EvalCtx: (&MockEvalCtx{
StoreID: voterStoreID,
Desc: &desc,
Clock: clock,
}).EvalContext(),
Args: &roachpb.TransferLeaseRequest{
Lease: roachpb.Lease{
@@ -156,3 +162,61 @@ func TestLeaseCommandLearnerReplica(t *testing.T) {
`replica (n2,s2):2LEARNER of type LEARNER cannot hold lease`
require.EqualError(t, err, expForLearner)
}

// TestLeaseTransferForwardsStartTime tests that during a lease transfer, the
// start time of the new lease is determined during evaluation, after latches
// have granted the lease transfer full mutual exclusion over the leaseholder.
func TestLeaseTransferForwardsStartTime(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

testutils.RunTrueAndFalse(t, "epoch", func(t *testing.T, epoch bool) {
ctx := context.Background()
db := storage.NewDefaultInMem()
defer db.Close()
batch := db.NewBatch()
defer batch.Close()

replicas := []roachpb.ReplicaDescriptor{
{NodeID: 1, StoreID: 1, Type: roachpb.ReplicaTypeVoterFull(), ReplicaID: 1},
{NodeID: 2, StoreID: 2, Type: roachpb.ReplicaTypeVoterFull(), ReplicaID: 2},
}
desc := roachpb.RangeDescriptor{}
desc.SetReplicas(roachpb.MakeReplicaSet(replicas))
manual := hlc.NewManualClock(123)
clock := hlc.NewClock(manual.UnixNano, time.Nanosecond)

nextLease := roachpb.Lease{
Replica: replicas[1],
Start: clock.NowAsClockTimestamp(),
}
if epoch {
nextLease.Epoch = 1
} else {
exp := nextLease.Start.ToTimestamp().Add(9*time.Second.Nanoseconds(), 0)
nextLease.Expiration = &exp
}
cArgs := CommandArgs{
EvalCtx: (&MockEvalCtx{
StoreID: 1,
Desc: &desc,
Clock: clock,
}).EvalContext(),
Args: &roachpb.TransferLeaseRequest{
Lease: nextLease,
},
}

manual.Increment(1000)
beforeEval := clock.NowAsClockTimestamp()

res, err := TransferLease(ctx, batch, cArgs, nil)
require.NoError(t, err)

// The proposed lease start time should be assigned at eval time.
propLease := res.Replicated.State.Lease
require.NotNil(t, propLease)
require.True(t, nextLease.Start.Less(propLease.Start))
require.True(t, beforeEval.Less(propLease.Start))
})
}
10 changes: 9 additions & 1 deletion pkg/kv/kvserver/batcheval/cmd_lease_transfer.go
@@ -67,6 +67,14 @@ func TransferLease(
// LeaseRejectedError before going through Raft.
prevLease, _ := cArgs.EvalCtx.GetLease()

// Forward the lease's start time to a current clock reading. Now
// that we're holding latches across the entire range, we know that
// this time is greater than the timestamps at which any request was
// serviced by the leaseholder before it stopped serving requests
// (i.e. before the TransferLease request acquired latches).
newLease := args.Lease
newLease.Start.Forward(cArgs.EvalCtx.Clock().NowAsClockTimestamp())

// For now, don't allow replicas of type LEARNER to be leaseholders. There's
// no reason this wouldn't work in principle, but it seems inadvisable. In
// particular, learners can't become raft leaders, so we wouldn't be able to
@@ -84,5 +92,5 @@

log.VEventf(ctx, 2, "lease transfer: prev lease: %+v, new lease: %+v", prevLease, args.Lease)
return evalNewLease(ctx, cArgs.EvalCtx, readWriter, cArgs.Stats,
args.Lease, prevLease, false /* isExtension */, true /* isTransfer */)
newLease, prevLease, false /* isExtension */, true /* isTransfer */)
}
2 changes: 2 additions & 0 deletions pkg/kv/kvserver/batcheval/cmd_push_txn.go
@@ -124,6 +124,8 @@ func PushTxn(
return result.Result{}, errors.Errorf("request timestamp %s less than pushee txn timestamp %s", h.Timestamp, args.PusheeTxn.WriteTimestamp)
}
now := cArgs.EvalCtx.Clock().Now()
// TODO(nvanbenschoten): remove this limitation. But when doing so,
// keep the h.Timestamp.Less(args.PushTo) check above.
if now.Less(h.Timestamp) {
// The batch's timestamp should have been used to update the clock.
return result.Result{}, errors.Errorf("request timestamp %s less than current clock time %s", h.Timestamp, now)
42 changes: 22 additions & 20 deletions pkg/kv/kvserver/client_merge_test.go
@@ -1391,6 +1391,7 @@ func TestStoreRangeMergeRHSLeaseExpiration(t *testing.T) {
storeCfg := kvserver.TestStoreConfig(nil)
storeCfg.TestingKnobs.DisableReplicateQueue = true
storeCfg.TestingKnobs.DisableMergeQueue = true
storeCfg.TestingKnobs.AllowLeaseRequestProposalsWhenNotLeader = true
storeCfg.Clock = nil // manual clock

// The synchronization in this test is tricky. The merge transaction is
@@ -1414,21 +1415,24 @@
}

// Install a hook to observe when a get or a put request for a special key,
// rhsSentinel, acquires latches and begins evaluating.
// rhsSentinel, hits a MergeInProgressError and begins waiting on the merge.
const reqConcurrency = 10
rhsSentinel := roachpb.Key("rhs-sentinel")
reqAcquiredLatch := make(chan struct{}, reqConcurrency)
storeCfg.TestingKnobs.TestingLatchFilter = func(_ context.Context, ba roachpb.BatchRequest) *roachpb.Error {
for _, r := range ba.Requests {
req := r.GetInner()
switch req.Method() {
case roachpb.Get, roachpb.Put:
if req.Header().Key.Equal(rhsSentinel) {
reqAcquiredLatch <- struct{}{}
reqWaitingOnMerge := make(chan struct{}, reqConcurrency)
storeCfg.TestingKnobs.TestingConcurrencyRetryFilter = func(
_ context.Context, ba roachpb.BatchRequest, pErr *roachpb.Error,
) {
if _, ok := pErr.GetDetail().(*roachpb.MergeInProgressError); ok {
for _, r := range ba.Requests {
req := r.GetInner()
switch req.Method() {
case roachpb.Get, roachpb.Put:
if req.Header().Key.Equal(rhsSentinel) {
reqWaitingOnMerge <- struct{}{}
}
}
}
}
return nil
}

mtc := &multiTestContext{
@@ -1546,19 +1550,17 @@ func TestStoreRangeMergeRHSLeaseExpiration(t *testing.T) {
time.Sleep(time.Millisecond)
}

// Wait for the get and put requests to acquire latches, which is as far as
// they can get while the merge is in progress. Then wait a little bit
// longer. This tests that the requests really do get stuck waiting for the
// merge to complete without depending too heavily on implementation
// details.
// Wait for the get and put requests to begin waiting on the merge to
// complete. Then wait a little bit longer. This tests that the requests
// really do get stuck waiting for the merge to complete without depending
// too heavily on implementation details.
for i := 0; i < reqConcurrency; i++ {
select {
case <-reqAcquiredLatch:
// Latch acquired.
case <-reqWaitingOnMerge:
// Waiting on merge.
case pErr := <-reqErrs:
// Requests may never make it to the latch acquisition if s1 has not
// yet learned s2's lease is expired. Instead, we'll see a
// NotLeaseholderError.
// Requests may never wait on the merge if s1 has not yet learned
// s2's lease is expired. Instead, we'll see a NotLeaseholderError.
require.IsType(t, &roachpb.NotLeaseHolderError{}, pErr.GetDetail())
}
}
6 changes: 6 additions & 0 deletions pkg/kv/kvserver/client_replica_test.go
@@ -3569,6 +3569,12 @@ func TestDiscoverIntentAcrossLeaseTransferAwayAndBack(t *testing.T) {
err = tc.MoveRangeLeaseNonCooperatively(rangeDesc, tc.Target(1))
require.NoError(t, err)

// Send an arbitrary request to the range to update the range descriptor
// cache with the new lease. This prevents the rollback from getting stuck
// waiting on latches held by txn2's read on the old leaseholder.
_, err = kvDB.Get(ctx, "c")
require.NoError(t, err)

// Roll back txn1.
err = txn1.Rollback(ctx)
require.NoError(t, err)
6 changes: 4 additions & 2 deletions pkg/kv/kvserver/concurrency/concurrency_manager.go
@@ -88,6 +88,7 @@ func NewManager(cfg Config) Manager {
lt: lt,
ltw: &lockTableWaiterImpl{
st: cfg.Settings,
clock: cfg.Clock,
stopper: cfg.Stopper,
ir: cfg.IntentResolver,
lt: lt,
@@ -441,9 +442,10 @@ func (g *Guard) HoldingLatches() bool {
return g != nil && g.lg != nil
}

// AssertLatches asserts that the guard is non-nil and holding latches.
// AssertLatches asserts that the guard is non-nil and holding latches, if the
// request is supposed to hold latches while evaluating in the first place.
func (g *Guard) AssertLatches() {
if !g.HoldingLatches() {
if shouldAcquireLatches(g.Req) && !g.HoldingLatches() {
panic("expected latches held, found none")
}
}
21 changes: 19 additions & 2 deletions pkg/kv/kvserver/concurrency/concurrency_manager_test.go
@@ -73,6 +73,7 @@ import (
// debug-latch-manager
// debug-lock-table
// debug-disable-txn-pushes
// debug-set-clock ts=<secs>
// reset
//
func TestConcurrencyManagerBasic(t *testing.T) {
@@ -119,7 +120,6 @@ func TestConcurrencyManagerBasic(t *testing.T) {
ReadTimestamp: ts,
MaxTimestamp: maxTS,
}
txn.UpdateObservedTimestamp(c.nodeDesc.NodeID, ts.UnsafeToClockTimestamp())
c.registerTxn(txnName, txn)
return ""

@@ -456,6 +456,17 @@ func TestConcurrencyManagerBasic(t *testing.T) {
c.disableTxnPushes()
return ""

case "debug-set-clock":
var secs int
d.ScanArgs(t, "ts", &secs)

nanos := int64(secs) * time.Second.Nanoseconds()
if nanos < c.manual.UnixNano() {
d.Fatalf(t, "manual clock must advance")
}
c.manual.Set(nanos)
return ""

case "reset":
if n := mon.numMonitored(); n > 0 {
d.Fatalf(t, "%d requests still in flight", n)
@@ -486,6 +497,8 @@ type cluster struct {
nodeDesc *roachpb.NodeDescriptor
rangeDesc *roachpb.RangeDescriptor
st *clustersettings.Settings
manual *hlc.ManualClock
clock *hlc.Clock
m concurrency.Manager

// Definitions.
@@ -514,10 +527,13 @@ type txnPush struct {
}

func newCluster() *cluster {
manual := hlc.NewManualClock(123 * time.Second.Nanoseconds())
return &cluster{
st: clustersettings.MakeTestingClusterSettings(),
nodeDesc: &roachpb.NodeDescriptor{NodeID: 1},
rangeDesc: &roachpb.RangeDescriptor{RangeID: 1},
st: clustersettings.MakeTestingClusterSettings(),
manual: manual,
clock: hlc.NewClock(manual.UnixNano, time.Nanosecond),

txnsByName: make(map[string]*roachpb.Transaction),
requestsByName: make(map[string]concurrency.Request),
@@ -532,6 +548,7 @@ func (c *cluster) makeConfig() concurrency.Config {
NodeDesc: c.nodeDesc,
RangeDesc: c.rangeDesc,
Settings: c.st,
Clock: c.clock,
IntentResolver: c,
TxnWaitMetrics: txnwait.NewMetrics(time.Minute),
}
26 changes: 25 additions & 1 deletion pkg/kv/kvserver/concurrency/lock_table_waiter.go
@@ -22,6 +22,7 @@ import (
"github.com/cockroachdb/cockroach/pkg/settings"
"github.com/cockroachdb/cockroach/pkg/settings/cluster"
"github.com/cockroachdb/cockroach/pkg/storage/enginepb"
"github.com/cockroachdb/cockroach/pkg/util/hlc"
"github.com/cockroachdb/cockroach/pkg/util/log"
"github.com/cockroachdb/cockroach/pkg/util/stop"
"github.com/cockroachdb/cockroach/pkg/util/syncutil"
@@ -92,6 +93,7 @@ var LockTableDeadlockDetectionPushDelay = settings.RegisterDurationSetting(
// lockTableWaiterImpl is an implementation of lockTableWaiter.
type lockTableWaiterImpl struct {
st *cluster.Settings
clock *hlc.Clock
stopper *stop.Stopper
ir IntentResolver
lt lockTable
@@ -555,12 +557,34 @@ func (w *lockTableWaiterImpl) pushHeader(req Request) roachpb.Header {
// could race). Since the subsequent execution of the original request
// might mutate the transaction, make a copy here. See #9130.
h.Txn = req.Txn.Clone()

// We must push at least to req.Timestamp, but for transactional
// requests we actually want to go all the way up to the top of the
// transaction's uncertainty interval. This allows us to not have to
// restart for uncertainty if the push succeeds and we come back and
// read.
h.Timestamp.Forward(req.Txn.MaxTimestamp)
//
// However, because we intend to read on the same node, we can limit
// this to a clock reading from the local clock, relying on the fact
// that an observed timestamp from this node will limit our local max
// timestamp when we return to read.
//
// We intentionally do not use an observed timestamp directly to limit
// the push timestamp, because observed timestamps are not applicable in
// some cases (e.g. across lease changes). So to avoid an infinite loop
// where we continue to push to an unusable observed timestamp and
// continue to find the pushee in our uncertainty interval, we instead
// use the present time to limit the push timestamp, which is less
// optimal but is guaranteed to progress.
//
// There is some inherent raciness here, because the lease may move
// between when we push and when we later read. In such cases, we may
// need to push again, but expect to eventually succeed in reading,
// either after lease movement subsides or after the reader's read
// timestamp surpasses its max timestamp.
localMaxTS := req.Txn.MaxTimestamp
localMaxTS.Backward(w.clock.Now())
h.Timestamp.Forward(localMaxTS)
}
return h
}
3 changes: 3 additions & 0 deletions pkg/kv/kvserver/concurrency/lock_table_waiter_test.go
@@ -15,6 +15,7 @@ import (
"fmt"
"math/rand"
"testing"
"time"

"github.com/cockroachdb/cockroach/pkg/kv/kvserver/concurrency/lock"
"github.com/cockroachdb/cockroach/pkg/kv/kvserver/intentresolver"
@@ -94,11 +95,13 @@ func setupLockTableWaiterTest() (*lockTableWaiterImpl, *mockIntentResolver, *moc
st := cluster.MakeTestingClusterSettings()
LockTableLivenessPushDelay.Override(&st.SV, 0)
LockTableDeadlockDetectionPushDelay.Override(&st.SV, 0)
manual := hlc.NewManualClock(123)
guard := &mockLockTableGuard{
signal: make(chan struct{}, 1),
}
w := &lockTableWaiterImpl{
st: st,
clock: hlc.NewClock(manual.UnixNano, time.Nanosecond),
stopper: stop.NewStopper(),
ir: ir,
lt: &mockLockTable{},