Merge #103478
103478: concurrency: get rid of reservations in the lock table r=nvanbenschoten a=arulajmani

First 3 commits from #103373

This patch removes the notion of reservations from the lock table.
Reservations served as a claim that prevented multiple requests from
racing when a lock was released. Typically, when a lock was released,
only the first transactional writer in the list of queued writers was
freed to proceed; it did so by claiming a "reservation" on the lock.
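
As a hedged illustration of that old scheme (hypothetical, simplified
types; not the actual lock table code), lock release behaved roughly
like this:

```go
package sketch

// reservationLock is a hypothetical, simplified model of the old scheme.
type reservationLock struct {
	reservedBy    *request   // the current reservation holder, if any
	queuedWriters []*request // waiting writers, in arrival order
}

type request struct{ transactional bool }

// release models the old behavior: when the lock is released, only the
// first transactional writer leaves the queue, and it does so by
// claiming a reservation on the lock.
func (l *reservationLock) release() {
	for i, w := range l.queuedWriters {
		if w.transactional {
			l.reservedBy = w
			l.queuedWriters = append(l.queuedWriters[:i], l.queuedWriters[i+1:]...)
			return
		}
	}
	l.reservedBy = nil // no transactional writers queued
}
```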

All requests that are sequenced through the lock table are associated
with a sequence number based on arrival order. These sequence numbers
are used to uphold ~fairness as requests are sequenced. They also serve
a correctness purpose -- because not all locks are known upfront (as
uncontended replicated locks may be discovered during evaluation),
sequence numbers are used to break potential deadlocks that arise from
out-of-order locking. This motivated the concept of reservation
breaking, which could happen if a lower sequence number request
encountered a reservation held by a request with a higher sequence
number. This led to somewhat complex state management, where requests
could move between being reservation holders and inactive waiters
multiple times during their lifetime. A lot of this can be simplified
if we make no distinction between a reservation and an inactive waiter.
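
To make the sequence-number mechanism concrete, here's a minimal,
hedged sketch (hypothetical names; not CockroachDB code) of the
ordering check that motivated reservation breaking:

```go
package main

import "fmt"

// queuedWriter is a hypothetical stand-in for a request waiting in a
// lock's wait-queue. Sequence numbers are assigned in arrival order.
type queuedWriter struct {
	seqNum uint64
}

// mayBreakClaimOf reports whether r should get ahead of the current
// claimant: a lower sequence number wins, which keeps waiting ~fair and
// breaks deadlocks that arise from out-of-order lock discovery.
func (r queuedWriter) mayBreakClaimOf(claimant queuedWriter) bool {
	return r.seqNum < claimant.seqNum
}

func main() {
	claimant := queuedWriter{seqNum: 7}
	earlier := queuedWriter{seqNum: 3}
	fmt.Println(earlier.mayBreakClaimOf(claimant)) // true: 3 arrived first
}
```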

This patch gets rid of reservations entirely. Instead, it offers a new
invariant:

The head of the list of waiting writers should always be an inactive,
transactional writer if the lock isn't held.

In practice, this works out functionally the same as how reservations
operated, albeit with fewer state transitions. Being an inactive waiter
at the head of the lock's wait-queue serves as the request's claim on
the key. As such, verbiage that previously referenced "reservations" is
now updated to talk about claims and claimant transactions. There's a
bit of comment churn as a result. There's also some datadriven test
churn as part of this patch -- but it should be helpful in convincing
ourselves that this patch just changes concepts, not functionality. In
particular, what was previously the reservation holder is now the first
inactive queued writer at the lock.
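
Here is a minimal sketch of the new invariant (hypothetical, simplified
types; the real logic lives in the lock table's claimantTxn()):

```go
package sketch

// txnMeta and the types below are hypothetical simplifications.
type txnMeta struct{ id string }

type queuedReq struct {
	txn    *txnMeta
	active bool // false => inactive waiter
}

type lockState struct {
	holder        *txnMeta     // non-nil iff the lock is held
	queuedWriters []*queuedReq // waiting writers, in sequence-number order
}

// claimantTxn returns the transaction with a claim on the key, and
// whether that claim is backed by a held lock. Invariant: if the lock
// isn't held, the head of the wait-queue is an inactive, transactional
// writer -- the claimant.
func (l *lockState) claimantTxn() (_ *txnMeta, held bool) {
	if l.holder != nil {
		return l.holder, true
	}
	return l.queuedWriters[0].txn, false
}
```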

Closes #103361

Release note: None

Co-authored-by: Arul Ajmani <[email protected]>
craig[bot] and arulajmani committed May 31, 2023
2 parents 3ff847a + ff9c357 commit f0a23e1
Showing 28 changed files with 937 additions and 821 deletions.
25 changes: 16 additions & 9 deletions pkg/kv/kvserver/concurrency/concurrency_control.go
@@ -772,15 +772,22 @@ type lockTableGuard interface {
 // that conflict.
 CheckOptimisticNoConflicts(*lockspanset.LockSpanSet) (ok bool)

-// IsKeyLockedByConflictingTxn returns whether the specified key is locked or
-// reserved (see lockTable "reservations") by a conflicting transaction in the
-// lockTableGuard's snapshot of the lock table, given the caller's own desired
-// locking strength. If so, true is returned. If the key is locked, the lock
-// holder is also returned. Otherwise, if the key is reserved, nil is also
-// returned. A transaction's own lock or reservation does not appear to be
-// locked to itself (false is returned). The method is used by requests in
-// conjunction with the SkipLocked wait policy to determine which keys they
-// should skip over during evaluation.
+// IsKeyLockedByConflictingTxn returns whether the specified key is claimed
+// (see claimantTxn()) by a conflicting transaction in the lockTableGuard's
+// snapshot of the lock table, given the caller's own desired locking
+// strength. If so, true is returned. If the key is locked, the lock holder is
+// also returned. Otherwise, if the key was claimed by a concurrent request
+// still sequencing through the lock table, but the lock isn't held (yet), nil
+// is also returned.
+//
+// If the lock has been claimed (held or otherwise) by the transaction itself,
+// there's no conflict to speak of, so false is returned. In cases where the
+// lock isn't held, but the lock has been claimed by the transaction itself,
+// we do not make a distinction about which request claimed the key -- it
+// could either be the request itself, or a different concurrent request from
+// the same transaction; the specifics do not affect the caller.
+// This method is used by requests in conjunction with the SkipLocked wait
+// policy to determine which keys they should skip over during evaluation.
 IsKeyLockedByConflictingTxn(roachpb.Key, lock.Strength) (bool, *enginepb.TxnMeta)
 }
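
To make the contract above concrete, here's a hedged usage sketch: a
SkipLocked request filtering keys through this method. The narrowed
guard interface and the filter function are illustrative assumptions,
not the actual evaluation code:

```go
package sketch

import (
	"github.com/cockroachdb/cockroach/pkg/kv/kvserver/concurrency/lock"
	"github.com/cockroachdb/cockroach/pkg/roachpb"
	"github.com/cockroachdb/cockroach/pkg/storage/enginepb"
)

// guard narrows lockTableGuard to the one method this sketch needs.
type guard interface {
	IsKeyLockedByConflictingTxn(roachpb.Key, lock.Strength) (bool, *enginepb.TxnMeta)
}

// skipLockedFilter returns the keys a SkipLocked request should
// evaluate, dropping any key locked or claimed by a conflicting txn.
func skipLockedFilter(g guard, keys []roachpb.Key, str lock.Strength) []roachpb.Key {
	var out []roachpb.Key
	for _, k := range keys {
		conflicts, holder := g.IsKeyLockedByConflictingTxn(k, str)
		if conflicts {
			// holder is non-nil only if the lock is actually held; a nil
			// holder means a concurrent request has claimed the key but
			// the lock isn't held (yet). Either way, skip the key.
			_ = holder
			continue
		}
		out = append(out, k)
	}
	return out
}
```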

30 changes: 19 additions & 11 deletions pkg/kv/kvserver/concurrency/concurrency_manager.go
@@ -412,9 +412,10 @@ func (m *managerImpl) PoisonReq(g *Guard) {
 func (m *managerImpl) FinishReq(g *Guard) {
 // NOTE: we release latches _before_ exiting lock wait-queues deliberately.
 // Either order would be correct, but the order here avoids non-determinism in
-// cases where a request A holds both latches and lock wait-queue reservations
-// and has a request B waiting on its reservations. If request A released its
-// reservations before releasing its latches, it would be possible for B to
+// cases where a request A holds both latches and has claimed some keys by
+// virtue of being the first request in a lock wait-queue and has a request B
+// waiting on its claim. If request A released its claim (by exiting the lock
+// wait-queue) before releasing its latches, it would be possible for B to
 // beat A to the latch manager and end up blocking on its latches briefly. Not
 // only is this confusing in traces, but it is slightly less efficient than if
 // request A released latches before letting anyone waiting on it in the lock
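
A minimal sketch of that release ordering (hypothetical helpers
standing in for the real latch manager and lock table calls):

```go
package main

import "fmt"

type guard struct{ name string }

// releaseLatches and exitLockWaitQueues are hypothetical stand-ins.
func releaseLatches(g *guard)     { fmt.Println(g.name, "released latches") }
func exitLockWaitQueues(g *guard) { fmt.Println(g.name, "exited wait-queues, dropping claims") }

// finishReq releases latches first, then exits lock wait-queues, so a
// waiter woken by the queue exit never briefly blocks on our latches.
func finishReq(g *guard) {
	releaseLatches(g)
	exitLockWaitQueues(g)
}

func main() { finishReq(&guard{name: "request A"}) }
```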
@@ -759,15 +760,22 @@ func (g *Guard) CheckOptimisticNoLatchConflicts() (ok bool) {
 return g.lm.CheckOptimisticNoConflicts(g.lg, g.Req.LatchSpans)
 }

-// IsKeyLockedByConflictingTxn returns whether the specified key is locked or
-// reserved (see lockTable "reservations") by a conflicting transaction in the
-// Guard's snapshot of the lock table, given the caller's own desired locking
+// IsKeyLockedByConflictingTxn returns whether the specified key is claimed
+// (see claimantTxn()) by a conflicting transaction in the lockTableGuard's
+// snapshot of the lock table, given the caller's own desired locking
 // strength. If so, true is returned. If the key is locked, the lock holder is
-// also returned. Otherwise, if the key is reserved, nil is also returned. A
-// transaction's own lock or reservation does not appear to be locked to itself
-// (false is returned). The method is used by requests in conjunction with the
-// SkipLocked wait policy to determine which keys they should skip over during
-// evaluation.
+// also returned. Otherwise, if the key was claimed by a concurrent request
+// still sequencing through the lock table, but the lock isn't held (yet), nil
+// is also returned.
+//
+// If the lock has been claimed (held or otherwise) by the transaction itself,
+// there's no conflict to speak of, so false is returned. In cases where the
+// lock isn't held, but the lock has been claimed by the transaction itself,
+// we do not make a distinction about which request claimed the key -- it
+// could either be the request itself, or a different concurrent request from
+// the same transaction; the specifics do not affect the caller.
+// This method is used by requests in conjunction with the SkipLocked wait
+// policy to determine which keys they should skip over during evaluation.
 func (g *Guard) IsKeyLockedByConflictingTxn(
 	key roachpb.Key, strength lock.Strength,
 ) (bool, *enginepb.TxnMeta) {
