
concurrency: move finalizedTxnCache into lock table
This is a cleanup in preparation for future work, and it also
has some, possibly minor, immediate benefits.

In the future, the lock table will support multiple intents for
the same key if all but one are known to be finalized. So the
finalizedTxnCache belongs in the lock table data structure (a
rough sketch of such a cache follows the list below).
Additionally, we will support intent resolution without holding
latches, which has implications for data-structure consistency:
request evaluation will not be allowed to add discovered intents
to the lock table, since the discovery may be stale. This PR does
not change the discovery behavior, since we still need it for now
(due to interleaved intents), but it moves us along the path
towards the lock table data structure not relying on external
behavior to maintain its in-memory "cache" of locks.
Specifically, removing intents from the lock table while the
intent is still present in the engine is not principled. We
currently do this in two places:
- for optimizing limited scans: a later PR will fix this properly
  by checking the lock table after request evaluation, as
  outlined in cockroachdb#49973.
- using the finalizedTxnCache in the lockTableWaiterImpl: this
  use is changed in this PR. The code in the lock table still
  removes intents before resolution, but there is a TODO to fix
  that in the future; it should be easier to do so now that the
  behavior is contained in the lock table.
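
To make the new home of this state concrete, here is a rough,
hypothetical sketch (not the code in this PR) of what a cache of
finalized transactions can look like once it lives inside the lock
table: a small map from transaction ID to the finalized transaction
record, consulted when a scan encounters a lock.

package concurrency

import (
	"github.com/cockroachdb/cockroach/pkg/roachpb"
	"github.com/cockroachdb/cockroach/pkg/util/syncutil"
	"github.com/cockroachdb/cockroach/pkg/util/uuid"
)

// finalizedTxnSketch is a stand-in for the real cache, keyed by
// transaction ID, used to decide whether a lock's holder is already
// committed or aborted.
type finalizedTxnSketch struct {
	mu   syncutil.Mutex
	txns map[uuid.UUID]*roachpb.Transaction
}

// add records a finalized (committed or aborted) transaction.
func (c *finalizedTxnSketch) add(txn *roachpb.Transaction) {
	if !txn.Status.IsFinalized() {
		return // only finalized transactions belong here
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.txns == nil {
		c.txns = make(map[uuid.UUID]*roachpb.Transaction)
	}
	c.txns[txn.ID] = txn
}

// get returns the cached finalized transaction with the given ID, if any.
func (c *finalizedTxnSketch) get(id uuid.UUID) (*roachpb.Transaction, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	txn, ok := c.txns[id]
	return txn, ok
}

Keeping this state inside the lock table lets it be maintained and
cleared together with the rest of the lock state (see the removal of
lockTableWaiter.ClearCaches below), instead of separately by the
waiter.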

The immediate benefits, which may not have any practical
significance, are:
- We no longer resolve unreplicated locks -- they are simply
  removed.
- A replicated lock is removed from the lock table data structure
  only when the requester has finished its scan and is in a
  position to do resolution. Previously, a requester could remove
  the lock but then block on another lock and not resolve the
  first one, which caused wasteful evaluation of other requests.

Informs cockroachdb#41720

Release note: None
sumeerbhola committed Jan 6, 2021
1 parent d87fa94 commit 9146295
Showing 9 changed files with 1,595 additions and 167 deletions.
16 changes: 11 additions & 5 deletions pkg/kv/kvserver/concurrency/concurrency_control.go
@@ -567,6 +567,12 @@ type lockTable interface {
// txn.WriteTimestamp.
UpdateLocks(*roachpb.LockUpdate) error

// TransactionIsFinalized informs the lock table that a transaction is
// finalized. This is used by the lock table in a best-effort manner to
// avoid waiting on locks of finalized transactions and to tell the
// caller, via lockTableGuard.ResolveBeforeScanning, to resolve a batch
// of intents.
TransactionIsFinalized(*roachpb.Transaction)

// String returns a debug string representing the state of the lockTable.
String() string
}
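
As a hypothetical usage sketch (not part of this diff), a caller that
learns the outcome of the lock holder's transaction -- say, after a
successful push -- can hand the finalized transaction to the lock
table. The helper below is made up for illustration; only
lockTable.TransactionIsFinalized and roachpb's finalized-status check
are taken from the real interfaces.

package concurrency

import "github.com/cockroachdb/cockroach/pkg/roachpb"

// notifyIfFinalized is a made-up helper: once the caller knows the
// pushee's transaction is COMMITTED or ABORTED, it tells the lock table
// so that waiters can skip, and later resolve, that transaction's locks.
func notifyIfFinalized(lt lockTable, pushee *roachpb.Transaction) {
	if pushee.Status.IsFinalized() {
		lt.TransactionIsFinalized(pushee)
	}
}
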
@@ -588,6 +594,11 @@ type lockTableGuard interface {

// CurState returns the latest waiting state.
CurState() waitingState

// ResolveBeforeScanning lists the locks to resolve before scanning again.
// This must be called after the waiting state has transitioned to
// doneWaiting.
ResolveBeforeScanning() []roachpb.LockUpdate
}
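
A hypothetical caller-side sketch (not part of this diff) of how
ResolveBeforeScanning is meant to be used: wait until the guard
reports doneWaiting, resolve the batch of intents it hands back, and
only then scan again. The IntentResolver.ResolveIntents call and the
intentresolver.ResolveOptions value are assumptions for illustration.

package concurrency

import (
	"context"

	"github.com/cockroachdb/cockroach/pkg/kv/kvserver/intentresolver"
	"github.com/cockroachdb/cockroach/pkg/roachpb"
)

// resolveThenRescan is a made-up helper: it waits for the guard to
// reach doneWaiting, resolves any intents of finalized transactions
// that the lock table handed back, and then returns so the caller can
// scan again.
func resolveThenRescan(ctx context.Context, g lockTableGuard, ir IntentResolver) *Error {
	newStateC := g.NewStateChan()
	for {
		select {
		case <-newStateC:
		case <-ctx.Done():
			return roachpb.NewError(ctx.Err())
		}
		if state := g.CurState(); state.kind != doneWaiting {
			// Still blocked behind some lock; a real caller would push the
			// lock holder here. For this sketch we just keep waiting.
			continue
		}
		// The guard lists the intents of finalized transactions to resolve
		// before the next scan (assumed batch-resolution call).
		if toResolve := g.ResolveBeforeScanning(); len(toResolve) > 0 {
			if err := ir.ResolveIntents(ctx, toResolve, intentresolver.ResolveOptions{}); err != nil {
				return err
			}
		}
		return nil
	}
}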

// lockTableWaiter is concerned with waiting in lock wait-queues for locks held
@@ -646,11 +657,6 @@ type lockTableWaiter interface {
// and, in turn, remove this method. This will likely fall out of pulling
// all replicated locks into the lockTable.
WaitOnLock(context.Context, Request, *roachpb.Intent) *Error

// ClearCaches wipes all caches maintained by the lockTableWaiter. This is
// primarily used to recover memory when a replica loses a lease. However,
// it is also used in tests to reset the state of the lockTableWaiter.
ClearCaches()
}

// txnWaitQueue holds a collection of wait-queues for transaction records.
12 changes: 5 additions & 7 deletions pkg/kv/kvserver/concurrency/concurrency_manager.go
@@ -72,6 +72,9 @@ func (c *Config) initDefaults() {
func NewManager(cfg Config) Manager {
cfg.initDefaults()
m := new(managerImpl)
lt := &lockTableImpl{
maxLocks: cfg.MaxLockTableSize,
}
*m = managerImpl{
// TODO(nvanbenschoten): move pkg/storage/spanlatch to a new
// pkg/storage/concurrency/latch package. Make it implement the
@@ -82,14 +85,12 @@ func NewManager(cfg Config) Manager {
cfg.SlowLatchGauge,
),
},
lt: &lockTableImpl{
maxLocks: cfg.MaxLockTableSize,
},
lt: lt,
ltw: &lockTableWaiterImpl{
st: cfg.Settings,
stopper: cfg.Stopper,
ir: cfg.IntentResolver,
lm: m,
lt: lt,
disableTxnPushing: cfg.DisableTxnPushing,
},
// TODO(nvanbenschoten): move pkg/storage/txnwait to a new
@@ -344,9 +345,6 @@ func (m *managerImpl) OnRangeLeaseUpdated(seq roachpb.LeaseSequence, isLeasehold
const disable = true
m.lt.Clear(disable)
m.twq.Clear(disable)
// Also clear caches, since they won't be needed any time soon and
// consume memory.
m.ltw.ClearCaches()
}
}
