release-20.1: kv/concurrency: permit lock timestamp regression across durabilities #47139
Backport 1/1 commits from #47101.
/cc @cockroachdb/release
Fixes #46526.
Fixes #46779.
Follow up to #46391.
This change adjusts the lockTable to allow lock timestamp regressions
when necessary. Specifically, it allows a lock's timestamp as reported
by getLockerInfo to regress if it is acquired at a lower timestamp and a
different durability than it was previously held with. This is necessary
to support because the hard constraint we must uphold here is that
the lockHolderInfo for a replicated lock cannot diverge from the
replicated state machine in such a way that its timestamp in the
lockTable exceeds that in the replicated keyspace. If this invariant
were to be violated, we'd risk infinite lock-discovery loops for
requests that conflict with the lock as it is written in the replicated
state machine but not as it is reflected in the lockTable.
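For illustration, here is a minimal sketch of the state transition this change permits. It is not the actual lockTable code: the package, type, and method names are hypothetical, and the timestamp is reduced to an integer (the real code uses hlc.Timestamp and richer per-durability holder state). The point is only that a re-acquisition at a lower timestamp is tolerated when the durability differs, and still rejected as a regression when it does not.

```go
// Hypothetical, simplified sketch; none of these names come from the
// actual CockroachDB lockTable implementation.
package sketch

import "fmt"

type durability int

const (
	unreplicated durability = iota
	replicated
)

// lockHolderInfo is a stand-in for the lockTable's per-lock holder state,
// with the timestamp simplified to an int64 for illustration.
type lockHolderInfo struct {
	ts  int64
	dur durability
}

// reacquire applies a new acquisition of an already-held lock. A timestamp
// regression is tolerated when the durability changes, so the lockTable's
// view of the lock never exceeds the timestamp recorded in the replicated
// keyspace.
func (h *lockHolderInfo) reacquire(newTS int64, newDur durability) error {
	if newTS < h.ts && newDur == h.dur {
		// Same durability: a regression here would still indicate a bug.
		return fmt.Errorf("lock timestamp regression: %d -> %d", h.ts, newTS)
	}
	h.ts = newTS
	h.dur = newDur
	return nil
}
```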
Lock timestamp regressions are safe from the perspective of other
transactions because the request which re-acquired the lock at the lower
timestamp must have been holding a write latch at or below the new
lock's timestamp. This means that no conflicting requests could be
evaluating concurrently. Instead, all will need to re-scan the lockTable
once they acquire latches and will notice the reduced timestamp at that
point, which may cause them to conflict with the lock even if they had
not conflicted before. In a sense, this is no different from the first
time a lock is added to the lockTable.
I considered special-casing this logic to look at the new lock's
durability and only allow the regression when the new lock was
replicated, instead forwarding the acquisition timestamp when the new
lock was unreplicated, but doing so seemed complex and
not clearly worth it. The rest of the lock-table supports these lock
timestamp regressions, so adding complexity to conditionally avoid the
case for certain state transitions, based on the lock durabilities,
didn't seem worthwhile. I'm happy to reconsider this decision.
Release note (bug fix): CDC no longer combines with long-running
transactions to trigger an assertion with the text "lock timestamp
regression".
Release justification: fixes a high-priority bug in existing
functionality. The bug could crash a server if the right sequence of
events occurred. This was typically rare, but was much more common when
CDC was in use.