
concurrency: ensure releasing shared locks leads to correct pushes #111554

Merged
merged 1 commit into cockroachdb:master from sl-concurrencyManagerTests on Oct 2, 2023

Conversation

arulajmani
Collaborator

When there are multiple shared locks on a key, any active waiters will push the first of the lock holders (aka the claimant). Previously, when the claimant was finalized, we weren't recomputing the waiting state for any active waiters to push the new claimant. As a result, in such a scenario, waiters would end up blocking indefinitely without pushing.

This is non-ideal, as it means we're not running deadlock/liveness detection: waiters would hang indefinitely if there were a deadlock or liveness issue. This patch fixes this behavior by recomputing the waiting state in cases where a shared lock is released but the key isn't unlocked.

Epic: none

Release note: None
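
To make the failure mode concrete, here is a minimal, self-contained sketch of the release path in the spirit of the fix. It is not the actual lock table code: the keyLocks fields, releaseShared, and the claimant-selection logic are hypothetical, and only the informActiveWaiters name comes from the change itself.

```go
package main

import "fmt"

// keyLocks is a simplified, hypothetical stand-in for the lock table's
// per-key state. Only the informActiveWaiters name mirrors the real code;
// everything else is illustrative.
type keyLocks struct {
	sharedHolders map[string]bool // txn -> holds a shared lock on this key
	claimant      string          // lock holder that active waiters push
	waiters       []string        // txns actively waiting on this key
}

// informActiveWaiters recomputes waiting state so that waiters push the
// current claimant instead of a holder that is no longer there.
func (kl *keyLocks) informActiveWaiters() {
	for _, w := range kl.waiters {
		fmt.Printf("waiter %s now pushes claimant %s\n", w, kl.claimant)
	}
}

// releaseShared drops txnID's shared lock. The fix described above: if the
// key remains locked and the released holder was the claimant, pick a new
// claimant and re-inform waiters rather than leaving them blocked forever.
func (kl *keyLocks) releaseShared(txnID string) {
	delete(kl.sharedHolders, txnID)
	if len(kl.sharedHolders) == 0 {
		return // key fully unlocked; waiters proceed through another path
	}
	if kl.claimant == txnID {
		for id := range kl.sharedHolders {
			kl.claimant = id // arbitrary remaining shared holder
			break
		}
		kl.informActiveWaiters()
	}
}

func main() {
	kl := &keyLocks{
		sharedHolders: map[string]bool{"txn1": true, "txn2": true},
		claimant:      "txn1",
		waiters:       []string{"txn3"},
	}
	kl.releaseShared("txn1") // txn3 is redirected to push txn2
}
```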

@arulajmani arulajmani requested a review from a team as a code owner October 1, 2023 16:45
@cockroach-teamcity
Member

This change is Reviewable

@arulajmani arulajmani force-pushed the sl-concurrencyManagerTests branch from fc4fdfc to c30d0df on October 1, 2023 20:36
@arulajmani arulajmani changed the title concurrency: ensure releasing shared locks releases to correct pushes concurrency: ensure releasing shared locks leads to correct pushes Oct 1, 2023
@nvanbenschoten nvanbenschoten force-pushed the sl-concurrencyManagerTests branch from c30d0df to 59bb505 on October 2, 2023 17:58
Member

@nvanbenschoten nvanbenschoten left a comment


:lgtm: I patched in my review comments and also added shared locks back into TestLockTableConcurrentRequests.

This PR also motivated me to open #111596. I had wanted to introduce ignored sequence numbers into TestConcurrencyManagerBasic, but they run into the problem described in that issue. We'll eventually want to address it.

Reviewed 4 of 4 files at r1.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @arulajmani)


pkg/kv/kvserver/concurrency/lock_table.go line 3235 at r1 (raw file):

			// Note that if the lock that was cleared didn't belong to a transaction
			// all waiters were pushing, the call to informActiveWaiters will no-op.
			kl.informActiveWaiters()

We'll need this same logic down below in the if !isLocked branch. Done.
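
For context outside the raw diff, a rough sketch of the branch structure this comment refers to, reusing the hypothetical keyLocks type from the sketch in the PR description above. Only informActiveWaiters is a real name; isLocked stands in for whatever the surrounding code computes.

```go
// afterClearingHolder sketches the code path after a finalized lock holder
// is cleared. The review point: both branches need to refresh waiter state.
func (kl *keyLocks) afterClearingHolder(isLocked bool) {
	if isLocked {
		// Other holders remain: waiters may have to switch to pushing a
		// new claimant.
		kl.informActiveWaiters()
		return
	}
	// The !isLocked branch: the key is now fully unlocked, so waiting
	// state must be recomputed here as well.
	kl.informActiveWaiters()
}
```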


pkg/kv/kvserver/concurrency/datadriven_util_test.go line 178 at r1 (raw file):

	}
	maybeGetStr := func() lock.Strength {
		// TODO(arul): everywhere we use "strength" instead of "str" -- use

I ended up going in the other direction, using "str" everywhere for these concurrency manager tests.
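
For readers unfamiliar with the datadriven helpers, here is a sketch of what a "str"-based argument parser could look like. The datadriven and lock package calls are real APIs, but the exact helper in this PR may differ; treat the function shape and the accepted values as illustrative.

```go
package concurrency

import (
	"testing"

	"github.com/cockroachdb/cockroach/pkg/kv/kvserver/concurrency/lock"
	"github.com/cockroachdb/datadriven"
)

// maybeGetStr parses an optional "str" argument (e.g. str=shared) into a
// lock.Strength, defaulting to lock.None when the argument is absent.
func maybeGetStr(t *testing.T, d *datadriven.TestData) lock.Strength {
	if !d.HasArg("str") {
		return lock.None
	}
	var s string
	d.ScanArgs(t, "str", &s)
	switch s {
	case "none":
		return lock.None
	case "shared":
		return lock.Shared
	case "exclusive":
		return lock.Exclusive
	case "intent":
		return lock.Intent
	default:
		d.Fatalf(t, "unknown lock strength: %s", s)
		return 0
	}
}
```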

@nvanbenschoten
Member

bors r+

@craig
Contributor

craig bot commented Oct 2, 2023

Build succeeded:

@craig craig bot merged commit b0e0182 into cockroachdb:master Oct 2, 2023
3 checks passed