sync: add atomic UnlockAndRLock to RWMutex #4026
It might be nice to have a method to downgrade from a wlock to an rlock atomically. I will defer to Dmitry on that.

As Remy points out, a method to upgrade from an rlock to a wlock atomically is not possible. Having a wlock means no one else has an rlock, so it is impossible to satisfy this sequence:

- goroutine 1 rlocks
- goroutine 2 rlocks
- goroutine 1 calls upgrade
- goroutine 2 calls upgrade

Goroutine 1 cannot be upgraded to a wlock until goroutine 2 releases its rlock, and vice versa. Deadlock.

Labels changed: added priority-later, removed priority-triage.
> Unfortunately the obvious approach (RUnlock + Lock, then modify, then Unlock + RLock) is racy because the condition that caused you to promote to a write lock may have changed between the Unlock and the RLock. This necessitates a loop structure.

You usually do:

```go
RLock()
if trysearch() {
	return
}
RUnlock()

Lock()
if trysearch() {
	return
}
mutate()
Unlock()
```
The situation where it came up recently was a list that is sorted only lazily, during a read. We assume adds are infrequent and we want the reads to be able to run concurrently most of the time:

```go
type T struct {
	mu     sync.RWMutex
	list   []int
	sorted bool
}

func (t *T) add(x int) {
	t.mu.Lock()
	t.list = append(t.list, x)
	t.sorted = false
	t.mu.Unlock()
}

func (t *T) read(f func(int)) {
	t.mu.RLock()
	for !t.sorted {
		t.mu.RUnlock()
		t.mu.Lock()
		if !t.sorted {
			sort.Ints(t.list)
			t.sorted = true
		}
		t.mu.Unlock()
		t.mu.RLock()
	}
	for _, x := range t.list {
		f(x)
	}
	t.mu.RUnlock()
}
```

It says `for !t.sorted` instead of `if !t.sorted` to avoid a race where an add comes in between the Unlock and the RLock at the bottom of the loop body. With an atomic UnlockAndRLock method, the Unlock + RLock sequence could become a single call, and the `for !t.sorted` would become `if !t.sorted` - no need for a retry loop.

I don't know if it's a good idea to provide, but that is how it came up.

Russ
I see it as less of a performance issue and more of a correctness issue. I feel I have a reasonable grasp of using sync.RWMutex correctly, but I was writing the code that rsc is referring to and I got it wrong. I'd wager I'm better than the average Go programmer, so that means the majority of programmers have a good chance of getting it wrong.

I agree it's not a common use case; I don't know how to evaluate that. But if you're using a RWMutex, it seems like converting a read lock to a write lock (or vice versa) is not particularly strange, so providing a way to do it correctly seems like a worthwhile extension to the API. I don't feel particularly strongly either way, and am happy to defer to Russ and Dmitry.

(I do think it's still possible to promote a read lock to a write lock and have sensible semantics: calling Promote would exclude future (read and write) locks, relinquish the caller's existing read lock, and grab a write lock once the remaining read locks are released. Anyone else doing a concurrent Promote would also relinquish their existing read lock and join a queue to get the write lock after the first promoter releases its write lock. It just requires a bit of extra accounting.)
The problem with mutexes is that things can seem not particularly strange and yet still be strange. I think RUnlockAndLock is off the table: an atomic RUnlockAndLock is impossible, and a non-atomic one might as well be written as two calls so it's clearly not atomic. An atomic UnlockAndRLock would provide new functionality and perhaps simplify an occasional bit of code, but it's a rare case. Will leave this open low-priority in case it becomes more common.

Labels changed: added priority-someday, removed priority-later.

Status changed to LongTerm.
I have run more than once into the promotion-would-be-useful-here situation, though I admit I hadn't realized the deadlock problem (cf. #3). Thinking about it again now, what I would actually like to use is the shared-intent lock, which I think doesn't suffer from the deadlock problem. See: http://en.wikipedia.org/wiki/Multiple_granularity_locking

AFAICS, RWMutex is a subset of an MGL when/where S == R and X == W. I think the MGL is not a terribly complicated state machine and I guess it's feasible to hack it on top of existing primitives. Still, it's not trivial to code (and test!) properly, so having a reliable, ready-made solution in the stdlib would IMO be pretty reasonable.
That's the point of the MGL: you can't do that. The solution is to use SIX instead of S when your intent is to possibly promote to X. SIX is not compatible with other SIX locks, but it is compatible with S. So on a SIX->X promotion, the owner of the SIX lock waits only for others' S locks to be released, and then it is known to be alone and can go ->X (hence no deadlock is possible here). It may or may not make anything faster, but it can make some code paths much simpler: if you find under R that you need W, you have to release R and acquire W, and meanwhile the state may have changed. On the SIX->X path that cannot happen. Sometimes a big win.
Hmm... SIX is incompatible with SIX and X, so for Russ's example it won't allow any concurrency at all, making it pointless there. At the very least we need other use cases (where S is still frequent but SIX is infrequent), to say nothing of the fact that it all looks overly complicated. Btw, I am not sure why they say SIX is incompatible with S; it must be compatible (unless I misunderstood something).
MGL will not allow any more concurrency than RWMutex does now, sometimes even less. It just enables the promotion in a deadlock-free way *and* preserves the (MGL-protected) state while promoting/waiting for the promotion. The latter is IMO actually the important property.

A real-life anti-example of where I could use it: https://github.com/cznic/strutil/blob/master/strutil.go#L261 ("anti" because here it would make the code simpler, but slower on average). So MGL *can* make for poor code. OTOH, the state consistency may be priceless elsewhere, or even the condition that makes a solution possible at all.

On the locking table: at second look I now don't understand it either. Maybe R is not S but actually IS, and where I wrote SIX there should be IX instead. But then I would have to unlearn what I thought for years SIX is. Confused now (myself B-)
@rsc actually the upgrade from RLock to Lock should be possible if both:

- Mutex supported TryLock (or another way to lock the mutex only if not already locked)
- the upgrade operation was allowed to fail in case the upgrade would lead to deadlock

To reuse the example in comment #3 (assume there are no pending writers):

- goroutine 1 rlocks
- goroutine 2 rlocks
- goroutine 1 calls upgrade -> no pending writers, w.TryLock succeeds, wait for the other readers to release their RLocks
- goroutine 2 calls upgrade -> pending writer, w.TryLock fails: reject the upgrade, deadlock avoided

Sure, users would still have to code the fallback path, but at least for uncontended locks it should be faster.
This issue was closed.