ReentrantLock can be held by multiple threads at the same time #123458
This is a bit of rule-lawyering, but this isn't guaranteed. The documentation of
And all of these things are fulfilled, even in the case you describe.
That part is false: according to the documentation, the lock is unlocked when a lock guard is dropped. No lock guard is ever dropped in the example, so another thread still holds the lock. If you would like to allow this execution, you would have to update the documentation to explain when and why the lock was unlocked. That said, I would consider this to be clearly undesirable behavior. Furthermore, we already have `ThreadId`, whose guarantees could be used to address the issue.
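For reference, the guard contract being cited here is the same drop-to-unlock rule documented for the other `std` locks. A minimal illustration using `Mutex` (chosen only because it is stable and shares the guard-based contract):

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(0);
    let guard = m.lock().unwrap();
    // The lock is held exactly as long as the guard lives; dropping the
    // guard is the only documented point at which it becomes unlocked.
    drop(guard);
    // Had the guard been leaked with `mem::forget` instead, the
    // documentation names no event that would ever unlock the lock.
    assert_eq!(*m.lock().unwrap(), 0);
}
```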
No, the other thread has died.
Unfortunately, getting the
I agree that if this behavior stays as is, it should at least be very explicitly documented (something like "If a thread previously held the lock and terminated without releasing it, another thread may then be able to acquire the lock without deadlocking"). Still, I'd prefer the semantics to be "If any thread terminates while holding a lock, no other thread may ever reacquire the lock", which would require something like
I know the point of
As this seems important to so many people, we should definitely add those docs. I'm opposed to fixing this, however, unless we find a really efficient solution. This gets called on every single
Modulo the need to use 64-bit atomics, a

```rust
// in std::thread
thread_local! {
static CURRENT: OnceCell<Thread> = const { OnceCell::new() };
static CURRENT_ID: Cell<Option<ThreadId>> = const { Cell::new(None) };
}
/// Sets the thread handle for the current thread.
///
/// Panics if the handle has been set already or when called from a TLS destructor.
pub(crate) fn set_current(thread: Thread) {
    CURRENT.with(|current| {
        // Cache the id before `thread` is moved into the `OnceCell`.
        CURRENT_ID.set(Some(thread.id()));
        current.set(thread).unwrap();
    });
}
/// Gets a handle to the thread that invokes it.
///
/// In contrast to the public `current` function, this will not panic if called
/// from inside a TLS destructor.
pub(crate) fn try_current() -> Option<Thread> {
    CURRENT
        .try_with(|current| {
            let thread = current.get_or_init(|| Thread::new(imp::Thread::get_name())).clone();
            // Keep the cached id in sync with the (possibly fresh) handle.
            CURRENT_ID.set(Some(thread.id()));
            thread
        })
        .ok()
}
/// Gets the id of the thread that invokes it.
///
/// If called from inside a TLS destructor and the thread was never
/// assigned an id, returns `None`.
pub(crate) fn try_current_id() -> Option<ThreadId> {
if CURRENT_ID.get().is_none() {
let _ = try_current();
}
CURRENT_ID.get()
}
```

The common case -- the thread was created by Rust and has its
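For context on how this cache would be consumed, here is a sketch of the per-acquisition ownership check (the `owner` field is hypothetical; `ThreadId::as_u64` is the std-internal accessor already used above):

```rust
use std::sync::atomic::{AtomicU64, Ordering::Relaxed};

/// Sketch of the check a `ReentrantLock` would run on every `lock` call.
/// `owner` holds the current holder's `ThreadId` as a `u64`, or 0 if unowned.
fn held_by_current_thread(owner: &AtomicU64) -> bool {
    match try_current_id() {
        // `ThreadId`s are never reused during the lifetime of the process,
        // so a match is a genuine reentrant acquisition -- a dead thread's
        // id can never collide with ours.
        Some(id) => owner.load(Relaxed) == id.as_u64().get(),
        // A thread inside a TLS destructor that never got an id assigned
        // cannot be the owner.
        None => false,
    }
}
```

A relaxed load should suffice here: a thread can only observe its own id in `owner` if it wrote it itself, and a thread always sees its own earlier writes.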
Since the tid is only ever written to while holding a mutex, we could just use a seqlock to store it on platforms which only support 32-bit atomics (which I believe all platforms with
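A sketch of what such a seqlock could look like (the type and method names are mine, and the memory orderings are illustrative rather than lifted from the eventual std implementation):

```rust
use std::sync::atomic::{fence, AtomicU32, Ordering::{Acquire, Relaxed, Release}};

/// A 64-bit thread-id slot for targets with only 32-bit atomics.
/// Both halves are (relaxed) atomics, so a torn read yields a wrong value
/// that `seq` lets us detect -- a race condition, never a data race.
struct TidSeqLock {
    seq: AtomicU32,  // bumped before and after each write; odd = write in flight
    low: AtomicU32,  // low 32 bits of the id
    high: AtomicU32, // high 32 bits of the id
}

impl TidSeqLock {
    const fn new() -> Self {
        Self { seq: AtomicU32::new(0), low: AtomicU32::new(0), high: AtomicU32::new(0) }
    }

    /// Only the thread holding the underlying mutex writes, so there is a
    /// single writer at a time and `seq` needs no compare-and-swap.
    fn set(&self, tid: u64) {
        let seq = self.seq.load(Relaxed);
        self.seq.store(seq.wrapping_add(1), Relaxed); // now odd
        fence(Release); // the odd marker is ordered before the half-stores
        self.low.store(tid as u32, Relaxed);
        self.high.store((tid >> 32) as u32, Relaxed);
        self.seq.store(seq.wrapping_add(2), Release); // even again
    }

    /// Does `tid` own the lock? If a write is in flight (odd or changed
    /// `seq`), the writer -- not the caller -- owns the lock, so we can
    /// simply answer `false` instead of retrying.
    fn contains(&self, tid: u64) -> bool {
        let seq1 = self.seq.load(Acquire);
        let low = self.low.load(Relaxed);
        let high = self.high.load(Relaxed);
        fence(Acquire); // the half-loads are ordered before the re-check
        let seq2 = self.seq.load(Relaxed);
        let stored = (high as u64) << 32 | low as u64;
        seq1 % 2 == 0 && seq1 == seq2 && stored == tid
    }
}
```

Because each lock/unlock pair bumps `seq` twice, the `seq1 == seq2` check can only be fooled after 2^31 lock+unlock cycles wrap the counter -- the ABA caveat discussed in the merge message below.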
So the answer here is that the lock gets implicitly unlocked when the thread dies. This is not how it is documented to behave. Of course, if it can be avoided at reasonable cost, that seems better.
I was about to remark on the unsoundness of seqlocks, but this variant actually seems fine -- you are using relaxed atomic accesses for the data, so there's no data race, just a race condition. Nice!
Note that it only looks unlocked to a subsequent thread if that thread is assigned the same TLS block as the thread that held the lock. Any other thread would still deadlock, which means we currently can't guarantee either behavior consistently.
Sure, this behavior is non-deterministic. I was just talking about the specific execution we are considering, where the second thread does acquire the lock. |
Rollup merge of rust-lang#124881 - Sp00ph:reentrant_lock_tid, r=joboet

Use `ThreadId` instead of TLS-address in `ReentrantLock`

Fixes rust-lang#123458

`ReentrantLock` currently uses the address of a thread-local variable as an ID that's unique across all currently running threads. This can lead to unintuitive behavior, as in rust-lang#123458, if TLS blocks get reused. This PR changes `ReentrantLock` to instead use the `ThreadId` provided by `std` as the unique ID. `ThreadId` guarantees uniqueness across the lifetime of the whole process, so we don't need to worry about reusing the IDs of terminated threads. The main appeal of this PR is thus the possibility of changing the `ReentrantLock` API to guarantee that if a thread leaks a lock guard, no other thread may ever acquire that lock again.

This does entail some complications:

- Previously, the only way to retrieve the current thread ID would have been `thread::current().id()`, which creates a temporary `Arc` and isn't available in TLS destructors. As part of this PR, the thread ID instead gets cached in its own thread local, as suggested [here](rust-lang#123458 (comment)).
- `ThreadId` is always 64-bit, whereas the current implementation uses a usize-sized ID. Since this ID needs to be updated atomically, we can't simply use a single atomic variable on 32-bit platforms. Instead, we fall back to a (sound) seqlock on 32-bit platforms, which works because only one thread at a time can write to the ID. This seqlock is technically susceptible to the ABA problem, but the attack vector needed to create actual unsoundness is very specific:
  - You would need to be able to lock+unlock the lock exactly 2^31 times (or a multiple thereof) while a thread trying to lock it sleeps.
  - The sleeping thread would have to suspend after reading one half of the thread ID but before reading the other half.
  - The torn result from combining the halves of the thread ID would have to exactly line up with the sleeping thread's ID.

The risk of this occurring seems slim enough to be acceptable to me, but correct me if I'm wrong. This also means that the size of the lock increases by 8 bytes on 32-bit platforms, but this shouldn't be an issue either.

Performance-wise, I did some crude testing of the only case where this could lead to real slowdowns: locking a `ReentrantLock` that's already locked by the current thread. On both aarch64 and x86-64, there is (expectedly) pretty much no performance hit. I didn't have any 32-bit platforms to test the seqlock performance on, so I did the next best thing and forced the 64-bit platforms to use the seqlock implementation. There, the performance degraded by ~1-2 ns per lock+unlock on x86-64 and ~6-8 ns per lock+unlock on aarch64, which is measurable but seems acceptable to me, seeing as 32-bit platforms should be a small minority anyway.

cc @joboet @RalfJung @CAD97
The code below verifies that only one thread holds the `ReentrantLock` by recording the owner's thread id inside it. The assertion should hold, since the lock is never unlocked (note that the lock guard is leaked). In practice, in some executions the assertion fails.
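The snippet itself didn't survive this excerpt; what follows is a reconstruction along the lines the description implies (the loop count and assertion message are illustrative):

```rust
#![feature(reentrant_lock)] // `ReentrantLock` is unstable at the time of this issue

use std::cell::Cell;
use std::mem::forget;
use std::sync::ReentrantLock;
use std::thread::{self, ThreadId};

static LOCK: ReentrantLock<Cell<Option<ThreadId>>> = ReentrantLock::new(Cell::new(None));

fn main() {
    for _ in 0..100 {
        thread::spawn(|| {
            let guard = LOCK.lock();
            match guard.get() {
                // First acquisition: record ourselves as the owner.
                None => guard.set(Some(thread::current().id())),
                // The lock is never unlocked, so no later thread should
                // ever reach this arm -- yet sometimes one does.
                Some(owner) => assert_eq!(
                    owner,
                    thread::current().id(),
                    "two threads hold the ReentrantLock at once"
                ),
            }
            // Leak the guard: the lock should stay held forever.
            forget(guard);
        })
        .join()
        .unwrap();
    }
}
```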
Inspired by the report in rust-lang/miri#3450, which also explains why this fails.
Ideally the assertion failure wouldn't be among possible executions.