Stuck acquiring priority lock #146

Closed
travisperson opened this issue Feb 4, 2021 · 0 comments

@travisperson

We are currently experiencing issues acquiring priority locks in the lotus project; this issue tracks it from the bellperson side.

(Lotus issue for reference filecoin-project/lotus#5446)

I don't know if this is actually an issue in the bellperson library or an integration issue in lotus, but we are seeing things get stuck while acquiring the priority lock. This has occurred both while running window post and winning post.

The issue seems to be much more common when running in docker, where I can reproduce it with consistent results. It occurs during the very first winning post of the network. I have once observed a winning post completing, but it locked up on the second.

I haven't been able to get the issue to occur when running the lotus bench tool in docker though.

This issue occurs regardless of the value of BELLMAN_NO_GPU.

Jan 26 20:13:46 preminer-0.interop.fildev.network lotus-miner[8878]: {"level":"info","ts":"2021-01-26T20:13:46.928+0000","logger":"storage_proofs_core::compound_proof","caller":"/home/circleci/.cargo/registry/src/github.com-1ecc6299db9ec823/storage-proofs-core-5.4.0/src/compound_proof.rs:86","msg":"vanilla_proofs:finish"}
Jan 26 20:13:46 preminer-0.interop.fildev.network lotus-miner[8878]: {"level":"info","ts":"2021-01-26T20:13:46.971+0000","logger":"storage_proofs_core::compound_proof","caller":"/home/circleci/.cargo/registry/src/github.com-1ecc6299db9ec823/storage-proofs-core-5.4.0/src/compound_proof.rs:92","msg":"snark_proof:start"}
Jan 26 20:13:46 preminer-0.interop.fildev.network lotus-miner[8878]: {"level":"info","ts":"2021-01-26T20:13:46.972+0000","logger":"bellperson::groth16::prover","caller":"/home/circleci/.cargo/registry/src/github.com-1ecc6299db9ec823/bellperson-0.12.1/src/groth16/prover.rs:274","msg":"Bellperson 0.12.1 is being used!"}
Jan 26 20:13:47 preminer-0.interop.fildev.network lotus-miner[8878]: {"level":"info","ts":"2021-01-26T20:13:47.663+0000","logger":"bellperson::groth16::prover","caller":"/home/circleci/.cargo/registry/src/github.com-1ecc6299db9ec823/bellperson-0.12.1/src/groth16/prover.rs:309","msg":"starting proof timer"}
Jan 26 20:13:47 preminer-0.interop.fildev.network lotus-miner[8878]: {"level":"debug","ts":"2021-01-26T20:13:47.663+0000","logger":"bellperson::gpu::locks","caller":"/home/circleci/.cargo/registry/src/github.com-1ecc6299db9ec823/bellperson-0.12.1/src/gpu/locks.rs:40","msg":"Acquiring priority lock..."}

The issue has been seen in bellperson 0.12.1 and 0.12.3.

Possibly related:
#126

vmx added a commit that referenced this issue Feb 10, 2021
The proof computation is done asynchronously: tasks are created which are then
waited for. If such a "waiter" is in the same thread pool as the task, it can
happen that the task is waited for before it gets executed. That leads to a
deadlock.

The solution is to make sure that the parts that are waiting for the computation
run on the main thread, while the computation happens in a thread pool.

Closes #146, filecoin-project/rust-fil-proofs#1380
vmx added a commit that referenced this issue Feb 10, 2021
The proof computation is done asynchronously: tasks are created which are then
waited for. If such a "waiter" is in the same thread pool as the task, it can
happen that the task is waited for before it gets executed. That leads to a
deadlock.

The solution is to make sure that the parts that are waiting for the computation
run on the main thread, while the computation happens in a thread pool.

Here are all the details for the interested reader (thanks @cryptonemo for this writeup):

The root of the problem lies both in the design of the Worker and in how it is currently used in the code:

https://github.com/filecoin-project/bellperson/blob/d2e88544efb9876bc96c3d7e26a527943c0fdb68/src/multicore.rs#L35

In particular, when a Worker is constrained to a single thread pool, it's possible that compute is called (https://github.com/filecoin-project/bellperson/blob/d2e88544efb9876bc96c3d7e26a527943c0fdb68/src/multicore.rs#L44), which in turn requests a `spawn` on the THREAD_POOL and then returns a Waiter, on which the caller waits for a message receipt (sent at some point in the future from the executed body of the spawned closure).

There is a race when the Worker is constrained to the same thread pool: a task can be requested to be spawned (via compute) into the rayon thread pool but not yet started, while the Waiter is returned (possibly on the same thread) and its receive is called before the spawned closure can be executed. This is often easy to observe in a thread pool with a single thread (BELLMAN_NUM_CPUS=1), even when running the repo's tests normally.

Try this to attempt to observe the hang on the `master` branch of https://github.com/filecoin-project/bellperson (currently at commit `d2e88544efb9876bc96c3d7e26a527943c0fdb68`):

```
BELLMAN_NUM_CPUS=1 RUSTFLAGS="-C target-cpu=native" RUST_LOG=debug cargo test --release --all --workspace -- --nocapture
```

I was able to isolate this case in the bellman code itself with added logging, confirming this flow as problematic, and @vmx made a simplified repo that also clearly demonstrates the issue:

https://github.com/vmx/bellman_deadlock
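
For illustration, here is a minimal standalone sketch of the same pattern (not the bellperson code itself; it assumes only the rayon crate and a std channel). The pool's single worker thread blocks on the receive before the spawned task can be scheduled, so the program hangs:

```
use std::sync::mpsc::channel;

fn main() {
    // Single-threaded pool, mirroring BELLMAN_NUM_CPUS=1.
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(1)
        .build()
        .unwrap();

    // The whole body runs on the pool's only worker thread, the way the prover's
    // THREAD_POOL.install(...) runs the inner method inside the pool.
    pool.install(|| {
        let (sender, receiver) = channel();

        // The "compute" half: spawn the task into the same pool and keep the
        // receiver as a waiter.
        pool.spawn(move || {
            sender.send(42u64).unwrap();
        });

        // The "wait" half: the only worker thread blocks here before the spawned
        // closure can run, so the program deadlocks.
        let result = receiver.recv().unwrap();
        println!("never reached: {}", result);
    });
}
```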

For this to happen on hardware with more threads/cores, every thread has to exhibit exactly this behaviour in sequence (i.e. all threads call spawn, and all threads then start waiting before any of the spawned tasks start), so it should be a much rarer occurrence there. This race condition is unavoidable when the Waiters are constrained to the same thread pool as the Workers.

Practically speaking, this condition is created here:

https://github.com/filecoin-project/bellperson/blob/d2e88544efb9876bc96c3d7e26a527943c0fdb68/src/groth16/prover.rs#L276

Calling `THREAD_POOL.install` causes the `create_proof_batch_priority` function to block while the inner method executes entirely within the context of the THREAD_POOL. Inside this method, `multiexp` is called a number of times, which in turn calls `compute` and later `wait`, which is exactly the sequence required for the lockup. Again, because the `compute` and the `wait`s happen in the same pool, the race condition described above can occur in this context.
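
For reference, a small standalone sketch (plain rayon, not the prover code) of the `install` semantics this relies on: the call blocks the caller until the closure returns, and the closure runs on a pool worker thread, so any waiting done inside it happens from within the pool:

```
fn main() {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(2)
        .build()
        .unwrap();

    println!("caller thread:  {:?}", std::thread::current().id());

    // install() blocks the caller until the closure returns; the closure itself
    // runs on one of the pool's worker threads.
    let result = pool.install(|| {
        println!("closure thread: {:?}", std::thread::current().id());
        21 * 2
    });

    // Only reached once the closure has finished.
    println!("result: {}", result);
}
```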

Possible solutions:

Given the nature of this problem, conceptually the 'fix' is that Waiters should wait in a separate thread pool from the compute Workers.

We observed that, instead of refactoring in a new thread pool for this, the `THREAD_POOL.install` call mentioned above could execute all of the compute operations in a blocking manner (as `.install` is blocking in this usage), while the code inside the inner method that waited could be moved out of the inner method and into the top-level caller, to be executed after the computation has completed.
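
A minimal sketch of that shape (with a hypothetical `compute_in_pool` helper standing in for the compute/wait split; not the actual patch): the work is spawned into the pool, while the wait happens on the calling thread outside of it, so the pool's worker remains free to run the task:

```
use std::sync::mpsc::{channel, Receiver};

// Hypothetical stand-in for Worker::compute: the heavy work is spawned into the
// pool and a receiver is handed back as the waiter.
fn compute_in_pool(pool: &rayon::ThreadPool) -> Receiver<u64> {
    let (sender, receiver) = channel();
    pool.spawn(move || {
        let result = (0..1_000_000u64).sum();
        sender.send(result).unwrap();
    });
    receiver
}

fn main() {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(1)
        .build()
        .unwrap();

    // The wait happens on the main thread, outside the pool, so the pool's only
    // worker thread is free to execute the spawned task: no deadlock.
    let receiver = compute_in_pool(&pool);
    println!("result: {}", receiver.recv().unwrap());
}
```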

A previous solution considered was removing the `THREAD_POOL.install` line at https://github.com/filecoin-project/bellperson/blob/d2e88544efb9876bc96c3d7e26a527943c0fdb68/src/groth16/prover.rs#L276 and replacing it with a direct function call to `create_proof_batch_priority_inner`. By doing this, all of the multiexp calls would still call `compute`, which adds all long-running tasks to the THREAD_POOL internally, allowing the waits to not be constrained to that same thread pool. This was tested to work properly; however, it has the unfortunate side effect of *unconstraining* parallelism from the pool in other rayon parallel cases, such as the one used here: https://github.com/filecoin-project/bellperson/blob/d2e88544efb9876bc96c3d7e26a527943c0fdb68/src/groth16/prover.rs#L291
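
A small sketch of that side effect (hypothetical, not the prover code): a rayon parallel iterator is constrained to the custom pool only when it runs inside `install`; called directly, it falls back to the global pool, whose size is unrelated to the custom pool's:

```
use rayon::prelude::*;

fn main() {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(2)
        .build()
        .unwrap();

    // Constrained: the parallel iterator work-steals within the 2-thread pool.
    let inside = pool.install(|| {
        (0..8)
            .into_par_iter()
            .map(|_| rayon::current_num_threads())
            .max()
    });

    // Unconstrained: the same iterator now runs on the global pool.
    let outside = (0..8)
        .into_par_iter()
        .map(|_| rayon::current_num_threads())
        .max();

    println!("inside install: {:?}, outside: {:?}", inside, outside);
}
```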

Closes #146, filecoin-project/rust-fil-proofs#1380
cryptonemo pushed a commit that referenced this issue Feb 16, 2021