Currently we obtain a read lock while hashing to see if the hash is already computed. If it isn't, we drop the read lock, compute the hash, and then acquire a write lock to store the result. This leads to a race condition in which work can be duplicated if two threads try to read and compute the hash for a node at the same time.
I think using an upgradable RwLock would let us avoid the duplicated work, using this algorithm:
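For reference, here is a minimal sketch of the current pattern using `std::sync::RwLock` (the struct and method names are hypothetical, and `compute_hash` stands in for the real computation). The race window is the gap between releasing the read lock and acquiring the write lock:

```rust
use std::sync::RwLock;

struct Node {
    cached_hash: RwLock<u64>, // 0 means "not yet computed"
}

impl Node {
    fn hash_value(&self) -> u64 {
        {
            let guard = self.cached_hash.read().unwrap();
            if *guard != 0 {
                return *guard; // already computed
            }
        } // read lock released here
        // <-- another thread can reach this point concurrently
        //     and duplicate the work below
        let computed = self.compute_hash();
        *self.cached_hash.write().unwrap() = computed;
        computed
    }

    fn compute_hash(&self) -> u64 {
        42 // stands in for the expensive hash computation
    }
}
```

The result is still correct (both threads write the same value), but the expensive computation can run more than once.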
1. Obtain an upgradable read lock
2. If the hash is already computed (!= 0), return it
3. Upgrade the lock to a write lock
4. Check again whether the hash is already computed; if so, return it
5. Compute the hash
6. Write the hash using the write lock, then release it
Step (4) is slightly counter-intuitive but is necessary to prevent this interleaving:
A1, A2, B1, B2, B3, B5, B6, A3, A5, A6
i.e. the case where thread B computes the hash while A is waiting for the write lock at step (3).