Process messages from different peers in parallel in PeerManager. #1023
```diff
@@ -362,3 +362,5 @@ fn read_write_lockorder_fail() {
 		let _a = a.write().unwrap();
 	}
 }
+
+pub type FairRwLock<T> = RwLock<T>;
```
Large diffs are not rendered by default; one file's changes are omitted here.
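The unrendered diff presumably carries the PeerManager changes the PR title describes. As a rough, hypothetical sketch of the general technique (not the PR's actual code): keep the peer map under an RwLock taken for read during message processing, with each peer guarded by its own Mutex, so threads serving different peers run concurrently. All names below (`Peer`, `process_events`, the message fields) are illustrative.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock};
use std::thread;

// Hypothetical per-peer state; the names here are illustrative, not from the PR.
struct Peer {
    inbound: Vec<String>,
    processed: usize,
}

struct PeerManager {
    // Read-locked for message processing, write-locked only to add/remove peers,
    // so threads handling *different* peers can run concurrently while each
    // peer's own Mutex serializes work on that single peer.
    peers: RwLock<HashMap<u64, Mutex<Peer>>>,
}

impl PeerManager {
    fn process_events(&self, peer_id: u64) {
        let peers = self.peers.read().unwrap(); // shared across threads
        if let Some(peer) = peers.get(&peer_id) {
            let mut peer = peer.lock().unwrap(); // exclusive for this peer only
            peer.processed += peer.inbound.drain(..).count();
        }
    }
}

// Spawns one thread per peer and returns how many peers finished processing.
fn demo() -> usize {
    let mgr = Arc::new(PeerManager { peers: RwLock::new(HashMap::new()) });
    {
        let mut peers = mgr.peers.write().unwrap();
        for id in 0..4u64 {
            peers.insert(id, Mutex::new(Peer { inbound: vec!["ping".into()], processed: 0 }));
        }
    }
    let handles: Vec<_> = (0..4u64)
        .map(|id| {
            let m = Arc::clone(&mgr);
            thread::spawn(move || m.process_events(id))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let peers = mgr.peers.read().unwrap();
    peers.values().filter(|p| p.lock().unwrap().processed == 1).count()
}

fn main() {
    assert_eq!(demo(), 4);
}
```

With a plain libstd RwLock, those long-held read locks are exactly what can starve writers, which is the motivation for the FairRwLock wrapper in the diffs below.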
```diff
@@ -113,3 +113,5 @@ impl<T> RwLock<T> {
 		Err(())
 	}
 }
+
+pub type FairRwLock<T> = RwLock<T>;
```
```diff
@@ -0,0 +1,50 @@
+use std::sync::{TryLockResult, LockResult, RwLock, RwLockReadGuard, RwLockWriteGuard};
+use std::sync::atomic::{AtomicUsize, Ordering};
+
+/// Rust libstd's RwLock does not provide any fairness guarantees (and, in fact, when used on
+/// Linux with pthreads under the hood, readers trivially and completely starve writers).
+/// Because we often hold read locks while doing message processing in multiple threads which
+/// can use significant CPU time, with write locks being time-sensitive but relatively small in
+/// CPU time, we can end up with starvation completely blocking incoming connections or pings,
+/// especially during initial graph sync.
+///
+/// Thus, we need to block readers when a writer is pending, which we do with a trivial RwLock
+/// wrapper here. It's not particularly optimized, but provides some reasonable fairness by
+/// blocking readers (by taking the write lock) if there are writers pending when we go to take
+/// a read lock.
+pub struct FairRwLock<T> {
+	lock: RwLock<T>,
+	waiting_writers: AtomicUsize,
+}
+
+impl<T> FairRwLock<T> {
+	pub fn new(t: T) -> Self {
+		Self { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
+	}
+
+	// Note that all atomic accesses are relaxed, as we do not rely on the atomics here for any
+	// ordering at all, instead relying on the underlying RwLock to provide ordering of unrelated
+	// memory.
+	pub fn write(&self) -> LockResult<RwLockWriteGuard<T>> {
+		self.waiting_writers.fetch_add(1, Ordering::Relaxed);
+		let res = self.lock.write();
+		self.waiting_writers.fetch_sub(1, Ordering::Relaxed);
+		res
+	}
+
+	pub fn try_write(&self) -> TryLockResult<RwLockWriteGuard<T>> {
+		self.lock.try_write()
+	}
+
+	pub fn read(&self) -> LockResult<RwLockReadGuard<T>> {
+		if self.waiting_writers.load(Ordering::Relaxed) != 0 {
+			let _write_queue_lock = self.lock.write();
+		}
+		// Note that we don't consider ensuring that an underlying RwLock allowing writers to
+		// starve readers doesn't exhibit the same behavior here. I'm not aware of any
+		// libstd-backing RwLock which exhibits this behavior, and as documented in the
+		// struct-level documentation, it shouldn't pose a significant issue for our current
+		// codebase.
+		self.lock.read()
+	}
+}
```

Review discussion on the `read()` implementation:

> I think a comment here would be nice explaining the idea of initiating a wait for a write lock before a new read lock is acquired.

> The current mechanism seems to achieve that whenever there are pending write locks, a new read lock cannot be added until those pending write locks complete. However, if there are already pending read locks and a new write lock gets added, it will have to wait. Have you considered ways to jump the line, so to speak, and would that be a desirable property?

> Jumping the line would break the fairness property: if there are waiting writers and we allow new readers to take the read lock, those new readers may cause the waiting writer to wait even longer, which is the issue we're trying to solve to begin with.

> As for a comment, I'm not sure what to add to the existing one that's in the struct docs.
Review discussion on the name:

> I think "fair" is somewhat subjective. In this case it's fair because it prioritizes writing. Should that perhaps be reflected in the name, though I don't really have good suggestions? Like WritePreferenceRwLock? WriterPriorityRwLock? Open to suggestions.

> I mean it's fair in the traditional sense that there is no starvation. Writers won't starve readers either, I believe, as long as the underlying native RwLock doesn't allow writers to starve other writers.

> I suppose that's not entirely true: if the underlying RwLock allows writers to starve readers, we will still exhibit that behavior here. But I don't think any do, so for now it's probably fine; will add a comment.
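The fairness property under discussion hinges entirely on the `waiting_writers` counter: `read()` queues behind pending writers only when it is non-zero. An instrumented, hypothetical variant (`read_probed` is not in the PR) makes the two paths observable in a single thread:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::RwLock;

// Instrumented sketch (NOT in the PR): reports whether the caller queued
// behind a pending writer before reading.
struct ProbedFairRwLock<T> {
    lock: RwLock<T>,
    waiting_writers: AtomicUsize,
}

impl<T> ProbedFairRwLock<T> {
    fn new(t: T) -> Self {
        Self { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
    }

    // Same logic as FairRwLock::read(), but returns true when the slow
    // (queue-behind-writers) path was taken.
    fn read_probed(&self) -> bool {
        let queued = self.waiting_writers.load(Ordering::Relaxed) != 0;
        if queued {
            // Taking and dropping the write lock waits out pending writers.
            let _gate = self.lock.write().unwrap();
        }
        let _guard = self.lock.read().unwrap();
        queued
    }
}

fn demo() -> (bool, bool) {
    let l = ProbedFairRwLock::new(());
    let fast = l.read_probed(); // no writer pending: fast read path
    l.waiting_writers.fetch_add(1, Ordering::Relaxed); // simulate a pending writer
    let slow = l.read_probed(); // a new reader now queues first
    (fast, slow)
}

fn main() {
    assert_eq!(demo(), (false, true));
}
```

This also illustrates the caveat in the last comment above: once a reader is on the slow path, its behavior is exactly that of the underlying RwLock's `write()`, so any reader-starving bias in the native lock would carry through.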