Create a simple FairRwLock to avoid readers starving writers
Because we handle messages (which can take some time, persisting
things to disk or validating cryptographic signatures) with the
top-level read lock, but require the top-level write lock to
connect new peers or handle disconnection, we are particularly
sensitive to writer starvation issues.

Rust's libstd RwLock does not provide any fairness guarantees,
using whatever the OS provides as-is. On Linux, pthreads defaults
to starving writers, which Rust's RwLock exposes to us (without
any configurability).

Here we work around that issue by blocking readers if there are
pending writers, optimizing for readable code over
perfectly-optimized blocking.
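
As a hypothetical illustration (not LDK code), the starvation scenario described
above can be sketched with a plain std::sync::RwLock: a handful of threads that
hold the read lock for long stretches, the way message handling does, can delay
a writer indefinitely on platforms whose rwlock prefers readers.

use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

fn main() {
    let peers = Arc::new(RwLock::new(Vec::<u8>::new()));
    for _ in 0..4 {
        let peers = Arc::clone(&peers);
        thread::spawn(move || loop {
            // Stand-in for message handling done under the top-level read lock.
            let _guard = peers.read().unwrap();
            thread::sleep(Duration::from_millis(10));
        });
    }
    thread::sleep(Duration::from_millis(50));
    // Stand-in for connecting a new peer, which needs the write lock. With
    // overlapping readers and a reader-preferring rwlock (e.g. glibc pthreads
    // defaults), this acquisition may be delayed arbitrarily long.
    let mut writer = peers.write().unwrap();
    writer.push(0);
    println!("writer finally ran");
}
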
TheBlueMatt committed Nov 23, 2021
1 parent 97435c3 commit bacc578
Showing 3 changed files with 57 additions and 3 deletions.
7 changes: 4 additions & 3 deletions lightning/src/ln/peer_handler.rs
@@ -26,14 +26,15 @@ use ln::peer_channel_encryptor::{PeerChannelEncryptor,NextNoiseStep}
use ln::wire;
use ln::wire::Encode;
use util::atomic_counter::AtomicCounter;
+use util::fairrwlock::FairRwLock;
use util::events::{MessageSendEvent, MessageSendEventsProvider};
use util::logger::Logger;
use routing::network_graph::{NetworkGraph, NetGraphMsgHandler};

use prelude::*;
use io;
use alloc::collections::LinkedList;
-use sync::{Arc, Mutex, MutexGuard, RwLock};
+use sync::{Arc, Mutex, MutexGuard};
use core::sync::atomic::{AtomicUsize, Ordering};
use core::{cmp, hash, fmt, mem};
use core::ops::Deref;
@@ -413,7 +414,7 @@ pub struct PeerManager<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, L: De
		L::Target: Logger,
		CMH::Target: CustomMessageHandler {
	message_handler: MessageHandler<CM, RM>,
-	peers: RwLock<PeerHolder<Descriptor>>,
+	peers: FairRwLock<PeerHolder<Descriptor>>,
	/// Only add to this set when noise completes.
	/// Locked *after* peers. When an item is removed, it must be removed with the `peers` write
	/// lock held. Entries may be added with only the `peers` read lock held (though the
@@ -525,7 +526,7 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, L: Deref, CMH: Deref> P

		PeerManager {
			message_handler,
-			peers: RwLock::new(PeerHolder {
+			peers: FairRwLock::new(PeerHolder {
				peers: HashMap::new(),
			}),
			node_id_to_descriptor: Mutex::new(HashMap::new()),
52 changes: 52 additions & 0 deletions lightning/src/util/fairrwlock.rs
@@ -0,0 +1,52 @@
#[cfg(feature = "std")]
mod rwlock {
	use std::sync::{TryLockResult, LockResult, RwLock, RwLockReadGuard, RwLockWriteGuard};
	use std::sync::atomic::{AtomicUsize, Ordering};

	/// Rust libstd's RwLock does not provide any fairness guarantees (and, in fact, when used on
	/// Linux with pthreads under the hood, readers trivially and completely starve writers).
	/// Because we often hold read locks while doing message processing in multiple threads which
	/// can use significant CPU time, with write locks being time-sensitive but relatively small in
	/// CPU time, we can end up with starvation completely blocking incoming connections or pings,
	/// especially during initial graph sync.
	///
	/// Thus, we need to block readers when a writer is pending, which we do with a trivial RwLock
	/// wrapper here. It's not particularly optimized, but provides some reasonable fairness by
	/// blocking readers (by taking the write lock) if there are writers pending when we go to take
	/// a read lock.
	pub struct FairRwLock<T> {
		lock: RwLock<T>,
		waiting_writers: AtomicUsize,
	}

	impl<T> FairRwLock<T> {
		pub fn new(t: T) -> Self {
			Self { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
		}

		pub fn write(&self) -> LockResult<RwLockWriteGuard<T>> {
			// Advertise that a writer is waiting so new readers queue up behind us (see
			// read() below), then drop the count once we actually hold the lock.
			self.waiting_writers.fetch_add(1, Ordering::AcqRel);
			let res = self.lock.write();
			self.waiting_writers.fetch_sub(1, Ordering::AcqRel);
			res
		}

		pub fn try_write(&self) -> TryLockResult<RwLockWriteGuard<T>> {
			self.lock.try_write()
		}

		pub fn read(&self) -> LockResult<RwLockReadGuard<T>> {
			if self.waiting_writers.load(Ordering::Acquire) != 0 {
				// A writer is pending: briefly take (and immediately drop) the write lock so
				// that we queue up behind it rather than starving it.
				let _write_queue_lock = self.lock.write();
			}
			self.lock.read()
		}
	}
}
#[cfg(feature = "std")]
pub use self::rwlock::*;

#[cfg(not(feature = "std"))]
pub type FairRwLock<T> = crate::sync::RwLock<T>;
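
As a usage sketch (hypothetical, in-crate code; the Manager/PeerHolder shapes and
method names below are illustrative, not the actual peer_handler.rs), the intended
pattern is that slow message processing holds only the read lock while peer
connection/disconnection takes the write lock, which FairRwLock keeps readers from
starving:

use std::collections::HashMap;
// fairrwlock is pub(crate), so this import only works inside the lightning crate.
use util::fairrwlock::FairRwLock;

struct PeerHolder { peers: HashMap<u64, Vec<u8>> }

struct Manager { peers: FairRwLock<PeerHolder> }

impl Manager {
    fn new() -> Self {
        Manager { peers: FairRwLock::new(PeerHolder { peers: HashMap::new() }) }
    }

    // Message handling (possibly slow: disk writes, signature validation) only
    // needs the read lock, so many of these calls can run in parallel.
    fn process_messages(&self) {
        let holder = self.peers.read().unwrap();
        for (_peer_id, _pending) in holder.peers.iter() { /* handle messages */ }
    }

    // Connecting a new peer needs the write lock; FairRwLock ensures a steady
    // stream of readers cannot starve this call indefinitely.
    fn new_peer(&self, peer_id: u64) {
        let mut holder = self.peers.write().unwrap();
        holder.peers.insert(peer_id, Vec::new());
    }
}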


1 change: 1 addition & 0 deletions lightning/src/util/mod.rs
@@ -23,6 +23,7 @@ pub mod message_signing;
pub(crate) mod atomic_counter;
pub(crate) mod byte_utils;
pub(crate) mod chacha20;
+pub(crate) mod fairrwlock;
#[cfg(feature = "fuzztarget")]
pub mod zbase32;
#[cfg(not(feature = "fuzztarget"))]
