Filter-out nodes.json #7716
Conversation
util/network/src/node_table.rs
Outdated
let nodes = node_ids.into_iter()
    .map(|id| self.nodes.get(&id).expect("self.nodes() only returns node IDs from self.nodes"))
    .map(|node| node.clone())
    .filter(|node| if len > MAX_NODES { node.last_attempted.is_some() } else { true })
Maybe we should change the nodes sorting so that nodes that we never connected to have a lower priority and then we .take(MAX_NODES)
on the sorted list. This way we always get a node table with a predictable size.
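A minimal sketch of that idea, with a hypothetical simplified `Node` struct (the real one in node_table.rs has more fields), sorting never-attempted nodes to the end and then truncating:

```rust
const MAX_NODES: usize = 1024;

// Hypothetical, simplified node record for illustration only.
struct Node {
    id: u32,
    last_attempted: Option<u64>, // timestamp of the last connection attempt, if any
}

// Sort so never-attempted nodes sink to the end, then keep at most MAX_NODES.
fn prune(mut nodes: Vec<Node>) -> Vec<Node> {
    // `false < true`, so nodes with `Some(_)` (key `false`) sort first;
    // the sort is stable, so relative order within each group is preserved.
    nodes.sort_by_key(|n| n.last_attempted.is_none());
    nodes.truncate(MAX_NODES);
    nodes
}
```

This gives a table with a predictable upper bound on its size, since `truncate` drops everything past `MAX_NODES`.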
Sounds like a good idea, though we might need to tune the comparator a bit. It might be difficult to decide which nodes are more valuable: ones we never connected to, or ones we connected to once but with a single failure.
In general I need to double-check whether we reset the failure counter when we successfully connect to a node; otherwise valuable peers might be dropped too often (over time they would accumulate a lot of failures, so completely new nodes would be preferred).
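If resetting on success is the desired behaviour, a minimal sketch might look like this (a hypothetical `NodeStats`, illustrating the idea rather than what node_table.rs currently does):

```rust
// Sketch: clear the failure history on a successful connection so old
// failures stop penalizing a peer that has since proven reachable.
#[derive(Default)]
struct NodeStats {
    attempts: u32,
    failures: u32,
}

impl NodeStats {
    fn note_failure(&mut self) {
        self.attempts += 1;
        self.failures += 1;
    }

    fn note_success(&mut self) {
        // Resetting both counters means the failure ratio starts fresh.
        self.attempts = 0;
        self.failures = 0;
    }
}
```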
I've updated this to remove the … I also noticed that my …
Here's what my …
Andre's changes look good to me.
util/network/src/lib.rs
Outdated
#[cfg(test)]
extern crate tempdir;
Superfluous.
util/network/src/node_table.rs
Outdated
if self.attempts == 0 {
    DEFAULT_FAILURE_PERCENTAGE
} else {
    ((self.failures as f64 / self.attempts as f64 * 100.0 / 5.0).round() * 5.0) as usize
Do we really need to fall back to floats? What about:
self.failures * 10_000 / self.attempts
(then we get a number within 0..10_000, representing a percentage with two decimal places)
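That basis-points version is easy to check in isolation; a sketch (widening to u64 so large counters can't overflow the multiplication — the zero-attempts branch value is a placeholder, not the PR's `DEFAULT_FAILURE_PERCENTAGE`):

```rust
// Failure ratio in basis points (0..=10_000), no floating point.
fn failure_bp(failures: u32, attempts: u32) -> u32 {
    if attempts == 0 {
        0 // placeholder default for the no-attempts case
    } else {
        // Widen before multiplying so failures * 10_000 cannot overflow u32.
        (failures as u64 * 10_000 / attempts as u64) as u32
    }
}
```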
True, tbh I don't really care about the decimal places, so it can just be self.failures * 100 / self.attempts. I'm using floats for rounding into buckets of 5%, but integer math should do.
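The 5% bucketing also works without floats. A sketch matching the float expression's rounding, using the identity round(a/b) = (2a + b) / 2b with a = failures × 20 (i.e. the percentage divided by 5) and b = attempts; the `DEFAULT_FAILURE_PERCENTAGE` value here is assumed for illustration:

```rust
const DEFAULT_FAILURE_PERCENTAGE: usize = 50; // assumed value, for illustration

// Integer-only failure percentage, rounded to the nearest multiple of 5.
// Equivalent to ((failures / attempts * 100.0 / 5.0).round() * 5.0) for
// the half-away-from-zero rounding f64::round performs on positive input.
fn failure_percentage(failures: usize, attempts: usize) -> usize {
    if attempts == 0 {
        DEFAULT_FAILURE_PERCENTAGE
    } else {
        (failures * 40 + attempts) / (attempts * 2) * 5
    }
}
```

For example, 1 failure out of 3 attempts is 33.3%, which rounds to the 35% bucket (33.3 / 5 = 6.67, rounds to 7, times 5).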
util/network/src/node_table.rs
Outdated
    .collect();
refs.sort_by(|a, b| {
    let mut ord = a.failure_percentage().cmp(&b.failure_percentage());
    if ord == Ordering::Equal {
a.failure_percentage().cmp(&b.failure_percentage())
    .then_with(|| a.failures.cmp(&b.failures))
    .then_with(|| b.attempts.cmp(&a.attempts))
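For reference, `Ordering::then_with` only evaluates the next closure when the previous comparisons returned `Equal`, so the suggestion reads as a tie-breaker chain. A self-contained sketch with a hypothetical flattened stats struct (the real code calls methods on node refs):

```rust
use std::cmp::Ordering;

// Hypothetical flattened stats record, for illustration only.
struct Stats {
    failure_percentage: u32,
    failures: u32,
    attempts: u32,
}

// Tie-breaker chain: failure percentage first, then raw failures, then
// attempts. Note the reversed operands in the last step: with identical
// failure stats, the node with MORE attempts (more evidence it is
// reachable) sorts first.
fn compare(a: &Stats, b: &Stats) -> Ordering {
    a.failure_percentage.cmp(&b.failure_percentage)
        .then_with(|| a.failures.cmp(&b.failures))
        .then_with(|| b.attempts.cmp(&a.attempts))
}
```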
pub failures: u32,
pub last_attempted: Option<Tm>,
Interesting, I didn't notice it's not used at all.
@arkpar Could you have a look if the changes here make sense to you?
Does this completely fix #7697 (and other duplicate issues)?
LGTM |
* Filter-out nodes.json
* network: sort node table nodes by failure ratio
* network: fix node table tests
* network: fit node failure percentage into buckets of 5%
* network: consider number of attempts in sorting of node table
* network: fix node table grumbles
* Filter-out nodes.json (#7716)
  * Filter-out nodes.json
  * network: sort node table nodes by failure ratio
  * network: fix node table tests
  * network: fit node failure percentage into buckets of 5%
  * network: consider number of attempts in sorting of node table
  * network: fix node table grumbles
* Fix client not being dropped on shutdown (#7695)
  * parity: wait for client to drop on shutdown
  * parity: fix grumbles in shutdown wait
  * parity: increase shutdown timeouts
* Wrap --help output to 120 characters (#7626)
  * Update Clap dependency and remove workarounds
  * WIP
  * Remove line breaks in help messages for now
  * Multiple values can only be separated by commas (closes #7428)
  * Grumbles; refactor repeating code; add constant
  * Use a single Wrapper rather than allocate a new one for each call
  * Wrap --help to 120 characters rather than 100 characters
Deleting nodes.json fixed the issue for me.
If we reach more than 1024 nodes in the table, filter out the ones that we never attempted to connect to.
(untested yet; I need a better internet connection to test it properly)