feat(networking): prune peers from peerstore exceeding capacity #1513
Conversation
Force-pushed from 5c7e52f to c25ab02
Please check my comments.
```nim
prunned += 1

# if we still need to prune, prune peers that are not connected
let notConnecteed = pm.peerStore.peers.filterIt(it.connectedness != Connected).mapIt(it.peerId)
```
Avoid exposing internal members of the types (the `peers` property in this case); add accessor functions instead, like the following:

```diff
- let notConnecteed = pm.peerStore.peers.filterIt(it.connectedness != Connected).mapIt(it.peerId)
+ let notConnecteed = pm.peerStore.getNotConnectedPeers().mapIt(it.peerId)
```
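For reference, a minimal sketch of what such an accessor could look like; the `PeerStore` shape and its `peers`/`connectedness` fields are assumed from the quoted hunk, not the actual nwaku API:

```nim
import std/sequtils

# Hypothetical sketch: expose a read-only query instead of the raw `peers`
# collection, so callers never touch the peerstore's internal members.
proc getNotConnectedPeers*(peerStore: PeerStore): auto =
  peerStore.peers.filterIt(it.connectedness != Connected)
```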
Good one, thanks. Fixed in 3df173c.
```nim
# if we still need to prune, prune peers that are not connected
let notConnecteed = pm.peerStore.peers.filterIt(it.connectedness != Connected).mapIt(it.peerId)
for peerId in notConnecteed:
```
Typo:

```diff
- for peerId in notConnecteed:
+ for peerId in notConnected:
```
Fixed, thanks: 3df173c
```nim
await sleepAsync(PrunePeerStoreInterval)
continue
```
Split things into sub-procedures so you can `return` instead of repeating the `sleepAsync(...)` call; that will make the code more readable. The sub-procedure should be a linear function wrapped and called by the periodic execution function, since mixing the loop code with the linear code makes things more complex to understand (see the sketch below).
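A sketch of the suggested shape; the proc names and the `pm.capacity` field are illustrative, not the PR's actual code. The linear proc can return early at any point, and only the wrapper loops and sleeps:

```nim
import chronos

# Hypothetical linear sub-procedure: it can `return` wherever pruning is
# done, instead of repeating a sleep-and-continue pattern inside a loop body.
proc prunePeerStoreOnce(pm: PeerManager) =
  let numPeers = pm.peerStore.peers.len
  if numPeers <= pm.capacity:
    return  # nothing to prune this round
  # ... pruning steps go here ...

# Periodic wrapper: owns the loop and the single sleepAsync call.
proc prunePeerStoreLoop(pm: PeerManager) {.async.} =
  while true:
    pm.prunePeerStoreOnce()
    await sleepAsync(PrunePeerStoreInterval)
```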
error "Max number of connections can't be greater than PeerManager capacity", | ||
capacity = capacity, | ||
maxConnections = maxConnections | ||
doAssert(false, "Max number of connections can't be greater than PeerManager capacity") |
Raising a `Defect` is the preferred way to finish a program in Nim. It is the `panic` equivalent of Go or Rust.
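For illustration, the `doAssert(false, ...)` in the quoted hunk could become an explicit raise; this is a sketch, not the exact change made in the linked commit:

```nim
# Raising a Defect aborts the program much like a Go/Rust panic; unlike
# doAssert(false, ...), it states the intent to fail directly.
if maxConnections > capacity:
  raise newException(Defect,
    "Max number of connections can't be greater than PeerManager capacity")
```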
Sure. I think I saw `doAssert` somewhere in libp2p and nimbus, but switched to raising a `Defect` here: 3df173c. Also added a test case, since it's important.
LGTM.
```nim
let peersToPrune = numPeers - capacity

# prune peers with too many failed attempts
var prunned = 0
```
```diff
- var prunned = 0
+ var pruned = 0
```

:)
```nim
prunned += 1

# if we still need to prune, prune peers that are not connected
let notConnecteed = pm.peerStore.peers.filterIt(it.connectedness != Connected).mapIt(it.peerId)
```
Checking my understanding: because `maxConnections <= capacity`, we should always be able to reach our `peersToPrune` quota?
Exactly! Connected peers never exceed `maxConnections`, which is at most `capacity`, so whenever the store is over capacity there are always enough not-connected peers to meet the `peersToPrune` quota. Slightly related: since enforcing `maxConnections <= capacity` is quite important (otherwise that guarantee wouldn't hold), I added a unit test for it here: a94325a.
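A hypothetical illustration of the kind of invariant test meant here; the `PeerManager.new(...)` signature is invented for the example, and the real test lives in a94325a:

```nim
import std/unittest

# Hypothetical: constructing a PeerManager with maxConnections > capacity
# must abort with a Defect, so the invariant can never be violated at runtime.
test "peer manager fails when max connections exceeds peerstore capacity":
  expect Defect:
    discard PeerManager.new(switch, maxConnections = 100, capacity = 50)
```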
Force-pushed from f2fc91b to a94325a
@LNSD comments were addressed, mind checking and approving if OK?
LGTM, thanks ✅
Closes #1504

Summary: Every `PrunePeerStoreInterval` the peerstore is pruned, removing peers from it if we have exceeded its capacity. The criteria are the following: peers that failed in the past are removed first, and if we are still over capacity, not-connected peers are removed.

Other changes:
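A condensed sketch of the two-stage policy described above; field and proc names such as `numberFailedConn`, `capacity`, and `delete` are assumptions for illustration, and the authoritative version is the PR diff itself:

```nim
import std/sequtils

# Hypothetical sketch of the pruning policy: remove peers with failed
# connection attempts first, then not-connected peers, until the store
# is back under capacity.
proc prunePeerStore(pm: PeerManager) =
  let numPeers = pm.peerStore.peers.len
  if numPeers <= pm.capacity:
    return

  var peersToPrune = numPeers - pm.capacity

  # stage 1: prune peers that failed to connect in the past
  for peer in pm.peerStore.peers.filterIt(it.numberFailedConn > 0):
    if peersToPrune == 0:
      break
    pm.peerStore.delete(peer.peerId)
    dec peersToPrune

  # stage 2: if we still need to prune, prune peers that are not connected
  for peer in pm.peerStore.peers.filterIt(it.connectedness != Connected):
    if peersToPrune == 0:
      break
    pm.peerStore.delete(peer.peerId)
    dec peersToPrune
```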