
Networking MVP: Refactor + extend functionality #1353

Closed
17 tasks done
alrevuelta opened this issue Nov 8, 2022 · 5 comments
jm-clius commented Nov 9, 2022

> select peers for service protocols (e.g. store, filter, lightpush), while prioritising peers that were manually added for these protocols (e.g. via the --storenode, --filternode, --lightpushnode options)

Note that this was a previous requirement. There is a recently proposed API change whereby service peers can be explicitly included in API calls for service protocols (implying that they must be managed/kept by the application). This would obsolete the --storenode, --filternode, etc. options.
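The selection logic described above could look something like this Python sketch. All names (`select_service_peer`, the `configured` mapping standing in for `--storenode`-style flags) are hypothetical illustrations, not nwaku's actual API:

```python
# Sketch: pick a peer for a service protocol, preferring manually
# configured peers (as if passed via --storenode, --filternode, ...).
# Names are illustrative only; nwaku is written in Nim, not Python.
import random

def select_service_peer(protocol, peerstore, configured):
    """Return a peer supporting `protocol`, preferring configured ones."""
    # A manually configured peer for this protocol wins outright.
    if protocol in configured:
        return configured[protocol]
    # Otherwise fall back to any known peer that supports the protocol.
    candidates = [p for p, protos in peerstore.items() if protocol in protos]
    return random.choice(candidates) if candidates else None

peerstore = {"peerA": {"store", "relay"}, "peerB": {"filter"}}
configured = {"store": "peerC"}  # as if set via a --storenode flag
assert select_service_peer("store", peerstore, configured) == "peerC"
assert select_service_peer("filter", peerstore, configured) == "peerB"
```

Under the proposed API change mentioned above, `configured` would instead come from the API call itself rather than CLI options, but the priority rule stays the same.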

> maintain the number of connected peers between those thresholds

Related to previous comment, but perhaps worthwhile separating this idea between maintaining managed peers & maintaining connections (i.e. the conn manager should/could have some backup peer options when a connection dies)
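The threshold-maintenance idea could be sketched as a pure function that decides what to dial and what to drop; the connection manager's backup-peer behaviour falls out of the "low on connections" branch. Everything here (`maintain_connections` and its parameters) is a hypothetical illustration:

```python
# Sketch: keep the connection count within [min_conns, max_conns].
# Hypothetical names; not nwaku's actual implementation.
def maintain_connections(connected, known_peers, min_conns, max_conns):
    """Return (to_dial, to_drop) so the connection count lands in range."""
    to_dial, to_drop = [], []
    if len(connected) < min_conns:
        # Low on connections: pull backup peers from the peer store.
        backups = [p for p in known_peers if p not in connected]
        to_dial = backups[: min_conns - len(connected)]
    elif len(connected) > max_conns:
        # Over capacity: drop the excess (a real impl would drop the
        # lowest-scored connections, not arbitrary ones).
        to_drop = connected[max_conns:]
    return to_dial, to_drop

dial, drop = maintain_connections(["a"], ["a", "b", "c"],
                                  min_conns=2, max_conns=4)
assert dial == ["b"] and drop == []
```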

> We currently discover and connect to all nodes naively (see runDiscv5Loop). We need i) to filter according to some criteria and ii) to decouple discovery from connecting.

Agree on both points, although (i) may just be "supports Waku" as a start.
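Decoupling as described could mean discovery only admits filtered peers into the peer store, while a separate loop decides whom to dial. A minimal sketch with the "supports Waku" filter as the only criterion; `discovery_loop` and `supports_waku` are hypothetical names, not the real `runDiscv5Loop`:

```python
# Sketch: discovery admits peers to the peer store if they pass a
# filter; connecting is handled elsewhere. Illustrative names only.
def discovery_loop(discovered, peerstore, supports_waku):
    """Admit discovered peers that pass the filter; never dials."""
    for peer in discovered:
        if supports_waku(peer):
            peerstore.add(peer)

peerstore = set()
discovery_loop(["waku:1", "eth:2", "waku:3"], peerstore,
               supports_waku=lambda p: p.startswith("waku"))
assert peerstore == {"waku:1", "waku:3"}
```

A connection-management loop (see below in the thread) would then draw from `peerstore` independently, which is what decoupling buys: discovery rate and dialing rate no longer have to match.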

> AFAIK we have no logic for handling peer disconnection

Not sure I follow? Max peer connections is a nim-libp2p setting and peer disconnects are handled there and don't count towards the total connections. We've also registered a callback to handle disconnects on our side and update the state in the current Waku Peer Store.

> We may want to add an inbound/outbound balance

Indeed. Mentioned before, but worthwhile to also look at what has been done for nimbus.
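One common way to enforce an inbound/outbound balance (used in various p2p stacks, though the specifics here are an assumption, not what nimbus or nwaku actually does) is to cap the inbound share of the connection budget:

```python
# Sketch: gate new inbound connections on an inbound-share quota.
# `max_inbound_ratio` and the function name are hypothetical.
def can_accept_inbound(n_in, n_out, max_conns, max_inbound_ratio=0.8):
    """Reject an inbound connection if it would exceed the inbound quota."""
    if n_in + n_out >= max_conns:
        return False  # no connection slots left at all
    return (n_in + 1) / max_conns <= max_inbound_ratio

assert can_accept_inbound(7, 1, 10) is True   # 8/10 inbound, at the cap
assert can_accept_inbound(8, 1, 10) is False  # 9/10 inbound, over the cap
```

Reserving some outbound-only slots like this protects against eclipse-style situations where all slots fill up with inbound peers chosen by others.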

> We may also want to keep a given balance of connected peers with certain capabilities, e.g. we want x% of our connections to be to peers that support the store protocol.

Yes! :)
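The capability-balance target could be computed as a simple deficit per capability, which the dialing logic then tries to fill. A sketch under assumed names (`capability_deficit`, `targets` as fraction-of-max shares):

```python
# Sketch: how many more peers per capability are needed to hit a
# target share of max_conns. All names are illustrative.
def capability_deficit(connected, targets, max_conns):
    """connected: peer id -> set of capabilities.
    targets: capability -> desired share of max_conns (0..1)."""
    counts = {}
    for caps in connected.values():
        for c in caps:
            counts[c] = counts.get(c, 0) + 1
    return {cap: max(0, int(share * max_conns) - counts.get(cap, 0))
            for cap, share in targets.items()}

connected = {"a": {"store"}, "b": {"relay"}, "c": {"relay"}}
deficit = capability_deficit(connected, {"store": 0.5}, max_conns=10)
assert deficit == {"store": 4}  # want 5 store peers, have 1
```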

> The PeerManager should also prune peers from the PeerStore from time to time, according to some criteria.

Indeed. And this does indeed seem like you're considering connection management and peer management as logically separate (apologies for being unclear up to this point :D ). Sorting can be very naive in the beginning (what little we have of scoring, source, ability to connect, etc.) and then we just periodically prune the ones at the bottom when we reach capacity.
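The naive sort-and-prune described above is straightforward to sketch; the score here stands in for whatever mix of scoring, source, and connectivity history ends up being used (hypothetical names throughout):

```python
# Sketch: periodic peer-store pruning. Keep the best-scored peers up
# to capacity, drop the rest. Illustrative, not nwaku's actual code.
def prune_peerstore(peers, capacity):
    """peers: peer id -> naive score. Returns the surviving peers."""
    if len(peers) <= capacity:
        return dict(peers)
    ranked = sorted(peers.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:capacity])

peers = {"a": 3, "b": 1, "c": 5, "d": 2}
assert prune_peerstore(peers, capacity=2) == {"c": 5, "a": 3}
```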

jm-clius commented Nov 9, 2022

Also cc @LNSD in terms of peer management design requirements from his POV.

alrevuelta commented Nov 11, 2022

> Related to previous comment, but perhaps worthwhile separating this idea between maintaining managed peers & maintaining connections (i.e. the conn manager should/could have some backup peer options when a connection dies)

My initial thought was to have both peer-management and connection-management components as part of the PeerManager, at least for the first iteration. These are my definitions:

  • peer-management: manages the peerstore by removing peers that don't meet some criteria, such as not supporting Waku, misbehaving, or simply because there is no more space. We can't store infinite peers.
  • connection-management: manages the connections, trying to keep the number of connections between a min and a max. When low on connections, it finds a peer in the peerstore and connects to it. It also allows directly dialing a given peer, and has to disconnect from another peer if we have already reached the maximum number of connections. It also updates the peerstore with connection-related metrics (such as the existing Connectedness). In general, it tries to keep healthy connections, also monitoring existing ones and dropping them according to some criteria.
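The two definitions above could be sketched as one class hosting both concerns, logically separated into distinct methods. This is a toy illustration of the proposed split, not the real PeerManager (which is Nim); every name here is assumed:

```python
# Sketch: one PeerManager module, two logically separate concerns.
# Illustrative only; dialing is faked as a set insertion.
class PeerManager:
    def __init__(self, min_conns, max_conns, capacity):
        self.min_conns, self.max_conns = min_conns, max_conns
        self.capacity = capacity      # peer-store size limit
        self.peerstore = {}           # peer id -> naive score
        self.connected = set()

    # --- peer management: curate the store -----------------------
    def prune(self):
        """Drop the lowest-scored peers once the store is over capacity."""
        while len(self.peerstore) > self.capacity:
            worst = min(self.peerstore, key=self.peerstore.get)
            del self.peerstore[worst]

    # --- connection management: keep count in [min, max] ---------
    def rebalance(self):
        """Dial best-scored peers when low; disconnect when over max."""
        if len(self.connected) < self.min_conns:
            ranked = sorted(self.peerstore,
                            key=self.peerstore.get, reverse=True)
            for p in ranked:
                self.connected.add(p)  # stand-in for an actual dial
                if len(self.connected) >= self.min_conns:
                    break
        while len(self.connected) > self.max_conns:
            self.connected.pop()

pm = PeerManager(min_conns=2, max_conns=4, capacity=3)
pm.peerstore = {"a": 3, "b": 1, "c": 5, "d": 2}
pm.prune()        # "b" (lowest score) is dropped from the store
pm.rebalance()    # dials "c" and "a" to reach min_conns
assert len(pm.peerstore) == 3 and pm.connected == {"c", "a"}
```

Keeping both in one module (as suggested for the first iteration) while separating the methods leaves the door open to splitting them later.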

> > AFAIK we have no logic for handling peer disconnection
>
> Not sure I follow? Max peer connections is a nim-libp2p setting and peer disconnects are handled there and don't count towards the total connections. We've also registered a callback to handle disconnects on our side and update the state in the current Waku Peer Store.

My bad, I'm referring to peer disconnections "on purpose" — "triggering" rather than "handling". That is, disconnections that we explicitly perform, because i) we are no longer interested in that peer, or ii) we reached the maximum connections and we want to connect to a given peer. Perhaps ii) is too specific.

> And this does indeed seem like you're considering connection management and peer management as logically separate

Yep, I consider them logically separate, but right now I think they can go in the same module.

@fryorcraken commented:
@alrevuelta there seems to be only one TODO remaining. Is that accurate? When is the work planned?

@alrevuelta commented:
@fryorcraken The TODO was an old one; I've removed it. I consider this complete, since it provides the functionality that was planned. More peer-management and networking improvements will be added in the future, but they will be tracked in their own issues.
