Deterministically de-dupe mdns connections #66
In addition to de-duping peers on the mdns connection (e.g. when two peers try to connect to each other at the same time), we also need to de-dupe across dht and mdns, e.g. a peer is broadcasting on both mdns and dht, and another peer tries to connect on both. We should probably always prefer the mdns (local) connection? Or does it make a difference, since the DHT should route locally anyway? Either way, we need to deterministically choose one or the other.
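One way to express the "prefer mdns over dht" preference is a small tie-break function. This is only a sketch of the idea under discussion; the function and field names (`chooseConnection`, `transport`) are assumptions, not the project's actual API:

```javascript
// Sketch: given two live connections to the same peer on different
// transports, pick the one to keep. Prefers the local mdns connection
// over dht; with the same transport, keep the existing one and let the
// deterministic duplicate rule handle it. All names here are assumptions.
function chooseConnection (existing, incoming) {
  if (existing.transport === incoming.transport) {
    // Same transport: not this function's job to break the tie.
    return existing
  }
  return existing.transport === 'mdns' ? existing : incoming
}
```

Because both peers run the same pure function on the same pair, they agree on which connection survives, regardless of arrival order.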
Something I don't understand on this issue:

```js
if (peer) {
  connection.end()
}
```

So that would take care of Peer B repeatedly trying to connect to A. But wouldn't that also take care of Peer A trying to connect to Peer B, since all peers are dumped to the peer Map? Or does the race condition you're referring to, @gmaclennan, mean there could be a call to This question is separate from the other issue of always preferring mdns over dht.
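A minimal sketch of the de-dupe logic being discussed may help (identifiers like `peers` and `onConnection` are assumptions, not the actual code):

```javascript
// Naive de-dupe sketch: keep a Map of known peers and close any
// connection that duplicates an existing entry. If A and B dial each
// other simultaneously, each side can see the *other's* connection as
// the duplicate and close it, leaving zero open connections.
const peers = new Map()

function onConnection (peerId, connection) {
  if (peers.has(peerId)) {
    connection.end() // duplicate: close it
    return false
  }
  peers.set(peerId, connection)
  return true
}
```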
The race condition is that each peer chooses to close a different connection, so you end up with zero connections.
So, I'm struggling to solve this issue.
I haven't looked at the code here for a while, but in hyperswarm the initiator is the peer that connects (i.e. that initiates the TCP connection), as opposed to the peer that receives the incoming connection.
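Building on the initiator distinction, one common deterministic tie-break (which hyperswarm follows in spirit; the exact rule and these names are assumptions) is to compare the two peers' ids. The peer with the smaller id keeps the connection it initiated; the other keeps the one it received. Both sides compute the same answer, so exactly one connection survives:

```javascript
// Hedged sketch of a deterministic duplicate resolution, not the actual
// hyperswarm code. `localId`/`remoteId` are comparable peer ids (e.g. hex
// strings of public keys); `initiatedConn` is the connection we dialed,
// `receivedConn` the one we accepted.
function keepInitiated (localId, remoteId) {
  // Lexicographic comparison: both peers evaluate this symmetrically.
  return localId < remoteId
}

function resolveDuplicate (localId, remoteId, initiatedConn, receivedConn) {
  if (keepInitiated(localId, remoteId)) {
    receivedConn.end()
    return initiatedConn
  }
  initiatedConn.end()
  return receivedConn
}
```

If peer A (id `aaa`) keeps its initiated connection, peer B (id `bbb`) computes `keepInitiated('bbb', 'aaa') === false` and keeps its received connection, which is the same TCP connection A initiated, so they agree.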
Regarding deterministically choosing between mdns and dht, would it suffice to:
The existing code to de-dupe connections can result in a race condition where peers do not agree on which of a duplicated pair of connections to close. We should do this deterministically, probably borrowing from how hyperswarm does it.