
Deterministically de-dupe mdns connections #66

Closed
gmaclennan opened this issue Feb 2, 2023 · 6 comments · Fixed by #112

Comments

@gmaclennan (Member)

The existing code to de-dupe connections can result in a race condition where peers do not agree on which of the duplicate connections to close. We should do this deterministically, probably borrowing from how hyperswarm does it.
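For reference, the kind of deterministic tie-break used by hyperswarm could be sketched roughly like this (a minimal sketch with hypothetical helper names, not the actual hyperswarm code): both peers compare public keys byte-wise and keep only the connection initiated by the peer with the smaller key, so both sides always reach the same verdict.

```javascript
// Hypothetical sketch of a deterministic de-dupe rule. Both peers run the
// same byte-wise comparison (with swapped arguments), so they agree on
// which duplicate connection to keep.

// Returns -1, 0, or 1, like Buffer.compare.
function compareKeys(a, b) {
  return Buffer.compare(a, b)
}

// Keep the connection initiated by the peer with the smaller public key;
// close the other one.
function shouldKeepConnection(localKey, remoteKey, weInitiated) {
  const cmp = compareKeys(localKey, remoteKey)
  if (cmp === 0) return true // connecting to self is handled elsewhere
  const smallerIsUs = cmp < 0
  return weInitiated === smallerIsUs
}
```

Because both sides evaluate the same rule with mirrored arguments, exactly one of the two duplicate connections survives, and both peers agree on which one.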

@gmaclennan (Member Author)

In addition to de-duping peers on the mdns connection (e.g. when two peers try to connect to each other at the same time), we also need to de-dupe across dht and mdns, e.g. a peer is broadcasting on both mdns and dht, and another peer tries to connect on both. We should probably always prefer the mdns (local) connection? Or does it make no difference, since the DHT should route locally anyway? Either way, we need to deterministically choose one or the other.

@tomasciccola (Contributor) commented Apr 17, 2023

Something I don't understand about this issue:
The idea is to avoid duplicate connections between peers (i.e. Peer A connects to Peer B, and Peer B connects to Peer A, through, let's say, mdns, so we get two sockets between Peer A and B, one of which is unnecessary).
In the code I can see that we're handling duplicate connections coming from a single peer (i.e. Peer B tries to connect to Peer A twice, for some reason).
In the on('connect') handler we make a check:

if (peer) {
  connection.end()
}

So that would take care of Peer B repeatedly trying to connect to A. But wouldn't it also take care of Peer A trying to connect to Peer B, since all peers are added to the peer Map? Or does the race condition you're referring to, @gmaclennan, mean there could be a call to this._connect() close to the firing of an on('connect') handler?

This question is separate from the other issue of always preferring mdns over dht.

@gmaclennan (Member Author)

The race condition is that each peer chooses to close a different connection, so you end up with zero connections.
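To illustrate the race (a hypothetical simulation, not the actual module code): if each peer simply closes whichever duplicate socket it notices second, and the two peers happen to observe the sockets in opposite orders, every socket gets closed by someone and the peers end up with zero connections.

```javascript
// Hypothetical simulation: each peer closes the second-arriving duplicate.
// arrivalOrder is the list of connection ids in the order this peer saw them.
function naiveDedupe(arrivalOrder) {
  const seen = new Set()
  const closed = []
  for (const id of arrivalOrder) {
    if (seen.size > 0) closed.push(id) // a peer already exists: close the dup
    seen.add(id)
  }
  return closed
}

// Peer A saw its own outgoing connection "A->B" first;
// Peer B saw its own outgoing connection "B->A" first.
const closedByA = naiveDedupe(['A->B', 'B->A']) // A closes "B->A"
const closedByB = naiveDedupe(['B->A', 'A->B']) // B closes "A->B"
// Between them, the peers close both sockets: zero connections remain.
```

A deterministic rule (e.g. one based on comparing public keys) avoids this, because both peers then agree on which of the two sockets to close.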

@tomasciccola tomasciccola added Medium and removed Low labels Apr 18, 2023
@tomasciccola (Contributor)

So, I'm struggling to solve this issue.
Comparing with the code that you linked, @gmaclennan, I can see that in addition to checking whether the peer is already in the peers Map, they check whether the existing peer was the initiator and compare that to the new connection (here).
We don't do this, but in our case isInitiator is always false (when instantiating the NoiseSecretStream), so we could add a check like !connection.isInitiator || remotePublicKey === this.identityPublicKey.
Also, in the case of hyperswarm, the stream is destroyed (here) instead of just returning (here).
On the other hand, it seems that hyperswarm has something like a retry timer (here) in the .destroy method, but that isn't called inside the class itself.
Anyway, looking at the hyperswarm code that was linked in the issue, it's not obvious how to approach this...
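One way an initiator-aware check could look in the on('connect') handler (a sketch only; `handleConnection`, `peers`, and `identityPublicKey` are assumed names, not the module's actual API, and the tie-break rule is the key comparison borrowed from hyperswarm's approach):

```javascript
// Hypothetical dedupe check for the connection handler. On a duplicate,
// keep the connection initiated by the peer with the smaller public key,
// so both peers make the same choice and destroy the same socket.
function handleConnection(peers, identityPublicKey, connection) {
  const remoteKey = connection.remotePublicKey.toString('hex')
  const existing = peers.get(remoteKey)
  if (existing) {
    const weAreSmaller =
      Buffer.compare(identityPublicKey, connection.remotePublicKey) < 0
    // Keep the new connection only if it was initiated by the smaller key.
    const keepNew = connection.isInitiator === weAreSmaller
    if (keepNew) {
      existing.destroy()
      peers.set(remoteKey, connection)
    } else {
      connection.destroy()
    }
    return
  }
  peers.set(remoteKey, connection)
}
```

Note this destroys the losing stream (as hyperswarm does) rather than just returning, so the duplicate socket is actually torn down.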

@gmaclennan (Member Author)

I haven't looked at the code here for a while, but in hyperswarm the initiator is the peer that connects (i.e. that initiates the TCP connection), as opposed to the peer that receives the incoming connection.

@tomasciccola (Contributor) commented May 25, 2023

Regarding deterministically choosing mdns over dht, would it suffice to:

  1. When this._onMdnsConnection fires,
  2. and the peer already exists with discoveryType = 'dht',
  3. replace that peer with the peer from the mdns connection?
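A rough sketch of those three steps (the function and field names here are assumptions for illustration, not the module's actual API):

```javascript
// Hypothetical mdns-over-dht preference: if a peer already exists via dht,
// tear down the dht socket and replace the entry with the mdns connection.
function onMdnsConnection(peers, remoteKeyHex, mdnsConn) {
  const existing = peers.get(remoteKeyHex)
  if (existing && existing.discoveryType === 'dht') {
    existing.connection.destroy() // drop the dht socket
    peers.set(remoteKeyHex, { discoveryType: 'mdns', connection: mdnsConn })
    return
  }
  if (!existing) {
    peers.set(remoteKeyHex, { discoveryType: 'mdns', connection: mdnsConn })
    return
  }
  // An mdns connection already exists for this peer: fall back to the
  // deterministic de-dupe rule for same-transport duplicates (not shown).
}
```

Since both peers see the same (discoveryType, peer) pair, this preference is itself deterministic; the remaining question is the same-transport duplicate case, which still needs a tie-break both sides agree on.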
