This repository has been archived by the owner on Jun 26, 2023. It is now read-only.

RFC: async iterator based discovery #27

Closed
alanshaw opened this issue Oct 30, 2018 · 10 comments

@alanshaw
Member

I've been exploring what an async iterator based discovery interface might look like. The following gist is a rough sketch (very unlikely to actually run) of what it could be like.

TLDR: it's just an (async) iterator that yields PeerInfo instances.

In the gist I've explored what it might be like to use these in a libp2p node, which would need for them to be abortable (stop when the libp2p node stops) and recoverable (when they error, restart them).

To be abortable I've used an AbortController which is used by the fetch API in the browser. This is just one way of giving us the ability to halt async iteration from the outside (it's kinda cool IMHO). We could use this to implement abortable methods throughout libp2p.
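A minimal sketch of that shape (not the gist's code; findNextPeer is a made-up stand-in for whatever mechanism a module uses, e.g. an mDNS response or a DHT query result):

async function * discover ({ signal } = {}) {
  while (!signal || !signal.aborted) {
    const peerInfo = await findNextPeer({ signal }) // stand-in for the module's own lookup
    yield peerInfo
  }
}

// Consumer side: halt iteration from the outside with an AbortController
const controller = new AbortController()
setTimeout(() => controller.abort(), 10000) // give up after 10s

for await (const peerInfo of discover({ signal: controller.signal })) {
  console.log('discovered', peerInfo.id)
}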

We should talk about what happens when we receive a PeerInfo from a discovery module and how we can communicate that to where it needs to be.

https://gist.github.com/alanshaw/ff657468939a1795aa5b9ac626121c48

@libp2p/javascript-team I'd love to hear your thoughts on this! 🚀 ✨


refs ipfs/js-ipfs#1670

@jacobheun
Contributor

@alanshaw I like the approach. For consumers of libp2p I think we could expose a generator function for the peers, which could also be the thing that actually starts the discovery. Currently, if discovery is enabled, we start it on libp2p startup. If we went with the generator approach, we could start the discovery services (if they're not already running) when the generator is first consumed. The benefit is that if nothing is consuming the generator, we don't need to spend resources on discovery. Something like:

for await (const peer of libp2p.peerDiscovery.find()) {
  // Do something with the discovered peer
}

instead of

libp2p.on('peer:discovery', (peer) => {
  // Do something with the discovered peer
})

The find method would include the iterator merge logic from your gist. This would also enable us to have libp2p.peerDiscovery.abort(). As we've talked about better support for starting and stopping services in libp2p, this could be a step in that direction.
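A rough sketch of what that could look like, assuming each discovery service exposes start()/stop() and an async-iterable peers property, and using merge from streaming-iterables to combine the sources:

const { merge } = require('streaming-iterables')

class PeerDiscovery {
  constructor (services) {
    this._services = services
  }

  async * find () {
    // Only spend resources on discovery once someone actually consumes the generator
    await Promise.all(this._services.map(s => s.start()))
    try {
      yield * merge(...this._services.map(s => s.peers))
    } finally {
      // Runs on break/return/throw too, so when the consumer stops, the services stop
      await Promise.all(this._services.map(s => s.stop()))
    }
  }
}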

@alanshaw
Member Author

alanshaw commented Dec 4, 2018

Worth a look for more inspiration/learnings 😄 https://github.com/mozilla/libdweb/blob/master/Readme.md#service-discovery-api

@vasco-santos
Member

@alanshaw thanks for your proposal for the async iterator approach, it looks good! And @jacobheun's proposal is great too 👍

@dirkmc
Contributor

dirkmc commented Mar 22, 2019

To be honest peer discovery feels like a different paradigm to me.

When reading from a connection, it's typically a single consumer pulling data out of a pipe continuously. When a peer is discovered, it's a discrete event that multiple parties may be interested in, potentially only for a limited period of time.

Either situation can be implemented with Iterators or Events, but my sense is that the connection case is more naturally implemented with Iterators, and the peer discovery case is more naturally implemented with Events.

@dirkmc
Contributor

dirkmc commented Mar 22, 2019

If I were to make a change to the discovery interface, it would be to add a method for retrieving all peers that the discovery source already knows about. For example, when connecting to the rendezvous server, I first want to retrieve all the other peers that are already connected to the server, and then listen for any new peers that come in.
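One hypothetical shape for that (getPeers, the 'peer' event and handlePeer are illustrative names, not an agreed interface):

// Peers the source already knows about (e.g. everyone currently on the rendezvous server)
const known = await discovery.getPeers()
for (const peer of known) {
  handlePeer(peer) // placeholder for whatever the consumer does with a peer
}

// Peers that show up after this point
discovery.on('peer', peer => handlePeer(peer))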

@alanshaw
Member Author

alanshaw commented Apr 9, 2019

Either situation can be implemented with Iterators or Events, but my sense is that the connection case is more naturally implemented with Iterators, and the peer discovery case is more naturally implemented with Events.

In hindsight, and now that I know a lot more about libp2p, I think you're right. I found it a bit awkward to work with iterators in https://github.com/mozilla/libdweb/blob/master/Readme.md#service-discovery-api and I think the same could be said for this.

@vasco-santos
Member

Good point raised here!

Thinking about this further after some work using async iterators for the libp2p-daemon, I also agree with @dirkmc. In this case I also feel that iterating is strange here, and events would be more natural for the consumer of the discovery services!

I think that the API should be exactly the same as it already is, removing the callbacks in favor of async!
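Roughly, a sketch of what that reads as (ExampleDiscovery is an illustrative name): keep the event-based surface, but make start/stop promise-based instead of callback-based.

const { EventEmitter } = require('events')

class ExampleDiscovery extends EventEmitter {
  async start () {
    // begin discovering and emit a 'peer' event for each PeerInfo found
  }

  async stop () {
    // stop discovering and release resources
  }
}

const discovery = new ExampleDiscovery()
discovery.on('peer', peerInfo => { /* handle a discovered peer */ })
await discovery.start()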

@mcollina

mcollina commented Apr 9, 2019

Overall, I would recommend using async iterators instead of streams, mainly because of the simplified error handling and auto-destroy logic. However, I would recommend using them only when it makes sense to pause the source until one or more elements have been processed.
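As a small illustration of that point (doSlowWork is a placeholder): with for await...of the generator only produces its next value when the consumer asks for it, and break triggers the generator's cleanup.

async function * source () {
  let i = 0
  while (true) {
    yield i++ // paused here until the consumer requests the next value
  }
}

for await (const n of source()) {
  await doSlowWork(n) // the source produces nothing while this runs
  if (n >= 2) break   // break calls return() on the generator, running any finally blocks
}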

@kumavis
Contributor

kumavis commented May 8, 2019

One complaint on event emitters: error handling. If an unhandled error occurs synchronously in an event handler, it bubbles up to the emitter of the event, interrupting that flow and preventing other handlers from being called.

This is a bit nuanced, and some would argue it is good and correct, but I think it results in errors propagating into larger errors instead of the failure being contained to one component of an app. As an analogy, it's better to let a process crash than to kernel panic the whole system.

I haven't explored async iterators enough to know what the gotchas are there.
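A tiny reproduction of the synchronous case being described, using Node's EventEmitter:

const { EventEmitter } = require('events')
const emitter = new EventEmitter()

emitter.on('peer', () => { throw new Error('boom') })
emitter.on('peer', () => console.log('never runs'))

try {
  emitter.emit('peer') // the throw surfaces here, at the emit call site
} catch (err) {
  console.log('caught at the emitter:', err.message)
}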

@daviddias transferred this issue from libp2p/interface-peer-discovery on Feb 5, 2020
@vasco-santos
Member

Closing this for now. When we work on the next version of the DHT, we might revisit this topic
