DHT API #11
Conversation
Rebased on master.
Some initial feedback.
    ma "github.com/multiformats/go-multiaddr"
)

var BootstrapPeers = []string{
What if the application wanted a DHT separate from ours? I'm sure that'll be the case of Ethereum. The Ethereum Foundation runs their own bootstrap nodes per network:
On a related note, what if we had more than one app behind this daemon, and each wanted a different DHT?
Yeah, we should be able to support different bootstrap peers.
I think what we want is a configuration file of sorts for the daemon profile, where we can list bootstrap peers.
But I left that for a subsequent PR (we can open a follow-up issue).
Personally I'd like to see an application "opening a session" with a daemon, and providing any initial seed configuration as part of that handshake.
I think of the daemon as a service provided by the environment/OS for applications to use, versus a sidecar for applications. As such, multi-tenancy and app-specific configuration should be part of the protocol.
For now, it's good that it's on our radar. An issue would be great, and perhaps a note in the specs.
I think we can borrow the Options-style configuration from go-libp2p and have a default bootstrap configuration with our peers.
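The Options pattern mentioned here could look roughly like this. This is a hedged sketch, not the actual go-libp2p-daemon code: the `config` struct, the `WithBootstrapPeers` option, and the default multiaddr are all hypothetical.

```go
package main

import "fmt"

// config holds daemon settings; hypothetical, for illustration only.
type config struct {
	bootstrapPeers []string
}

// Option follows the go-libp2p functional-options style.
type Option func(*config)

// WithBootstrapPeers (hypothetical) overrides the default bootstrap set,
// so an app like Ethereum can point the daemon at its own network.
func WithBootstrapPeers(addrs ...string) Option {
	return func(c *config) { c.bootstrapPeers = addrs }
}

// defaultBootstrapPeers stands in for the package-level BootstrapPeers list.
var defaultBootstrapPeers = []string{
	"/dnsaddr/bootstrap.libp2p.io/p2p/QmExampleDefaultPeer", // placeholder
}

// newConfig applies options over the defaults.
func newConfig(opts ...Option) *config {
	c := &config{bootstrapPeers: defaultBootstrapPeers}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := newConfig(WithBootstrapPeers("/ip4/127.0.0.1/tcp/4001/p2p/QmPeer"))
	fmt.Println(c.bootstrapPeers[0])
}
```

With no options the daemon would keep our peers; an app supplies its own list only when it wants a separate DHT.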
Yeah, that's not a bad idea at all.
Upon further thought I'll add a -bootstrapPeers flag, so that we can pass a comma-separated list of peers on the command line and punt on the configuration file for now.
Added a command-line flag in bd604c4
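A minimal sketch of what such a flag might look like; the actual wiring in bd604c4 may differ, and `parseBootstrapPeers` is a hypothetical helper:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// parseBootstrapPeers (hypothetical) splits a comma-separated list of
// multiaddrs; an empty flag value means "use the defaults".
func parseBootstrapPeers(s string) []string {
	if s == "" {
		return nil
	}
	return strings.Split(s, ",")
}

func main() {
	peers := flag.String("bootstrapPeers", "",
		"comma separated list of bootstrap peer multiaddrs")
	flag.Parse()
	for _, p := range parseBootstrapPeers(*peers) {
		fmt.Println(p)
	}
}
```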
<-ticker.C

conns := d.host.Network().Conns()
if len(conns) >= BootstrapConnections {
This is a bit confusing to me, as it makes me think this goroutine is going to "pin/keep connections to bootstrap peers". But in practice we don't care if the active connections are to bootstrap peers or not.
If this is intended as a low watermark check for DHT, we should also verify that the returned conns support DHT?
It's a low watermark check, mostly for the DHT -- but you may want to keep bootstrap connections (say, for relay or observed address identification) even without it.
perhaps we should earmark this as something that could/should be made into a connection manager?
either that or move it into the dht itself, as it seems useful
We could make a connection manager, but for now it's well isolated in its corner of the code :)
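As a sketch of the low-watermark loop under discussion -- simplified and hypothetical, with `countConns` and `redial` standing in for `host.Network().Conns()` and the bootstrap dialing logic:

```go
package main

import (
	"fmt"
	"time"
)

// BootstrapConnections is the low watermark from the diff above.
const BootstrapConnections = 4

// watermarkLoop runs the periodic check: every tick, count live
// connections and redial bootstrap peers only when below the watermark.
// Note it doesn't care whether the live connections are to bootstrap
// peers -- it's a count-based check, not connection pinning.
func watermarkLoop(ticks int, countConns func() int, redial func(need int)) {
	ticker := time.NewTicker(time.Millisecond)
	defer ticker.Stop()
	for i := 0; i < ticks; i++ {
		<-ticker.C
		n := countConns()
		if n >= BootstrapConnections {
			continue // enough live connections; nothing to do
		}
		redial(BootstrapConnections - n)
	}
}

func main() {
	conns := 1
	watermarkLoop(3, func() int { return conns }, func(need int) {
		fmt.Println("redialing", need, "peers")
		conns += need
	})
}
```

A connection manager would subsume this, but as written it stays a self-contained goroutine.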
defer cancel()

for _, pi := range pis {
    if d.host.Network().Connectedness(pi.ID) == inet.Connected {
I think this check is redundant, because Host#Connect() already performs it as part of its contract.
It's not -- we want to skip already-connected peers, otherwise we will connect to fewer new peers than intended.
Ah, right, my bad!
return nil
}

func (d *Daemon) connectBootstrapPeers(pis []*pstore.PeerInfo, toconnect int) int {
Maybe validate that len(pis) >= toconnect? Just a sanity check.
Nah, if it is less we'll just connect to all of them.
Oh yeah, the loop is a range loop over peers 👍
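The behavior discussed above -- a range loop that skips already-connected peers and spends the toconnect budget only on new connections -- can be sketched like this. It is a simplified, hypothetical model: `peerInfo` and `dial` stand in for `pstore.PeerInfo` and `Host.Connect`.

```go
package main

import "fmt"

// peerInfo is a toy stand-in for pstore.PeerInfo plus connection state.
type peerInfo struct {
	ID        string
	connected bool
}

// connectBootstrapPeers dials up to toconnect peers, skipping ones we're
// already connected to, and returns how many dials succeeded. Because it
// ranges over pis, a slice shorter than toconnect just connects to all of
// them -- no len(pis) >= toconnect check needed.
func connectBootstrapPeers(pis []*peerInfo, toconnect int, dial func(*peerInfo) error) int {
	count := 0
	for _, pi := range pis {
		if toconnect == 0 {
			break // budget spent
		}
		if pi.connected {
			continue // skip, so the budget buys new connections
		}
		if err := dial(pi); err != nil {
			continue // failed dials don't count against the result
		}
		pi.connected = true
		count++
		toconnect--
	}
	return count
}

func main() {
	pis := []*peerInfo{{ID: "a", connected: true}, {ID: "b"}, {ID: "c"}}
	n := connectBootstrapPeers(pis, 2, func(p *peerInfo) error { return nil })
	fmt.Println(n) // dials b and c, skipping the already-connected a
}
```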
optional int64 timeout = 7;
}

message DHTResponse {
I think we need a token that identifies the DHT request uniquely. Then all streaming responses would refer to that token, and the END response would close the life of the request. Currently it seems that responses for parallel DHT requests could get intermixed on the wire.
I thought about it too and concluded that request multiplexing is more trouble than it's worth.
If we want parallel queries, we can open another connection and issue the query there.
Unix sockets are relatively cheap -- just a transient file descriptor.
I guess that makes sense, although it complicates the client: it will either need to synchronize so as to allow only one concurrent request (bad), or maintain a pool of client instances (too complicated), because we don't want each request to create a new client...
yeah, i think the client can handle this with multiple connections. we definitely don't want to reinvent stream multiplexing at another layer in the stack.
We can create a new socket for each request, that's simple enough.
}

if ch != nil {
    for res := range ch {
do we want to block on this response? is there any sane way it could be async? i suppose that would require per client connection locking?
No easy way to make this async without reinventing multiplexing.
@@ -189,6 +224,21 @@ func (d *Daemon) doStreamHandler(req *pb.Request) *pb.Response {
	return okResponse()
}

func (d *Daemon) doListPeers(req *pb.Request) *pb.Response {
👍
p2pd/main.go
)

func main() {
	identify.ClientVersion = "p2pd"
hmm should we get a version string in there? something like fmt.Sprintf("/p2pd/%d", DaemonVersion)
Yeah, that's probably a good idea. But what is our current version?
0.1!
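A sketch of the versioned client string being discussed. Note the suggested `%d` verb wouldn't fit a "0.1" version, so this assumes a string constant; `DaemonVersion` and `clientVersion` are hypothetical names:

```go
package main

import "fmt"

// DaemonVersion is the agreed current version (hypothetical constant).
const DaemonVersion = "0.1"

// clientVersion builds the identify string set at daemon startup,
// e.g. identify.ClientVersion = clientVersion().
func clientVersion() string {
	return fmt.Sprintf("p2pd/%s", DaemonVersion)
}

func main() {
	fmt.Println(clientVersion()) // prints "p2pd/0.1"
}
```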
Agree w/ all your responses, just wanted to earmark some stuff for later perhaps.
Also, bootstrap to have a working DHT and LIST_PEERS to see our peers.