Conversation
For what it's worth, the test failure is very likely caused by nodes trying to dial each other simultaneously. This is something that in practice only happens in a local network.
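To make the failure mode concrete: when two nodes dial each other at the same time, each side ends up with two connections to the same peer, and one of them has to be dropped. The sketch below is purely illustrative (it is not the actual libp2p or Substrate code, and `PeerId`/`Endpoint` here are stand-ins for the real types); it shows one common deterministic tie-break, where both sides agree to keep only the connection dialed by the peer with the smaller ID.

```rust
// Hypothetical sketch of resolving a simultaneous-dial collision.
// Not the actual libp2p logic; types are simplified stand-ins.

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct PeerId(u64); // stand-in for a real libp2p PeerId

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Endpoint {
    Dialer,   // we initiated this connection
    Listener, // the remote initiated this connection
}

/// Returns `true` if this connection should be kept when a duplicate exists.
/// Both peers evaluate the same rule, so they drop the same duplicate.
fn keep_connection(local: PeerId, remote: PeerId, endpoint: Endpoint) -> bool {
    match endpoint {
        // Keep our outgoing dial only if we are the "smaller" peer.
        Endpoint::Dialer => local < remote,
        // Keep the incoming connection only if the remote is the "smaller" peer.
        Endpoint::Listener => remote < local,
    }
}

fn main() {
    let (a, b) = (PeerId(1), PeerId(2));
    // A dialed B while B dialed A: each side keeps exactly one connection.
    assert!(keep_connection(a, b, Endpoint::Dialer)); // A keeps its dial to B
    assert!(!keep_connection(a, b, Endpoint::Listener)); // A drops B's dial
    assert!(!keep_connection(b, a, Endpoint::Dialer)); // B drops its dial to A
    assert!(keep_connection(b, a, Endpoint::Listener)); // B keeps A's dial
    println!("ok");
}
```

Because both peers apply the identical rule to the identical pair of IDs, they never disagree about which of the two connections survives.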
49aedcc to 5b0130c
Does it break compatibility?
Libp2p is 0.3 because of some API-breaking changes within libp2p itself. So no.
I mean network-level compatibility. Would it be able to connect and communicate with 0.2 nodes?
Yes!
One question.
```diff
@@ -90,11 +73,15 @@ where TProtos: IntoIterator<Item = RegisteredProtocol> {
	}
}

// Add the bootstrap nodes to the topology and connect to them.
/*// Add external addresses.
```
Is that still required or just some leftover?
Oh, I forgot to uncomment this.
```rust
// kind of blurry. This problem will become irrelevant after
// https://github.com/paritytech/substrate/issues/1517
// TODO: also, we should shut down refused substreams gracefully; this should be fixed
// at the same time as the todo above
```
Why not just start in the `Enabled` state?
Starting in the enabled state also has trade-offs, most notably slightly poorer logs if we're unlucky. However, I'll make the switch.
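For readers following along, the two start-up strategies being weighed can be sketched as a simple state choice on the per-connection handler. This is a hypothetical illustration (the names `ProtocolState` and `Handler` are made up for this sketch, not the actual Substrate types): starting in `Enabled` means early inbound substreams are accepted rather than refused and later shut down, at the cost of less clear logs when the behaviour immediately disables the connection anyway.

```rust
// Hypothetical sketch of the handler start-up choice discussed above.
// Names are illustrative, not the actual Substrate/libp2p types.

#[derive(Debug, PartialEq)]
enum ProtocolState {
    /// Substreams are refused until the behaviour explicitly enables us.
    Disabled,
    /// Substreams are accepted immediately on connection establishment.
    Enabled,
}

struct Handler {
    state: ProtocolState,
}

impl Handler {
    /// Start enabled: no refused substreams to clean up gracefully,
    /// but noisier logs if the behaviour rejects the peer right away.
    fn new_enabled() -> Self {
        Handler { state: ProtocolState::Enabled }
    }

    /// Start disabled: early substreams get refused and must be
    /// shut down gracefully (the TODO in the diff above).
    fn new_disabled() -> Self {
        Handler { state: ProtocolState::Disabled }
    }

    fn accepts_substreams(&self) -> bool {
        self.state == ProtocolState::Enabled
    }
}

fn main() {
    assert!(Handler::new_enabled().accepts_substreams());
    assert!(!Handler::new_disabled().accepts_substreams());
    println!("ok");
}
```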
* Update libp2p
* Some more diagnostics
* 30 seconds back to 5 seconds
* Bump libp2p-core and improve test
* Fix runtime Cargo.lock
* More work
* Finish upgrade to libp2p 0.3
* Add a maximum of 60 seconds for the rounds
* Remove env_logger
* Update Cargo.lock
* Update Cargo.lock in test-runtime
* Fix test compilation
* Make the test pass
* Add identify addresses to Kademlia
* Don't connect to nodes we're already connected to
* Add warning for non-Substrate nodes
* Fix external address not added
* Start in Enabled mode
* `ring` to 0.14
* `custom_protos` module (see "Embed the topology in the NetworkBehaviour", libp2p/rust-libp2p#889)

I'm marking this as "WIP" because the connectivity test has some issues which I'm trying to figure out.
We spawn 25 nodes, and the test succeeds when every node is connected to the other 24, but sometimes 2 or 3 of them get stuck at 22 or 23 connections.
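The success condition just described can be sketched as a full-mesh check: with `n` nodes, each must report `n - 1` connections. This is an illustrative fragment under assumed names (`fully_connected` and the per-node connection counts are hypothetical, not the actual test code):

```rust
// Illustrative sketch of the connectivity-test success condition:
// every node must be connected to all of the other n - 1 nodes.
// Not the actual test code; names are hypothetical.

fn fully_connected(connection_counts: &[usize]) -> bool {
    let n = connection_counts.len();
    connection_counts.iter().all(|&c| c == n - 1)
}

fn main() {
    // All 25 nodes see 24 peers: the test succeeds.
    assert!(fully_connected(&[24; 25]));

    // A couple of nodes stuck at 22 or 23 connections,
    // as described above: the test keeps waiting.
    let mut counts = [24usize; 25];
    counts[3] = 22;
    counts[17] = 23;
    assert!(!fully_connected(&counts));

    println!("ok");
}
```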