availability-recovery: use `IfDisconnected::TryConnect` for chunks #6081
Conversation
Hmm, given the docs and your explanation it sounds sensible. So this only works in our tests because all nodes are already connected? We have a zombienet test; could you please make one validator be connected to only one other validator?
I assume you mean this test. Interesting. Last time I talked to @eskimor about req/resp protocols, our understanding was that it needs a connection specifically over the validation peerset.
I mean, the docs are talking about the peer:
What I meant with my changes is that in the zombienet test we are currently probably connected to all peers. So when we do this request on the validation peerset, we actually already have a connection to the peer, just not yet on the validation peerset.
Which is fine, and I'm not against the fix ;) I just want to have a regression test so that this behavior doesn't come back by accident.
Yes, I'll tweak the test to see if it works with 1 peer connection.
No, the peerset does not matter: as long as some connection exists, req/resp works.
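For reference, the `sc_network` enum under discussion has exactly these two modes; a paraphrased sketch (comments mine, not quoted from the crate):

```rust
// Paraphrased from `sc_network`: controls what a request-response call
// does when there is no existing connection to the target peer
// (on any peerset).
pub enum IfDisconnected {
    /// Dial the peer first, then send the request.
    TryConnect,
    /// Fail the request immediately instead of dialing.
    ImmediateError,
}
```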
* master:
  Pass through `runtime-benchmark` feature (#6110)
  Properly migrate weights to v2 (#6091)
  Buffered connection management for collator-protocol (#6022)
  Add unknown words (#6105)
  Batch vote import in dispute-distribution (#5894)
  Bump lru from 0.7.8 to 0.8.0 (#6060)
  Keep sessions in window for the full unfinalized chain (#6054)
* master: Maximum value for `MultiplierUpdate` (#6021)
bot merge

Error: "Check reviews" status is not passing for paritytech/cumulus#1711

bot merge
* master: (21 commits)
  try and fix build (#6170)
  Companion for EPM duplicate submissions (#6115)
  Bump docker/setup-buildx-action from 2.0.0 to 2.1.0 (#6141)
  companion for #12212 (#6162)
  Bump substrate (#6164)
  BlockId removal: refactor: StorageProvider (#6160)
  availability-recovery: use `IfDisconnected::TryConnect` for chunks (#6081)
  Update clap to version 4 (#6128)
  Add `force_open_hrmp_channel` Call (#6155)
  Fix fuzzing builds xcm-fuzz and erasure-coding fuzzer (#6153)
  BlockId removal refactor: Backend::state_at (#6149)
  First round of implementers guide fixes (#6146)
  bump zombienet version (#6142)
  lingua.dic is not managed by CI team (#6148)
  pallet-mmr: RPC and Runtime APIs work with block numbers (#6072)
  Separate preparation timeouts for PVF prechecking and execution (#6139)
  Malus: add disputed block percentage (#6100)
  refactor grid topology to expose more info to subsystems (#6140)
  Manual Para Lock (#5451)
  Expose node subcommands in Malus CLI (#6135)
  ...
I was looking at how https://github.com/paritytech/cumulus/blob/master/client/pov-recovery works, and it seems to me that it doesn't, unless I'm overlooking something obvious.
Here we issue a recovery request without the backing fast-path, so it will try to request chunks from all validators. But you need to be connected to at least 1/3 of validators for the recovery to work.

These connections happen over the `/validation/1` peerset, which has a limited number of slots (up to 10) for non-validators, and the connection requests are only issued in `gossip-support`, and only for validators. So if `IfDisconnected::ImmediateError` is used, a chunk request to a peer we are not connected to fails immediately. I've changed that to `IfDisconnected::TryConnect` (only for chunk requests).

Related paritytech/cumulus#1423.
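For illustration, here is a minimal sketch of what the chunk-request dispatch in `availability-recovery` looks like after this change. The names follow the polkadot subsystem APIs of that era (`NetworkBridgeTxMessage::SendRequests`, `Requests::ChunkFetchingV1`), but the exact paths, trait bounds, and parameter names are assumptions, not the literal diff:

```rust
use polkadot_node_network_protocol::request_response::{
    v1::ChunkFetchingRequest, OutgoingRequest, Recipient, Requests,
};
use polkadot_node_subsystem::messages::NetworkBridgeTxMessage;
use polkadot_node_subsystem::SubsystemSender; // assumption: trait providing send_message
use sc_network::IfDisconnected;
use sp_authority_discovery::AuthorityId as AuthorityDiscoveryId;

/// Sketch: build a chunk request for one validator and hand it to the
/// network bridge. The change in this PR is the last argument:
/// `TryConnect` instead of `ImmediateError`, so the node dials the
/// validator if no connection exists yet.
async fn request_chunk<Sender>(
    sender: &mut Sender,
    validator: AuthorityDiscoveryId,
    request: ChunkFetchingRequest,
) where
    Sender: SubsystemSender<NetworkBridgeTxMessage>,
{
    // The response receiver is elided in this sketch.
    let (req, _response_rx) = OutgoingRequest::new(Recipient::Authority(validator), request);

    sender
        .send_message(NetworkBridgeTxMessage::SendRequests(
            vec![Requests::ChunkFetchingV1(req)],
            IfDisconnected::TryConnect, // was IfDisconnected::ImmediateError
        ))
        .await;
}
```

The network bridge resolves `Recipient::Authority` to concrete addresses via authority discovery, and with `TryConnect` the underlying `sc_network` request-response machinery dials the peer before sending instead of erroring out.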
cumulus companion: paritytech/cumulus#1711