This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Low Connectivity on Kusama #3877

Closed
eskimor opened this issue Sep 17, 2021 · 1 comment

eskimor commented Sep 17, 2021

Validators are only connected to approximately 400 out of 900 validators, which is not ideal for parachain consensus.

From our findings so far, this does not look like an issue with disconnects, but rather with the needed connections not being established in the first place: the rate of new connections is always at least as high as the rate of disconnects.

We do have issues with validators not keeping up with notifications: https://grafana.parity-mgmt.parity.io/goto/v_CWBrSnk?orgId=1 polkadot_sub_libp2p_connections_closed_total{reason="sync-notifications-clogged"}, but the volume is nowhere near enough to explain the low connectivity. We also get disconnects due to keep-alive timeouts: https://grafana.parity-mgmt.parity.io/goto/5EBSBrS7k?orgId=1 - which happens far more often, but still does not explain the issues we are seeing.

We also found that peers are reporting (target="sub-libp2p") lots of wrong external addresses for themselves, but according to Pierre that is just an artifact of Docker usage; in reality those addresses are filtered out and never used.

Pierre also found that a lot of nodes seem to be reachable only via IPv6, which could in fact be an issue - this is currently being investigated via paritytech/polkadot-sdk#964 .

@eskimor eskimor self-assigned this Sep 17, 2021
eskimor added a commit that referenced this issue Oct 7, 2021
This should resolve #3877

Previously we kept the set up to date manually via add/remove calls. The
problem was that the remove call (which got called after add every time we
updated the connected peers) took addresses instead of just `PeerId`s.
Thus if a peer changed addresses, we would remove it permanently from the
set of reserved peers.

This PR fixes this by not doing the bookkeeping ourselves, but instead
taking advantage of a soon-to-be-exposed `set_reserved_peers` function.
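The failure mode described above can be sketched as follows. This is a minimal illustrative model, not the real Substrate/Polkadot API: `PeerId`, `Multiaddr`, `ReservedPeers`, and `set_reserved_peers` here are simplified stand-ins. The key assumption is that the network tracks reserved peers by `PeerId`, while the old bookkeeping added and removed entries via full addresses, so removing a stale address also removed the still-wanted `PeerId`.

```rust
use std::collections::HashSet;

// Illustrative stand-ins only; the real types live in Substrate's
// networking code and are considerably richer.
type PeerId = &'static str;

// A "multiaddr" here is just (peer id, transport address).
struct Multiaddr {
    peer: PeerId,
    addr: &'static str,
}

// The network's reserved set is keyed by PeerId, but the old
// bookkeeping API added and removed entries via full addresses.
#[derive(Default)]
struct ReservedPeers {
    peers: HashSet<PeerId>,
}

impl ReservedPeers {
    fn add(&mut self, a: &Multiaddr) {
        self.peers.insert(a.peer);
    }
    // Buggy path: removal is requested by address, but effectively
    // removes the PeerId, clobbering any fresh add for the same peer.
    fn remove(&mut self, a: &Multiaddr) {
        self.peers.remove(a.peer);
    }
    // Fixed path: replace the whole set atomically, keyed by PeerId.
    fn set_reserved_peers(&mut self, peers: HashSet<PeerId>) {
        self.peers = peers;
    }
    fn contains(&self, p: PeerId) -> bool {
        self.peers.contains(p)
    }
}

fn main() {
    let old = Multiaddr { peer: "validator-1", addr: "/ip4/1.2.3.4/tcp/30333" };
    let new = Multiaddr { peer: "validator-1", addr: "/ip4/5.6.7.8/tcp/30333" };
    let _ = (old.addr, new.addr);

    // Old scheme: on each update, add the current addresses, then
    // remove the stale ones. When the validator changed address ...
    let mut set = ReservedPeers::default();
    set.add(&old);
    set.add(&new); // ... the update adds the new entry ...
    set.remove(&old); // ... and removing the old address drops the PeerId.
    assert!(!set.contains("validator-1")); // peer lost despite being wanted

    // New scheme: hand the network the complete desired set of PeerIds,
    // so a changed address can never knock a peer out of the set.
    let mut fixed = ReservedPeers::default();
    fixed.set_reserved_peers(["validator-1"].into_iter().collect());
    assert!(fixed.contains("validator-1"));
}
```

The design point is that diff-based add/remove keyed on addresses conflates peer identity with reachability; replacing the set wholesale, keyed on `PeerId`, makes the update idempotent regardless of address churn.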

eskimor commented Nov 25, 2021

Seems to be resolved.

@eskimor eskimor closed this as completed Nov 25, 2021