
[stable 2409] Backport #5741 #6110

Closed
wants to merge 2 commits into from

Conversation

niklasad1
Member

Backport #5741

Close #5589

This PR makes it possible for `rpc_v2::Storage::query_iter_paginated` to
be "backpressured", which is achieved by sending each result back over a
bounded channel and pausing the iteration whenever that channel is full.

`chainHead_follow` has an internal channel which doesn't represent the
actual connection, and its capacity is set to a very small number (16).
Recall that the JSON-RPC server has a dedicated buffer for each connection,
64 by default.
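
Below is a minimal sketch of the backpressure pattern described above, assuming a bounded tokio channel; the item type, capacity, and function names are illustrative, not the PR's actual code.

```rust
// Sketch: results are pushed into a bounded channel, and because `send`
// awaits when the channel is full, the producer pauses iteration until the
// consumer catches up instead of buffering without bound.
use std::time::Duration;
use tokio::sync::mpsc;

async fn produce_storage_items(tx: mpsc::Sender<u64>) {
    for item in 0..1_000u64 {
        // `send` resolves only once there is capacity; a slow consumer
        // therefore throttles the producer.
        if tx.send(item).await.is_err() {
            break; // the consumer went away, stop iterating
        }
    }
}

#[tokio::main]
async fn main() {
    // Small capacity, analogous to the internal chainHead_follow buffer of 16.
    let (tx, mut rx) = mpsc::channel(16);
    tokio::spawn(produce_storage_items(tx));

    while let Some(item) = rx.recv().await {
        // Simulate a slow consumer (e.g. a slow JSON-RPC connection).
        tokio::time::sleep(Duration::from_millis(1)).await;
        let _ = item;
    }
}
```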

- Because `archive_storage` also depends on
`rpc_v2::Storage::query_iter_paginated`, I had to tweak the method to
support limits as well. The reason is that `archive_storage` won't get
backpressured properly because it's not a subscription. (It would be much
easier if it were a subscription in the RPC v2 spec, because there is
nothing wrong with querying a huge number of storage keys.)
- `query_iter_paginated` doesn't necessarily return the storage "in
order". For example,
`query_iter_paginated(vec![("key1", hash), ("key2", value)], ...)`
could return the results in arbitrary order because the queries are wrapped
in `FuturesUnordered` (see the sketch after this list). I could change that
if we want to process them in order, but it's slower.
- There is technically no limit on the number of storage queries in each
`chainHead_v1_storage` call other than the RPC max message limit, which is
10MB, and at most 16 concurrent `chainHead_v1_x` calls are allowed
(this should be fine).
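
As referenced in the second bullet, here is a minimal, hypothetical sketch (not the PR's code) of why results come back in completion order rather than submission order when queries are wrapped in `FuturesUnordered`:

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use std::time::Duration;

#[tokio::main]
async fn main() {
    let mut queries = FuturesUnordered::new();
    for (key, delay_ms) in [("key1", 30u64), ("key2", 10), ("key3", 20)] {
        queries.push(async move {
            // Pretend each storage query takes a different amount of time.
            tokio::time::sleep(Duration::from_millis(delay_ms)).await;
            key
        });
    }

    // Likely prints key2, key3, key1: completion order, not submission order.
    while let Some(key) = queries.next().await {
        println!("finished {key}");
    }
}
```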

- Iterate over 10 accounts on westend-dev -> ~2-3x faster
- Fetch 1024 storage values (i.e., not descendant values) -> ~50x faster
- Fetch 1024 descendant values -> ~500x faster

The reason for this, as Josep explained in the issue, is that only five
storage items are allowed per call, so clients have to make lots of calls
to drive the iteration forward.

---------

Co-authored-by: command-bot <>
Co-authored-by: James Wilson <[email protected]>
niklasad1 added the A3-backport (Pull request is already reviewed well in another branch.) label on Oct 17, 2024

This pull request is amending an existing release. Please proceed with extreme caution,
so as not to impact downstream teams that rely on its stability. Some things to consider:

  • Backports are only for 'patch' or 'minor' changes. No 'major' or other breaking change.
  • Should be a legit fix for some bug, not adding tons of new features.
  • Must either be already audited or not need an audit.
Emergency Bypass

If you really need to bypass this check: add validate: false to each crate
in the Prdoc where a breaking change is introduced. This will release a new major
version of that crate and all its reverse dependencies and basically break the release.

@paritytech-cicd-pr

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable 3/3
Logs: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7634670

 }
 }

 impl<Client, Block, BE> ChainHeadStorage<Client, Block, BE>
 where
 	Block: BlockT + 'static,
 	BE: Backend<Block> + 'static,
-	Client: StorageProvider<Block, BE> + 'static,
+	Client: StorageProvider<Block, BE> + Send + Sync + 'static,
Member Author


Argh, a breaking change. Normally this will just be the Substrate client, which is already `Send + Sync`, but this is probably a blocker for backporting then?
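
For context, a minimal, hypothetical sketch (simplified types, not the real `ChainHeadStorage`) of why tightening a public generic bound to require `Send + Sync` is a breaking change:

```rust
use std::rc::Rc;

struct Storage<C> {
    client: C,
}

// Old bound (accepted any clonable client):
//     impl<C: Clone> Storage<C> { fn query(&self) {} }
//
// New, tighter bound:
impl<C: Clone + Send + Sync> Storage<C> {
    fn query(&self) {}
}

fn main() {
    // An `Rc`-based client is `Clone` but neither `Send` nor `Sync`, so code
    // like this compiled against the old bound but the call below no longer does.
    let storage = Storage { client: Rc::new(()) };
    // storage.query(); // error[E0277]: `Rc<()>` cannot be sent between threads safely
    let _ = storage;
}
```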

@niklasad1
Member Author

Not possible to backport because it's a breaking change, closing.

niklasad1 closed this on Nov 14, 2024
Labels
A3-backport Pull request is already reviewed well in another branch.