
Storage Items Skipped #599

Closed
Dinonard opened this issue Dec 20, 2023 · 4 comments · Fixed by #641

Comments

@Dinonard
Contributor

I was testing an extrinsic which executes some storage migration, and noticed that lots of storage items seem to be ignored.
The main logic of the call looks like this:

    match OldLedger::<T>::drain().next() {
        Some((key, old_entry)) => { /* migrate or delete the entry here */ }
        None => { /* nothing left to process */ }
    }

OldLedger is just a simple storage map, and the above snippet is called in a loop - so many entries are deleted/migrated per call.
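
Roughly, the full step-limited loop looks like this (a sketch only; NewLedger, migrate_entry, and max_steps are placeholder names, not the actual Astar code):

    // Sketch only: NewLedger, migrate_entry and max_steps are hypothetical
    // placeholders, not the actual Astar implementation.
    let mut steps = 0u32;
    while steps < max_steps {
        // Each drain() builds a fresh iterator; taking next() removes exactly
        // one OldLedger entry from storage as a side effect.
        match OldLedger::<T>::drain().next() {
            Some((account, old_entry)) => {
                NewLedger::<T>::insert(&account, migrate_entry(old_entry));
                steps += 1;
            }
            None => break, // storage is empty, migration is complete
        }
    }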

When I test this using try-runtime, there are around 21k entries.
But when I test it with chopsticks, the migration completes in far too few extrinsic calls
because many items seem to be omitted: only a few hundred entries get migrated, and the number is different each time I redo the migration.

It might be that this is a known shortcoming, but I checked the existing issues and didn't find anything covering this.

@xlc
Member

xlc commented Dec 22, 2023

Would you be able to produce a wasm or something so we can reproduce the issue?

@Dinonard
Contributor Author

Of course!

  • Please use shibuya-118 from Astar's v5.28.0 release
  • Shibuya was upgraded yesterday, so I'd suggest fetching the state from block 5335600
  • Use the extrinsic dappStakingMigration -> migrate(None)

When running the migration yesterday, it took hours for all the steps to complete, with roughly 21k entries to migrate and 130k to delete. But when using the chopsticks fork, it takes only a few calls to complete everything.

@ermalkaleci
Collaborator

This is a limitation of chopsticks: in this scenario you can't drain more than 1000 keys. This is a low-priority issue because you really shouldn't use chopsticks to drain your entire storage; it would take forever. What you should do instead is clearPrefix OldLedger and insert some entries for testing.
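
The same idea expressed as runtime-level Rust rather than a chopsticks storage override (a sketch only; new_test_ext, account, and ledger are placeholder helpers from a mock runtime):

    // Sketch assuming a mock runtime `Test` with the usual test externalities.
    new_test_ext().execute_with(|| {
        // Clear all OldLedger entries up front instead of draining live state.
        let _ = OldLedger::<Test>::clear(u32::MAX, None);
        // Seed a small, known set of entries to exercise the migration logic.
        for i in 0..10u32 {
            OldLedger::<Test>::insert(account(i), ledger(i));
        }
        assert_eq!(OldLedger::<Test>::iter().count(), 10);
    });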

@Dinonard
Contributor Author

Thanks for the explanation and the link, I'll keep it in mind for the future. 👍

The goal of my test was to let the migration script run for some time, just to confirm everything works as I expect.
I only raised the issue after seeing the discrepancy between try-runtime testing and the number of entries migrated/deleted.

Is the 1000 limit related to how many storage items can be read/written during a single call? Or is it related to a particular storage item (e.g. a map)?

@ermalkaleci ermalkaleci linked a pull request Jan 23, 2024 that will close this issue