[BUG] - dumping ledger state takes humungous amounts of memory #3691
This has been asked by the community for a long time. It would be best to be able to query specifics from the ledger state dump, e.g. only the go snapshot. I have heard of some technical limitations with it, but we need the specific queries sooner: this dump's memory consumption is skyrocketing already.
Closing this. If this is still relevant, please reopen.
It is still the same issue. Did I miss a PR that updates this?
@Jimbo4350 Please re-open this one. It's still a major issue.
@newhoggy do any of your open PRs address this issue? If they do, please link the PR here.
Just a nice FYI... the ledger state dump in 1.35.4 on mainnet just went over the size that can be held in a normal integer. If you're parsing the ledger state CBOR this way, please check your code. I'm not sure how much time they're going to give us before 1.35.4 is pushed out as a hard-fork requirement.
I suspect this won't be specific to 1.35.4 (the state might be a little smaller in 1.35.3)? If you're using
@kevinhammond The state is slightly smaller in 1.35.3, as it hasn't broken there (yet). Given that we're actively upgrading to 1.35.4 on mainnet, I'll have to implement a workaround soon. Right now, the solution is to implement arrays of arrays in the Google CBOR library I'm using. It's painful, but it's the only option I have for now, since unsigned indexes aren't allowed in JVM languages. We really do need piecemeal queries for all this stuff. I believe db-sync is still using this monolithic ledger state dump as well.
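The "arrays of arrays" workaround mentioned above can be sketched roughly as follows. This is an illustrative Python sketch, not the commenter's JVM code: the idea is simply that a payload longer than a signed 32-bit length can hold gets split into chunks that each fit.

```python
# Hypothetical sketch of the "arrays of arrays" workaround: split a
# payload that would overflow a signed 32-bit length into chunks that
# each fit. Names and sizes here are illustrative, not from the thread.
INT32_MAX = 2**31 - 1   # 2,147,483,647 -- the JVM array-index ceiling
CHUNK_SIZE = 2**30      # 1 GiB chunks, safely under that ceiling

def chunk_bytes(payload: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split `payload` into consecutive slices of at most `chunk_size` bytes."""
    return [payload[i:i + chunk_size]
            for i in range(0, len(payload), chunk_size)]
```

Concatenating the chunks back together reproduces the original payload, so the split is lossless; only the indexing changes.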
1.35.3 was already over the 2G limit when I tested yesterday. The Python cbor2 library still parses it just fine.
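For parsers that do break, the failure mode described above is a file whose byte offsets no longer fit a signed 32-bit integer. A hedged sketch of a guard one might add before handing the dump to such a decoder (the path name and function are illustrative, not from any real tool):

```python
# Hypothetical guard: check whether every byte offset in the dump still
# fits a signed 32-bit integer (the "normal integer" from the thread).
import os

INT32_MAX = 2**31 - 1  # 2,147,483,647 bytes, roughly 2 GiB

def fits_in_int32(path: str) -> bool:
    """True if the file's size, and hence every offset into it,
    fits a signed 32-bit int."""
    return os.path.getsize(path) <= INT32_MAX
```

Decoders that track offsets in 64-bit integers (as Python's cbor2 does) are unaffected; the check only matters for 32-bit-indexed consumers.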
Is querying the entire I understand that the In which case, please track this issue: #4140
I ran this on
We also require querying stakeGo and stakeMark from the ledger state. Note that we need the full stakeMark and stakeGo snapshots, not only the stake amount per pool id.
I think the major use case for this was indeed the stake snapshot. So until complete equivalent ways to fetch this data from the node are available, downstream solutions that require those features will unfortunately have to depend on ledger-state (even if it's supposed to be used only for debugging) 🙂
@rdlrt Can you create new tickets, one for each of the queries that are needed, so we no longer have to rely on ledger-state?
@ashisherc does this meet your needs? #4279
@newhoggy thanks for the review, but that's not what I meant. As I mentioned in my previous comment, we rely on the full stakeGo/stakeMark snapshot, which means not just pool info; we also need
@ashisherc @rdlrt @AndrewWestberg Can I get your input as users here, please? --> #4982
Yes please. Dumping the ledger state is not something that can easily be optimised, so it's best if we create feature requests for the queries that return the parts of the ledger state that people need.
I propose that this issue be closed and new FRs be created to track each new query.
Created new issue #4984 as requested. I did not split it further, but feel free to split it if desired.
It's epoch 464, and the CBOR binary version of the ledger state that I need to parse each epoch has reached 2.26 GB.
Internal
Internal if an IOHK staff member.
Area
Other: Any other topic (Delegation, Ranking, ...).
Summary
Dumping the ledger state on macOS takes a humungous amount of memory.
Steps to reproduce
On a Mac, start a cardano-node instance, then use cardano-cli to dump the ledger state on mainnet.
Observe cardano-node taking ~12G of memory, and cardano-cli another ~30G.
Expected behavior
Hopefully stay within available system memory.