Measure which trie keys are the biggest contributors to the state witness size.
Basically we need a metric (trie key first byte, shard id) -> u64. The easy way to measure it is to add (2 * key.len() + value.len()) at each Trie::get (the factor of 2 accounts for nibbles). Alternatively, we could add how many bytes each new key adds to the current state witness size, but the first approach already looks like a good approximation. This can help us see why we often hit the state witness size limit: https://nearone.grafana.net/goto/FaAFtMUIg?orgId=1
In the good case, if the size turns out to be dominated by contract code, we should do #11099 after that. Otherwise we should look for other ways to improve throughput.
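A minimal sketch of what the counter could look like, using the plain prometheus crate directly (the metric name, label names, and the hook point in Trie::get are assumptions; nearcore's own metric helpers may differ):

```rust
use once_cell::sync::Lazy;
use prometheus::{register_int_counter_vec, IntCounterVec};

// Hypothetical metric: (trie key first byte, shard id) -> accumulated bytes.
static STATE_WITNESS_KEY_BYTES: Lazy<IntCounterVec> = Lazy::new(|| {
    register_int_counter_vec!(
        "near_state_witness_key_bytes", // assumed name
        "Approximate state witness bytes per trie key type and shard",
        &["trie_key_first_byte", "shard_id"]
    )
    .unwrap()
});

/// Hypothetical hook called from Trie::get: records an approximation of how
/// many witness bytes this lookup contributes.
fn record_witness_bytes(key: &[u8], value: &[u8], shard_id: u64) {
    // 2 * key.len() because the trie stores keys as nibbles,
    // plus the size of the value itself.
    let bytes = 2 * key.len() as u64 + value.len() as u64;
    let first_byte = key.first().copied().unwrap_or(0);
    STATE_WITNESS_KEY_BYTES
        .with_label_values(&[&first_byte.to_string(), &shard_id.to_string()])
        .inc_by(bytes);
}
```

Grouping by the key's first byte is enough to attribute bytes to a TrieKey variant (account, contract code, contract data, etc.) without parsing the full key.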
Add a metric measuring how many bytes are contributed by state items (2 * key.len() + value.len()) vs. inner trie items (everything else, basically the sizes of all CryptoHashes). This gives the Merkle trie overhead.
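A sketch of the state-vs-overhead split under the same assumptions (metric and function names are hypothetical; the per-node accounting would happen wherever the trie walk touches inner nodes):

```rust
use once_cell::sync::Lazy;
use prometheus::{register_int_counter_vec, IntCounterVec};

static STATE_WITNESS_BYTES_BY_KIND: Lazy<IntCounterVec> = Lazy::new(|| {
    register_int_counter_vec!(
        "near_state_witness_bytes_by_kind", // assumed name
        "State witness bytes split into state items vs trie structure overhead",
        &["kind", "shard_id"]
    )
    .unwrap()
});

const CRYPTO_HASH_SIZE: u64 = 32;

/// For a leaf lookup: account the state item itself.
fn record_state_item(key: &[u8], value: &[u8], shard_id: &str) {
    let bytes = 2 * key.len() as u64 + value.len() as u64;
    STATE_WITNESS_BYTES_BY_KIND
        .with_label_values(&["state", shard_id])
        .inc_by(bytes);
}

/// For each inner trie node touched on the path: account its child hashes.
fn record_trie_overhead(num_child_hashes: u64, shard_id: &str) {
    STATE_WITNESS_BYTES_BY_KIND
        .with_label_values(&["trie_overhead", shard_id])
        .inc_by(num_child_hashes * CRYPTO_HASH_SIZE);
}
```

Comparing the two series per shard directly shows how much of the witness is Merkle structure rather than actual state.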