Flat State Gas Costs (Read FS only) #8006

Closed
6 tasks done
Tracked by #8550 ...
jakmeier opened this issue Nov 7, 2022 · 16 comments

@jakmeier (Contributor) commented Nov 7, 2022

Storage operations will need new gas parameters post flat state.
We have to evaluate flat state performance from several perspectives to set them.

This issue description summarizes the results in one place and is edited with the newest results as data is collected.
Detailed data and discussion go into the comments.

What is the theoretical performance expected from FS?

  • DONE: 2 * DISK_LATENCY (DISK_LATENCY could be anywhere between 50 Ggas and 1100 Ggas)

We expect one FlatState and one State lookup for every read. Assuming 100us per DB access, this results in 0.2 Tgas total cost. However, RPC nodes and especially archival nodes are significantly slower for the State column. Based on that, we should rather go with 2 * 1100us. But if we keep relying on good trie node caches, as we do today, then 0.2 Tgas total cost on average is still the expectation.
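As a sanity check on that arithmetic, here is a minimal sketch, assuming the usual convention that 1 Tgas corresponds to roughly 1 ms of compute (i.e. 1 us ≈ 1 Ggas); the latency figures are the ones quoted above, not measured here:

```rust
/// Rough conversion: 1 Tgas ~= 1 ms of compute, so 1 us ~= 1 Ggas.
const GGAS_PER_US: u64 = 1;

/// Expected read cost after flat state: one FlatState lookup to get the
/// value reference, plus one State lookup to fetch the value itself.
fn expected_read_cost_ggas(disk_latency_us: u64) -> u64 {
    2 * disk_latency_us * GGAS_PER_US
}

fn main() {
    // Optimistic case: ~100us per DB access => 200 Ggas = 0.2 Tgas per read.
    assert_eq!(expected_read_cost_ggas(100), 200);
    // Pessimistic case (slow State column, e.g. archival nodes): ~1100us => 2.2 Tgas.
    assert_eq!(expected_read_cost_ggas(1100), 2200);
}
```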

What is the observed latency for DBCol::FlatState reads and how does it compare to DBCol::State?

  • DONE: DBCol::FlatState is faster than State, sitting around 50us +/- 20us

On RPC nodes, we see about 20 times faster latencies on average, and 4 times faster on the 99th percentile (= uncached reads).
In absolute terms, the average is below 100us but the 99th percentile is above 1000us. The block cache really matters here.

For archival nodes, FS latency is basically the same. Compared to State, it is between 50 and 100 times faster on average and at the 99th percentile, on both testnet and mainnet.

Absolute latencies are around 50us, +/- 20us.

What are additional DB costs of FS, aside from the DBCol::FlatState lookup?

  • DONE: No DB overheads to be worried about

GOOD: In normal operation mode, even with high load and cold caches, all "constant" overhead seems to be small enough.

OPEN QUESTION: How bad is the overhead when flat head lags behind? In terms of DB load, it doesn't change because we keep everything in memory. But we should measure the CPU overhead in the estimator framework.

When running in an isolated setup integrated in the runtime-params-estimator, what is the read latency?

  • DONE: The normal latency is small (<10us) and dominated by State reads, not FlatState reads.
  • DONE: But when we add 50 deltas, the time increases by 40us, which we should add to expected read costs.

In summary, what are the suggested new storage cost parameters?

  • DONE:
| Parameter | Gas cost (old & new) | Compute cost |
| --- | --- | --- |
| `wasm_touching_trie_node` | 16_101_955_926 | 110us (factor 6.875) |
| `wasm_storage_write_base` | 64_196_736_000 | 200us (factor 3.12) |
| `wasm_storage_remove_base` | 53_473_030_500 | 200us (factor 3.74) |
| `wasm_storage_read_base` | 56_356_845_750 | 200us (factor 3.55) |
| `wasm_storage_has_key_base` | 54_039_896_625 | 200us (factor 3.70) |
@jakmeier jakmeier added A-storage Area: storage and databases A-params-estimator Area: runtime params estimator T-storage labels Nov 7, 2022
@jakmeier jakmeier self-assigned this Nov 7, 2022
@jakmeier (Contributor Author) commented Nov 7, 2022

Theory: 200 Ggas base cost

FS replaces the get_ref part of a normal lookup, which usually has to traverse the trie. The direct mapping stored in FlatState allows reading it in only one DB lookup. In case of forks, we might have to apply some deltas. But since we keep them in memory, we assume that the disk DB accesses are the dominant cost factor.

With this assumption, we expect one FlatState and one State lookup for every read. Assuming a persistent SSD that delivers 10k IOPS, we think a DB request on FlatState should be handled within 100us.

For State, we know that keys are not sorted and we also know that archival databases can have huge amounts of data in this column. In practice, requests can easily take 1-2ms. But note that this is already the case today with a base cost that corresponds to only 56us + TTN cost of 16us. And still archival nodes can keep up. This is because most reads never go to the database but are cached instead.

Assuming we keep the same caching behavior as today, we can therefore safely replace the old cost for reading the final value (56us + 16us) with a new cost of 100us.
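For illustration, a minimal sketch of that read path; the types and the in-memory maps standing in for the two RocksDB columns are hypothetical, not the actual nearcore API:

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for nearcore's value reference (hash of the value).
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ValueRef([u8; 32]);

/// Hypothetical stand-in for the two DB columns involved in a read.
struct Store {
    /// DBCol::FlatState: trie key -> ValueRef; one lookup replaces the trie walk.
    flat_state: HashMap<Vec<u8>, ValueRef>,
    /// DBCol::State: ValueRef -> value bytes.
    state: HashMap<ValueRef, Vec<u8>>,
}

impl Store {
    /// Post-flat-state read: FlatState replaces the old `get_ref` trie
    /// traversal (1st DB access), then State dereferences the value (2nd).
    fn get(&self, key: &[u8]) -> Option<&Vec<u8>> {
        let value_ref = self.flat_state.get(key)?;
        self.state.get(value_ref)
    }
}

fn main() {
    let vr = ValueRef([0u8; 32]);
    let store = Store {
        flat_state: HashMap::from([(b"some/trie/key".to_vec(), vr)]),
        state: HashMap::from([(vr, b"value".to_vec())]),
    };
    assert_eq!(store.get(b"some/trie/key"), Some(&b"value".to_vec()));
}
```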

For those within Pagoda, additional analysis on this cost for Sweatcoin traffic specifically can be found in this private document. (If someone outside Pagoda requires access, ping me on Zulip.)

@jakmeier (Contributor Author) commented Nov 7, 2022

Practical observations: In Progress

DB column read latency

| | RPC avg | RPC under load | RPC 99th percentile | Archival avg | Archival under load | Archival 99th percentile |
| --- | --- | --- | --- | --- | --- | --- |
| mainnet / State | *1100us | *1310us | *4630us | **4'500us | **5'500us | **95'000us |
| mainnet / FlatState | *78us | *72.2us | *1290us | **30us | **40us | **2'000us |
| testnet / State | todo | todo | todo | *2880us | n/a | *63'400us |
| testnet / FlatState | todo | todo | todo | *50us | n/a | *650us |
| max | todo | todo | todo | todo | todo | todo |

get_ref latency

| | RPC avg | RPC under load | RPC 99th percentile | Archival avg | Archival under load | Archival 99th percentile |
| --- | --- | --- | --- | --- | --- | --- |
| mainnet / State | *204us | *119us | *5980us | todo | todo | todo |
| mainnet / FlatState | *50.6us | *66.4us | *1020us | todo | todo | todo |
| testnet / State | todo | todo | todo | *286us | n/a | *3150us |
| testnet / FlatState | todo | todo | todo | *39us | n/a | *365us |
| max | todo | todo | todo | todo | todo | todo |

* Measured in November 2022
** Measured in February 2023

@jakmeier (Contributor Author)

Overhead of FlatState in normal operation

Regarding additional costs, aside from the DBCol::FlatState lookup, I investigated a few red herrings. But there are also concerns regarding FlatStateMisc and more importantly FlatStateDeltas.

I'll describe my findings, starting with the most important one.

FlatStateDeltas

Testing the Sweatcoin load with cold caches, I didn't observe ANY reads from FlatStateDeltas. That's good: we keep deltas in memory, so we should only need to read them on startup.

Also good, I did see deltas being written and deleted.

```
# count of FlatStateDeltas accesses over ~15 mins
   8408 DELETE FlatStateDeltas
   4204 SET FlatStateDeltas
```

This looks like we set one delta per chunk and delete each one twice. That seems odd; I'm not sure why we delete each delta twice. I'll look into that, but for performance it doesn't really matter.

The delta sizes were also not too bad. The average size was 7049.9 B.
The 5 largest delta sizes observed were:

  • 156583B
  • 167160B
  • 167490B
  • 179276B
  • 191687B

FlatStateMisc

This column is accessed to read and update the flat head. Over a period of ~15 minutes, I saw four keys each being set about 1k times. That's once per chunk, as expected.

```
# count of FlatStateMisc accesses and the keys over ~15 min
# grep FlatStateMisc feb27_mainnet_fsandtrie_withfscache.io_trace | sort | uniq --count
1051   SET FlatStateMisc "SEVBRAAAAAAAAAAA" size=32
1051   SET FlatStateMisc "SEVBRAEAAAAAAAAA" size=32
1051   SET FlatStateMisc "SEVBRAIAAAAAAAAA" size=32
1051   SET FlatStateMisc "SEVBRAMAAAAAAAAA" size=32
```

In terms of performance, this should be negligible.

Red herrings

  • The balance checker reading potentially postponed receipts generates a bunch of DB reads for nothing, which show up as FlatState reads. These were also present for normal State, so it's not a degradation.
  • Overall, the number of FlatState reads is higher than the number of State reads when we look at mainnet traffic. That's due to Sweatcoin dominance and the huge trie cache we have. In a "normal" case, it would not be.

near-bulldozer bot pushed a commit that referenced this issue Feb 28, 2023
Based on @jakmeier's estimations, we need to cache `ValueRef`s for the flat storage head (see #8006). RocksDB's internal impl and block cache don't help, and we need to make flat storage performance at least comparable to trie performance in the MVP, in order not to make the undercharging issue worse.

This cache lives inside `FlatStorageState`, is accessed in `get_ref` before attempting to read the value ref from the flat storage head, and must be updated when we apply a delta.

I think it makes sense to make the cache capacity configurable, and this config fits into `StoreConfig`. I don't like that it is propagated to `FlatStorageState` from `ShardTries`; it mixes trie storage and flat storage even more. Perhaps it should be fully moved inside `FlatStateFactory`, but I am not sure.

## Testing

* extend `flat_storage_state_sanity` to check both cached and non-cached versions;
* `flat_storage_state_cache_eviction` to check that eviction strategy is applied correctly.
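For context, a minimal sketch of the kind of per-shard cache that commit describes, written against recent versions of the `lru` crate; the capacity, key, and value types here are illustrative assumptions, not the actual nearcore `StoreConfig` or `FlatStorageState` API:

```rust
use lru::LruCache;
use std::num::NonZeroUsize;

/// Illustrative stand-in for nearcore's ValueRef.
#[derive(Clone, Debug, PartialEq)]
struct ValueRef([u8; 32]);

/// Cache in front of the flat storage head: hot trie keys skip the
/// DBCol::FlatState lookup entirely. `None` caches "key is absent".
struct FlatStateValueCache {
    cache: LruCache<Vec<u8>, Option<ValueRef>>,
}

impl FlatStateValueCache {
    fn new(capacity: usize) -> Self {
        let cap = NonZeroUsize::new(capacity).expect("cache capacity must be > 0");
        Self { cache: LruCache::new(cap) }
    }

    /// Checked in `get_ref` before reading the flat storage head from the DB.
    fn get(&mut self, key: &[u8]) -> Option<Option<ValueRef>> {
        self.cache.get(key).cloned()
    }

    /// Must be called for every key touched when a delta is applied,
    /// otherwise the cache would serve stale value refs.
    fn update(&mut self, key: Vec<u8>, value_ref: Option<ValueRef>) {
        self.cache.put(key, value_ref);
    }
}

fn main() {
    let mut cache = FlatStateValueCache::new(100);
    cache.update(b"k".to_vec(), Some(ValueRef([1; 32])));
    assert_eq!(cache.get(b"k"), Some(Some(ValueRef([1; 32]))));
}
```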
@Longarithm (Member)

Which branch do you use for testing?

In the past I may have recommended master...fs-0209-base-test as the most configurable branch, though we don't need it anymore because we are not going to store items in deltas separately. Anyway, we use delete_range there for delta removals - how is that reflected in the DB traces? I doubt that it splits into single delete operations.

@jakmeier (Contributor Author)

I was using master from last Friday with just the FS cache added. Basically what we have in master now plus #8540, unless I missed some FS PRs since then.

Do you have important changes on branches outside of master?

@Longarithm (Member)

No, master with the cache is great. Then I don't see why there are exactly 2x deletions; they are triggered only once in `store_helper::remove_delta`, and the logic there is more or less clear.

@walnut-the-cat walnut-the-cat added T-contract-runtime Team: issues relevant to the contract runtime team and removed T-storage labels Mar 13, 2023
@jakmeier (Contributor Author)

Finishing this issue and providing a final proposal for how expensive the new storage requests should be is my current focus.

  • We already have all the theoretical data and the latencies from mainnet & testnet nodes.
  • The numbers so far show that assuming a 0% trie node cache hit rate as the worst case is not feasible. (It would require a compute cost of >1 Tgas for a single read or write.) But with a moderately conservative assumption we can probably find a good middle path. Before I do that, I want to do the last step:
  • We still need to measure overhead of looking up many deltas in case of lagging finality.

Expect a (final) cost proposal from my side by tomorrow, or at the latest, the day after.

@jakmeier (Contributor Author)

I'm falling behind on this a bit, sorry.

The good news is that I already have the estimator integration; I just need to run it on the proper machine and summarize the results. If tomorrow I find slightly more time between meetings than today, then I will have the proposed costs ready tomorrow.

@jakmeier (Contributor Author)

Using the estimator, I evaluated the overhead of in-memory delta processing. Here are the measurements:

| #deltas | #keys per delta | StorageReadBase [us] | StorageHasKeyBase [us] | StorageWriteBase [us] |
| --- | --- | --- | --- | --- |
| 0 | n/a | 8.358 | 7.068 | 9.21 |
| 10 | 50 | 17.559 | 16.574 | 9.279 |
| 50 | 50 | 46.952 | 46.642 | 9.58 |
| 100 | 50 | 88.91 | 89.029 | 9.944 |
| 1000 | 50 | 810.665 | 810.009 | 8.643 |
| 10 | 1000 | 14.412 | 15.906 | 9.295 |
| 50 | 1000 | 50.056 | 46.662 | 8.906 |
| 100 | 1000 | 99.19 | 90.93 | 9.49 |
| 1000 | 1000 | 823.11 | 821.63 | 8.256 |

This looks like a very clean linear increase in read time for each delta added, with a relatively small effect from the number of keys per delta. And, as expected, the write time does not change.

It's 0.8us per delta if we have 50 keys per delta. Or 0.81us per delta if we have 1000 keys per delta.

Our assumption for RAM usage was that 50 deltas is the maximum we support. Taking that for gas costs as well, we expect about a 40us increase per read for delta resolving in the worst case.
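That linear scaling matches the mechanism: for each read, the in-memory deltas between the block being processed and the flat head are scanned (newest first) before falling back to the DB, so every extra delta adds roughly one hash-map lookup. A minimal sketch of that resolution step, with illustrative names rather than the actual nearcore types:

```rust
use std::collections::HashMap;

type Key = Vec<u8>;
type ValueRef = [u8; 32];

/// Per-block delta kept in memory: key -> Some(new value ref) or None (deleted).
struct FlatStateDelta {
    changes: HashMap<Key, Option<ValueRef>>,
}

/// Resolve a key against the deltas between the target block and the flat head,
/// newest first. Each delta costs about one HashMap lookup (~0.8us measured
/// above), which is why read latency grows linearly with the number of deltas.
fn resolve(deltas: &[FlatStateDelta], key: &[u8]) -> Option<Option<ValueRef>> {
    for delta in deltas.iter().rev() {
        if let Some(change) = delta.changes.get(key) {
            return Some(*change); // answered from memory, no DB read for the ref
        }
    }
    None // untouched by any delta: fall through to the DBCol::FlatState read
}

fn main() {
    let delta = FlatStateDelta { changes: HashMap::from([(b"k".to_vec(), None)]) };
    assert_eq!(resolve(&[delta], b"k"), Some(None)); // deleted in the newest delta
    assert_eq!(resolve(&[], b"other"), None); // must be read from the flat storage head
}
```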

@jakmeier (Contributor Author) commented Mar 17, 2023

Cost options

We already decided that we want to keep gas parameters the same all around. But we need to add higher compute costs to compensate.

I want to suggest two ways of looking at it. Either we increase costs just enough to cover the difference between a flat state and a state lookup, or we set the costs to what we think they should be in absolute terms, looking only at the DB latencies of an RPC node.

Option 1: Cover only the additional cost of FlatState

Here we just add the 50us FlatState latency to every read base cost to compensate for the TTN that is no longer charged for them.

| Parameter | Gas cost (old & new) | Compute cost |
| --- | --- | --- |
| `wasm_touching_trie_node` | 16_101_955_926 | |
| `wasm_storage_write_base` | 64_196_736_000 | |
| `wasm_storage_remove_base` | 53_473_030_500 | |
| `wasm_storage_read_base` | 56_356_845_750 | 106us (factor 1.88) |
| `wasm_storage_has_key_base` | 54_039_896_625 | 104us (factor 1.92) |
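The compute numbers in that table follow from treating 1 Ggas as roughly 1 us and adding the ~50us FlatState latency on top of the old base costs; a quick check of that arithmetic (gas values copied from the table, nothing else assumed):

```rust
fn main() {
    // Old base costs from the table, in Ggas; 1 Ggas ~= 1 us of compute.
    let bases_ggas = [
        ("wasm_storage_read_base", 56.356_845_75),
        ("wasm_storage_has_key_base", 54.039_896_625),
    ];
    let flat_state_latency_us = 50.0;

    for (name, base_us) in bases_ggas {
        let compute_us = (base_us + flat_state_latency_us).round();
        // Prints ~106us (factor 1.88) and ~104us (factor 1.92).
        println!("{name}: {compute_us}us (factor {:.2})", compute_us / base_us);
    }
}
```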

Option 2: Set the costs to true costs

Here I take the measured 1100us access time from RPC nodes.

For trie nodes, I would assume 90% hit rate, which is worse than the measurements but certainly a possibility with nasty access patterns. But for the final value reads (covered by base costs) I would assume no cache hit rate.

The difference between reads and writes becomes negligible here, since writes do not block on IO.

| Parameter | Gas cost (old & new) | Compute cost |
| --- | --- | --- |
| `wasm_touching_trie_node` | 16_101_955_926 | 110us (factor 6.875) |
| `wasm_storage_write_base` | 64_196_736_000 | 1100us (factor 17.13) |
| `wasm_storage_remove_base` | 53_473_030_500 | 1100us (factor 20.57) |
| `wasm_storage_read_base` | 56_356_845_750 | 1100us (factor 19.518) |
| `wasm_storage_has_key_base` | 54_039_896_625 | 1100us (factor 20.36) |

But these factors are on the excessive side. Especially since we know that the network runs just fine with current costs.

Option 3: Balance the two approaches

I suggest we do the following:

  • take the TTN cost from option 2
  • bump all base costs to 200us (this covers all changes from option 1 and more)
| Parameter | Gas cost (old & new) | Compute cost |
| --- | --- | --- |
| `wasm_touching_trie_node` | 16_101_955_926 | 110us (factor 6.875) |
| `wasm_storage_write_base` | 64_196_736_000 | 200us (factor 3.12) |
| `wasm_storage_remove_base` | 53_473_030_500 | 200us (factor 3.74) |
| `wasm_storage_read_base` | 56_356_845_750 | 200us (factor 3.55) |
| `wasm_storage_has_key_base` | 54_039_896_625 | 200us (factor 3.70) |

I must admit, the 200us number is a bit arbitrary. We discussed 10k IOPS as a useful baseline before, which would imply 100us base cost. But actual benchmarks show that 100us is still quite optimistic for RocksDB with large state on a network attached SSD. One could also argue for 400us based on those results.

But 400us reaches the limits of what we can do before we seriously harm current mainnet users. Throughput would be visibly limited compared to today and gas prices would potentially start to increase. Just as an example, a Sweatcoin batch has about 30% of its WASM cost in storage base costs. With 400us, we would have a factor of 7 on base costs, which would mean 70% + 30% * 7 = 280% of the current compute cost. That in turn means 357 Tgas of their batches would fill an entire chunk and start causing congestion.
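For the record, the 280% and 357 Tgas figures come out of this back-of-the-envelope calculation (the 30% storage-base share is the Sweatcoin observation quoted above; 1000 Tgas is the per-chunk gas limit):

```rust
fn main() {
    let chunk_gas_limit_tgas = 1000.0; // current per-chunk gas limit
    let storage_base_share = 0.30; // share of a Sweatcoin batch's wasm cost in storage base costs
    let base_cost_factor = 7.0; // 400us compute vs. the current ~56us base cost

    // The 70% that is not storage base cost stays at 1x, the 30% is scaled by 7x.
    let compute_inflation = (1.0 - storage_base_share) + storage_base_share * base_cost_factor;
    println!("compute cost vs. gas: {:.0}%", compute_inflation * 100.0); // 280%

    // Gas-measured amount of such traffic that fills one chunk's compute budget.
    println!("{:.0} Tgas fills a chunk", chunk_gas_limit_tgas / compute_inflation); // ~357 Tgas
}
```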

@aborg-dev (Contributor)

Thank you for the detailed evaluation, Jakob!
I agree with your assessment that bumping compute costs to 400us will start noticeably affecting Sweatcoin's workload on shard 3.
Something around Option 3 sounds like a good compromise that does not degrade the user experience but limits undercharging by bringing costs closer to the worst-case scenario.

@walnut-the-cat (Contributor)

Naive question: are we trying to find the balance between 'not affecting contracts' and 'minimizing our cost damage as much as possible'? It is not super clear to me what baseline assumption we are making with 200us (e.g., it's still not a big enough number to cover the actual usage).

@jakmeier (Contributor Author)

Yeah, that's the balance we are trying to find. But the problem is that we don't know the true cost, or rather, the true cost is not well defined as a function of just the number of requests. One way to cover the "true full cost" would be to make worst-case assumptions and go with 1100us. But that's too much damage to users.

As I wrote in my previous comment, 200us does not come from any baseline assumption; it's a rather arbitrary point in the range of possible values we could consider.

@jakmeier (Contributor Author)

Today at the flat storage engineering meeting, we decided to move forward with option 3 for compute costs.

I will make sure to update these in the NEP and set the compute costs in nearcore in due time. But we have to finish the compute costs work first.

@Longarithm (Member)

Should we close the issue?

@jakmeier (Contributor Author)

Yes makes sense, I will close it now.

jakmeier added a commit to jakmeier/nearcore that referenced this issue Apr 18, 2023
This is a protocol feature, updating the storage cost as agreed in near#8006
to allow flat storage to be deployed without undercharging risks.
jakmeier added a commit to jakmeier/nearcore that referenced this issue Apr 20, 2023
near-bulldozer bot pushed a commit that referenced this issue Apr 21, 2023
near-bulldozer bot pushed a commit that referenced this issue Apr 21, 2023
The code literally removes `protocol_feature_flat_state` and moves the feature to the stable protocol. We also disable `test_state_sync`, as this is part of a refactor we can do in Q2.

## Feature to stabilize

Here we stabilize Flat Storage for reads, which means that all state reads in the client during block processing will query flat storage instead of the Trie. Flat Storage is another index over the blockchain state, reducing the number of DB accesses for a state read from `2 * key.len()` in the worst case to 2.

This will trigger background creation of flat storage, using 8 threads and finishing in 15h for an RPC node and 2d for an archival node. After that, all non-contract reads will go through flat storage, for which special "chunk views" will be created. When the protocol upgrade happens, contract reads will go through flat storage as well. Compute costs will also change as Option 3 suggests [here](#8006 (comment)). That change is to be merged separately, but we need to ensure that both the cost change and flat storage go into the next release together.

## Context

Find more details in:
- Overview: https://near.github.io/nearcore/architecture/storage/flat_storage.html
- Approved NEP: https://github.com/near/NEPs/blob/master/neps/nep-0339.md
- Tracking issue: #7327

## Testing and QA

* Flat storage structs are covered by unit tests;
* Integration tests check that chain behaviour is preserved and costs are changed as expected;
* Flat storage spent ~2 months in betanet with assertion that flat and trie `ValueRef`s are the same;
* We were running testnet/mainnet nodes for ~2 months with the same assertion. We checked that performance is not degraded, see e.g. https://nearinc.grafana.net/d/Vg9SREA4k/flat-storage-test?orgId=1&var-chain_id=mainnet&var-node_id=logunov-mainnet-fs-1&from=1677804289279&to=1678088806154 showing that even with a finality lag of 50 blocks performance is not impacted. A small exception is that we updated the data layout several times during development, but we checked that the results are unchanged.

## Checklist
- [x] Include compute costs after they are merged - #8924
- [x] https://nayduck.near.org/#/run/2916
- [x] Update CHANGELOG.md to include this protocol feature in the `Unreleased` section.
nikurt pushed a commit that referenced this issue Apr 25, 2023
nikurt pushed a commit that referenced this issue Apr 25, 2023
nikurt pushed a commit that referenced this issue Apr 25, 2023
nikurt pushed a commit that referenced this issue Apr 28, 2023
nikurt pushed a commit that referenced this issue Apr 28, 2023