spacebank: vat storage rental economics #2631
@dtribble pointed out that we should add the size of the vat's c-list to the rent, so that retaining references to off-vat things has a non-zero cost.
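A minimal sketch of what that might look like. The names and both rates are invented for illustration; nothing here is the actual billing code:

```js
// Hypothetical rent formula: charge for heap bytes plus a per-entry charge
// for the vat's c-list, so retained off-vat references cost something.
// Both rates below are made up.
const TOKENS_PER_BYTE_BLOCK = 1n;
const BYTES_PER_CLIST_ENTRY = 64n;

function rentPerBlock(heapBytes, clistEntries) {
  const chargedBytes = heapBytes + clistEntries * BYTES_PER_CLIST_ENTRY;
  return chargedBytes * TOKENS_PER_BYTE_BLOCK;
}
```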
@phoddie I thought perhaps snapshot / restore would reclaim garbage space but a small experiment suggests not. Is that by design? @warner you mentioned that the way XS doesn't free memory until the xsMachine goes away might have billing implications. That reminded me...
In a simple test, it doesn't reclaim (much) memory:
-- https://gist.github.com/dckc/07db03398ca408e457657aeba8773d54
Creating a snapshot does not force a garbage collection. Doing so would be observable (…). Garbage collection creates free space within the allocated XS heaps; it does not resize the XS heaps themselves. Consequently, the amount of host memory (…) in use does not shrink when garbage is collected.
I guess I had in mind defragmentation of XS data structures so that less OS heap was used, not JavaScript-level GC. In any case... another idea... Could …

@warner, if the contract programmer lets go of a bunch of objects but XS doesn't return the space to the OS, the validator still has to devote RAM to the contract. Is this a problem?
Sure. To get an idea of what is possible there, try a couple of things: …
More or less, the information shown in those two places could be propagated out to a script. Once we have an understanding of what you would find useful, we can think about how to provide it.
Dunno if I should butt in or not. Here goes though: … Feel free to mine this for ideas.
@FUDCo and I were talking today about the deterministic handling of space-usage metering. We're concerned about how the apparent memory usage of a vat could vary due to things beyond the declared deterministic inputs. These days, we're trying to be tolerant of organic (non-forced) GC, and to exclude the consequences of GC from metering. We're going to have an explicit … Our current thought is:
- The …
- The …
- The actual number of used slots will vary: when e.g. an object is created, and it attempts to pull a slot from the free list, it might claim an existing one, or it might find none and trigger organic GC, allowing it to reuse a reclaimed one.
- The memory usage reported by …
- The …
- If we establish a hard ceiling on …
- We also establish a hard ceiling on …

The colorful analogy that @FUDCo and I came up with was: …
The vat owner might want to trigger a new measurement, to bring their recorded usage (and thus their rent) back down after releasing a lot of memory.
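To make the shape of that concrete, here is a rough sketch under the assumptions above: usage is only sampled at deterministic points (after a forced GC), and rent accrues against the last recorded measurement. All names, `forceGCAndCountSlots` and `terminateVat` in particular, are invented:

```js
// Hypothetical per-vat space accounting: rent is billed against the last
// recorded post-GC measurement, never the instantaneous slot count, and a
// hard ceiling terminates runaway vats at a deterministic point.
const HARD_CEILING_SLOTS = 1_000_000; // made-up limit

function recordMeasurement(vat, measuredSlots) {
  if (measuredSlots > HARD_CEILING_SLOTS) {
    terminateVat(vat.vatID, 'space ceiling exceeded'); // invented helper
    return;
  }
  vat.lastMeasuredSlots = measuredSlots; // rent accrues against this value
}

// The vat owner can request a fresh measurement after dropping data,
// so the recorded usage (and thus the rent) goes back down.
function requestRemeasure(vat) {
  recordMeasurement(vat, forceGCAndCountSlots(vat.vatID)); // invented helper
}
```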
A couple of additional thoughts:
Incidentally, I found that XS tracks the number of "slots" in use, in …
In this approach, the reported heap count would not include the space consumed by the last bit of (liveslots) code execution, which is probably better than reporting the stable value (immediately after GC, before JS gets to run again) plus some extra number of allocations that depend precisely upon when the value gets sampled. To implement this, we'd want to modify …
Combine approaches a little bit? Have a flag on messages delivered to the worker process that says "I want …"
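A sketch of what that combined approach could look like, assuming a JSON-ish command protocol between kernel and worker. The `measureHeapAfterGC` flag, `forceGC`, and `currentSlotCount` are all invented for illustration; this is not the actual xsnap protocol:

```js
// Kernel side: attach a flag to the delivery asking the worker to force a
// GC and report the post-GC slot count alongside the meter results.
const delivery = {
  type: 'deliver',
  vatDeliveryObject: ['message', 'o+0', { method: 'poke', args: [] }],
  measureHeapAfterGC: true, // "I want a post-GC heap measurement"
};

// Worker side, after the delivery's user-level code has finished:
function finishCrank(request, meterResults) {
  const reply = { type: 'deliverResult', meterResults };
  if (request.measureHeapAfterGC) {
    forceGC(); // force a full collection at a deterministic point
    reply.heapSlotsAfterGC = currentSlotCount(); // stable post-GC value
  }
  return reply;
}
```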
Yeah, those are great questions.
As you say, … The metering results are not part of consensus by default: we copy specific values out and use them in consensus-critical ways, but extra values are ignored (we were ignoring …).
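For example, the kernel might copy out only the fields it has decided are consensus-critical and drop everything else. The field names here are illustrative, not the actual report format:

```js
// Only explicitly chosen meter fields feed into consensus state; any extra
// fields the worker reports are advisory and ignored for consensus purposes.
function extractConsensusMeters(meterResults) {
  const { compute, allocate } = meterResults; // consensus-critical subset
  return { compute, allocate }; // everything else is dropped
}
```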
I just ran across an interesting precedent in Storage Rent Economics | Solana Docs:
What is the Problem Being Solved?
Our metering plan includes counting allocations during a crank, and limiting them in the same way as (and perhaps sharing units with) computation. Code which doesn't take up much CPU, but does allocate a lot of memory, should be terminated before it can impact other vats/cranks.
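A hypothetical shape for that combined meter, with the names and the bytes-to-units exchange rate invented for illustration:

```js
// Hypothetical combined crank meter: computation and allocation draw from
// one shared budget, so allocation-heavy code is cut off just like
// CPU-heavy code.
const ALLOCATION_UNITS_PER_BYTE = 1; // made-up exchange rate

function makeCrankMeter(budget) {
  let remaining = budget;
  const charge = (units, reason) => {
    remaining -= units;
    if (remaining < 0) {
      throw RangeError(`crank budget exceeded (${reason})`);
    }
  };
  return {
    onCompute: units => charge(units, 'compute'),
    onAllocate: bytes => charge(bytes * ALLOCATION_UNITS_PER_BYTE, 'allocate'),
    remaining: () => remaining,
  };
}
```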
But this doesn't account for the long-term cost of storing a vat's static state. Each validator must spend disk on a copy of the vat state, for as long as that vat is active. There is also some (larger?) number of follower/archiving nodes which spend the disk but don't receive any of the execution fees (staking rewards). To enable efficient allocation of this relatively scarce resource, we'd like to subject it to a market mechanism.
Ethereum simulates this by charging a significant amount of "gas" to the transaction that increased storage needs: 20000 gas per non-zero SSTORE word (256 bits). At current average gas prices (maybe 100 Gwei/gas, and about $1600/ETH, so 160 u$/gas) this costs about $0.10 per byte stored. To capture the idea that freeing storage will reduce external costs, doing an SSTORE that makes the target become zero (compressed away by run-length encoding) will refund 15000 gas. This is a local refund: it reduces the gas consumption of the current transaction, but not past zero, and thus cannot actually increase the caller's balance. This feature is (ab)used by various "gas storage contracts" which fill storage with junk data when gas prices are low. Later, when a customer wants to execute an expensive transaction, they pay the storage contract to delete data within the same txn, and take advantage of the refund to fund their primary operation. While that's both a clever hack and a horrible pathology, it doesn't capture the time cost of the intermediate storage.
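The arithmetic behind that $0.10/byte figure, using the gas and ETH prices assumed above:

```js
// Reproducing the $0.10/byte estimate from the assumptions above.
const gasPerWord = 20000; // non-zero SSTORE, one 256-bit (32-byte) word
const bytesPerWord = 32;
const gweiPerGas = 100; // assumed average gas price
const usdPerEth = 1600; // assumed ETH price
const usdPerGas = gweiPerGas * 1e-9 * usdPerEth; // 1.6e-4 USD, i.e. 160 u$/gas
const usdPerByte = (gasPerWord / bytesPerWord) * usdPerGas;
console.log(usdPerByte.toFixed(2)); // ≈ 0.10 USD per byte stored
```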
We haven't put a lot of thought into this yet, but we'll need something, and it clearly calls out for a descendant of the KeyKOS "Space Bank" metering mechanism. Vats could be charged some number of tokens per `byte * second` (or byte-cranks or byte-blocks, some notion of spacetime). At the end of each crank, when we force GC (to promptly release unused objects), we can ask the XS engine for the size of the remaining heap, in bytes. We then nominally deduct that value from some counter each unit of "time".

Three big questions come out of this: who pays for it, when, and what happens when the counter underflows?
In one sense, the crank that caused the heap to grow should be responsible for the costs. But that crank doesn't know how long the space will remain allocated. Pre-paying for a subscription only makes sense when the payer is also benefiting from the subscription, and the crank which consumed the space might be operating on a different budget than the future one that benefits from it.
So it may be easier to think about if each vat is associated with a meter. Various parties could top up the meter, but each (nominal) block, the meter is decremented by the currently-used space. If the meter reaches zero, a strict interpretation would suggest the vat ought to be terminated immediately: no rent, no service, boom. In practice, this could be pretty traumatic, so it might work better to suspend operation of the vat (no deliveries) until someone refills the meter, and only delete the vat entirely if it remains unpaid for a significant amount of time.
An obvious optimization would be to record only the last-decremented-at time and the last-measured heap size. Then, when a message is about to be delivered to the vat, multiply the elapsed time by the heap size, decrement the meter by the product, update the last-decremented-at time, and possibly suspend the delivery if the meter underflows (see the sketch below). Once a month, trigger a sweep of all vats: update every meter, and terminate the ones that have remained underflowed since the previous sweep. We need to catch both active vats that are receiving messages and idle ones which have not been touched for a while.
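A sketch of that lazy accounting, with "time" taken to be block height. All names are invented, and `suspendVat`/`terminateVat` are hypothetical helpers:

```js
// Lazy rent collection: rather than decrementing every vat's meter every
// block, charge a vat only when it is touched, using elapsed blocks times
// the last-measured heap size.
function chargeRent(vat, meter, currentBlock) {
  const elapsedBlocks = currentBlock - vat.lastDecrementedAt;
  meter.remaining -= elapsedBlocks * vat.lastMeasuredHeapBytes; // byte-blocks
  vat.lastDecrementedAt = currentBlock;
  return meter.remaining >= 0;
}

function beforeDelivery(vat, meter, currentBlock) {
  if (!chargeRent(vat, meter, currentBlock)) {
    suspendVat(vat.vatID); // hold deliveries until someone refills the meter
  }
}

// The monthly sweep applies the same charge to idle vats, then terminates
// any vat whose meter has remained underflowed since the previous sweep.
function monthlySweep(allVats, meters, currentBlock) {
  for (const vat of allVats) {
    const ok = chargeRent(vat, meters.get(vat.vatID), currentBlock);
    if (!ok && vat.underflowedAtLastSweep) {
      terminateVat(vat.vatID);
    }
    vat.underflowedAtLastSweep = !ok;
  }
}
```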
Or, if the rates are low enough (and disk space is cheap enough that we can afford some coarse accounting), we don't do anything special per-crank. We just do the full sweep once a month, suspending or terminating vats when the "rent is due" and not in between.
cc @btulloh @dtribble @erights