Replace accounts data size limits with economics: transactions pay for allocations, like compute #27029
Comments
Starting this off as an issue, but can change this to a proposal if it seems reasonable. Edit: Proposal has been created here: #27575
cc @jstarry @taozhu-chicago @aeyakovenko @ryoqun @CriesofCarrots @joncinque
EDIT: I see now that we need additional fees here because there needs to be some cost for requested account data that isn't used.
I like the idea of using the same/similar market mechanism to handle fees (cost of compute) and account size (cost of storage). It is doable from an implementation perspective. My intuition is that it's safer to have upper limits, perhaps a limit per block? The first question off the top of my head is: will/can this replace rent altogether, as you already mentioned above? Would like to hear more discussion on that too.
Great!
Setting per-block limits that cause the whole block to fail was the main reason for writing up this issue. I address why I think limits are bad/insufficient in the "Failing the transaction or the block" section. If there's another way to do per-block limits, I'm all ears!
One nice fallout of rent now is that all new accounts must be rent-exempt. So creating new accounts causes users to lock up SOL. I think we could eventually get to a point where rent collection is removed, but the rent-exempt balance/lockup part could be kept. (This assumes the fee-to-allocate part of this issue is adopted as well.)
The general approach of a fee market makes a lot of sense.

Regarding the minimum amount of space that may be allowed in a transaction, do we have statistics about current allocations in transactions? Sorry, I tried looking through and couldn't quickly find any metrics in the code. We'll need to tailor some "reasonable" default that can balance every transaction using it without blowing up validators. We'll probably need to follow the same path as with disabling rent-paying accounts: talk to the biggest offenders about this change before rolling it out, and get them to use the new compute budget instructions early. Over time, we can slowly reduce that limit, since it'll probably be easy to game at first by sending many small alloc/realloc transactions. Hopefully the economics line up so that transaction fees won't make that viable.

You're right that we shouldn't special-case top-level callers to the system program. Even if we try to be clever about examining transactions, it's impossible to know how much a transaction will allocate, between CPIs to the system program and programs performing allocations themselves.

This will completely change how storage economics work on-chain, for the better, since it will create two fees to allocate space: a one-time allocation fee, and also a locked-up "fee" that the user can reclaim.
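The two-fee model described above can be sketched in a few lines. This is only an illustration, not an actual implementation: the allocation-fee rate is a made-up number, and the rent-exempt figure is approximate (real values come from the on-chain Rent sysvar).

```rust
/// Hypothetical one-time, non-refundable allocation fee (lamports per byte).
/// This constant is an assumption for illustration only.
const ALLOCATION_FEE_PER_BYTE: u64 = 10;

/// Approximate rent-exempt deposit per byte (lamports), reclaimable when the
/// account is closed. Real values come from the Rent sysvar, not a constant.
const RENT_EXEMPT_PER_BYTE: u64 = 6_960;

/// Returns (one_time_fee, reclaimable_deposit) for allocating `bytes` of
/// account data under this sketch of the two-fee model.
fn allocation_cost(bytes: u64) -> (u64, u64) {
    (
        bytes * ALLOCATION_FEE_PER_BYTE,
        bytes * RENT_EXEMPT_PER_BYTE,
    )
}
```

Under this split, only the first component is a true fee; the second is a lockup the user gets back, which preserves the "rent-exempt balance" incentive even if rent collection itself goes away.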
No, not yet. I can add code to report that data so we'll have metrics. May take some time to have representative numbers (i.e. mnb) though.
I agree.
That's the goal!
@brooksprumo do you mind creating a proposal PR with this issue's description? I'd like to comment on some details of the problem you've stated and I think a PR would be a better place to start threads to discuss the open questions as well.
Will do! |
Problem
Tracking and setting a cap for accounts data size—on transactions, blocks, and in total—is tricky.
Failing the transaction or the block
There are benefits to having different caps on allocated accounts data size per-transaction, per-block, and in-total. But what happens in each case when the cap is exceeded? If the per-transaction limit is exceeded, just that transaction can be failed, which is good.
The per-block and in-total checks are different. There's not a single transaction within the block that can be marked as the one transaction that caused the limit to be exceeded, as multiple transactions could exceed the limit, and it would depend on how each validator executed the transactions within the block. The solution then is to fail the whole block (see PR #26744 for this implementation). When a block is failed, the downsides are (1) fees are not collected, and (2) votes do not land. This would allow malicious actors to cause blocks to fail without paying fees[^1]. This is bad. Failing the block for exceeding the in-total limit has the same problem.
[^1] The malicious actors would still need sufficient SOL to create the transactions and get the leader to put the transactions into a block, but if the block later fails, those fees would not be deducted.
TPU vs TVU
Validation of the accounts data size limits obviously occurs in the TVU. The TPU should also have facilities so that it does not inadvertently create bad blocks that later are dropped, which then causes the leader to not receive any rewards for creating the block.
The TPU does not load programs/accounts, so transactions with CPI instructions that cause allocations are invisible to the TPU. Currently, the TPU inspects allocations only if the instructions are `SystemInstruction`s, as part of its cost model. For users submitting transactions with large allocations, this incentivizes them to create new programs that just wrap the System Program to circumvent the TPU's cost model. Without the TPU loading programs/accounts, how can it get sufficient insight into all the instructions in all the transactions to ensure it does not create a bad block that exceeds an accounts data size limit?
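The blind spot can be shown with a toy model of static inspection. These are simplified stand-in types, not the actual `solana-sdk` instruction format: the point is only that a cost model which pattern-matches on top-level instructions sees nothing behind a CPI.

```rust
/// Simplified stand-in for a transaction instruction (not the real SDK type).
enum Ix {
    /// A top-level system-program allocation, visible to the cost model.
    SystemAllocate { bytes: u64 },
    /// Any other program. It may CPI into the system program and allocate,
    /// but a static cost model cannot know that without executing it.
    Opaque,
}

/// What static inspection can see: only top-level system allocations.
/// A wrapper program that allocates via CPI contributes zero here.
fn visible_allocation(tx: &[Ix]) -> u64 {
    tx.iter()
        .map(|ix| match ix {
            Ix::SystemAllocate { bytes } => *bytes,
            Ix::Opaque => 0,
        })
        .sum()
}
```

A transaction whose only instruction is `Ix::Opaque` reports zero visible allocation even if the wrapped program allocates megabytes, which is exactly the circumvention incentive described above.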
Transaction inclusion fairness
Even if the TPU could accurately track account data allocations for all transactions, a per-block cap could incentivize malicious actors to create low-cost transactions that use up the majority of the available accounts data allocation space per block, causing other legitimate transactions-that-allocate to not be included in the block.
Proposed Solution
Auction the block's accounts data allocation space similar to how compute is auctioned off. Transactions pay to allocate account data. Either a low limit is set by default (maybe 1K), which enables small allocations, or no allocations are allowed by default. Larger allocations must be paid for up-front, otherwise the transaction fails.
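One way this could look, mirroring how `ComputeBudgetInstruction` works today: a transaction declares its allocation budget up front, is charged for the full *requested* budget (so requested-but-unused bytes still cost something, per the edit in the issue description), and fails if it allocates more than it requested. All names and constants below are hypothetical assumptions, not a real Solana API.

```rust
/// Hypothetical instruction, modeled on ComputeBudgetInstruction.
pub enum AllocationBudgetInstruction {
    /// Request permission to allocate up to this many bytes of account data.
    SetAccountDataAllocationLimit(u64),
}

/// Default budget when no instruction is present (the "maybe 1K" above).
pub const DEFAULT_ALLOCATION_LIMIT: u64 = 1024;

/// Illustrative per-byte allocation fee, in lamports (made-up number).
pub const FEE_PER_BYTE: u64 = 10;

/// The budget a transaction declared, or the default if it declared none.
pub fn requested_allocation(ixs: &[AllocationBudgetInstruction]) -> u64 {
    ixs.iter()
        .map(|ix| match ix {
            AllocationBudgetInstruction::SetAccountDataAllocationLimit(n) => *n,
        })
        .max()
        .unwrap_or(DEFAULT_ALLOCATION_LIMIT)
}

/// Charge for the *requested* budget so unused requests still cost something;
/// fail the transaction if it actually allocated more than it requested.
pub fn settle_allocation_fee(
    requested: u64,
    actually_allocated: u64,
) -> Result<u64, &'static str> {
    if actually_allocated > requested {
        Err("transaction exceeded its declared allocation budget")
    } else {
        Ok(requested * FEE_PER_BYTE)
    }
}
```

Charging for the requested budget rather than the actual allocation is what makes over-requesting costly, so block producers can trust the declared budgets when packing.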
Open Questions
Pricing
How do we price accounts data allocations? Should it be a fixed price, or be open (on one side or both) to facilitate fee markets? I lean towards a minimum amount per byte, but no maximum. Clients would submit their transaction's allocation budget and fee, and the block producer can use that information when packing blocks.
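For the "block producer can use that information when packing" part, one plausible heuristic is to rank candidate transactions by fee offered per requested byte, highest first. The types and the ranking rule here are assumptions for illustration; real packing would also weigh compute, account locks, and more.

```rust
use std::cmp::Ordering;

/// Hypothetical candidate transaction as seen by the packer: the allocation
/// budget it requested, and the fee it offered for that budget.
struct Candidate {
    requested_bytes: u64,
    allocation_fee: u64, // lamports
}

/// Order candidates by fee-per-byte, highest first. Cross-multiplication
/// compares the two rates (fee_a/bytes_a vs fee_b/bytes_b) without division,
/// avoiding integer truncation.
fn pack_order(mut txs: Vec<Candidate>) -> Vec<Candidate> {
    txs.sort_by(|a, b| {
        let a_rate = a.allocation_fee * b.requested_bytes;
        let b_rate = b.allocation_fee * a.requested_bytes;
        // Descending: larger rate sorts first.
        match b_rate.cmp(&a_rate) {
            Ordering::Equal => Ordering::Equal,
            other => other,
        }
    });
    txs
}
```

Under this rule, a small allocation paying a high fee outbids a large allocation paying a low per-byte rate, which is the fairness property the previous section asks for.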
Should the fee scale based on the current total accounts data size? IOW, should the per-byte fee increase as the total accounts data size increases?
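The scaling question above admits many curves; as one illustrative assumption, the per-byte fee could stay at a base price until total accounts data passes a target size, then grow linearly with the overage. The actual curve shape is exactly the open question being asked.

```rust
/// Sketch of a size-scaled fee: flat at `base_fee` up to `target_bytes` of
/// total accounts data, then each 10% of overage past the target adds one
/// more multiple of the base. The 10% step is an arbitrary assumption.
fn scaled_fee_per_byte(base_fee: u64, total_bytes: u64, target_bytes: u64) -> u64 {
    if total_bytes <= target_bytes {
        return base_fee;
    }
    let step = (target_bytes / 10).max(1);
    let overage_steps = (total_bytes - target_bytes) / step;
    base_fee.saturating_mul(1 + overage_steps)
}
```

A curve like this is also what might let hard limits be dropped entirely (see "Limits" below): as total size grows, allocation becomes progressively more expensive rather than abruptly forbidden.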
Limits
Even with this in place, should there still be limits? Either per-transaction, per-block, or in-total? My thought is to avoid hardcoded limits where possible. Maybe with the scaling approach mentioned above, all limits could be removed.
Rent
How does this interact with rent? Can rent be removed entirely? Since rent-paying accounts can no longer be created, validators do not end up earning any lamports for rent on new accounts[^2]. Adding this new allocation fee would be economically advantageous to validators.
[^2] Caveat: Pre-existing rent-paying accounts are grandfathered in, so some rent collection fees are still collected by validators. Eventually all these accounts will either be topped up to be rent-exempt, or drain to zero and go away. At that point, validators will cease collecting any rent payments.