Limit size of transactions included in ChunkStateWitness #11103
From what I read it seems that a single transaction can be as large as 4MB. Because of that I think that we should introduce a hard per-transaction limit, which could be set to 4MB. We can't make the limit much smaller than that, because there are already contracts which are larger than 2MB, so the limit would have to be at least 2MB to allow deploying such contracts. 4MB is in line with the previous limitation.
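For illustration, here is a minimal sketch of what such a hard per-transaction limit could look like. The constant, error type, and function names are hypothetical, and the size is measured on the serialized transaction bytes (which is what ends up inside the witness); this is not nearcore's actual implementation.

```rust
// Sketch of a hard per-transaction size limit (names are illustrative only).

const MAX_TRANSACTION_SIZE: u64 = 4 * 1024 * 1024; // hypothetical 4MB limit

#[derive(Debug)]
enum TxValidationError {
    /// The transaction exceeds the per-transaction size limit.
    TransactionTooLarge { size: u64, limit: u64 },
}

/// Reject transactions whose serialized size exceeds the hard limit.
fn check_transaction_size(serialized_tx: &[u8]) -> Result<(), TxValidationError> {
    let size = serialized_tx.len() as u64;
    if size > MAX_TRANSACTION_SIZE {
        return Err(TxValidationError::TransactionTooLarge { size, limit: MAX_TRANSACTION_SIZE });
    }
    Ok(())
}
```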
It's a bit sad that `ChunkStateWitness` contains transactions from both the previous and the current chunk:

```rust
pub struct ChunkStateWitness {
    /// The transactions to apply. These must be in the correct order in which
    /// they are to be applied.
    pub transactions: Vec<SignedTransaction>,
    /// Finally, we need to be able to verify that the new transitions proposed
    /// by the chunk (that this witness is for) are valid. For that, we need
    /// the transactions as well as another partial storage (based on the
    /// pre-state-root of this chunk) in order to verify that the sender
    /// accounts have appropriate balances, access keys, nonces, etc.
    pub new_transactions: Vec<SignedTransaction>,
    pub new_transactions_validation_state: PartialState,
}
```

If there was one 4MB transaction in the previous chunk and one 4MB transaction in the current one, that would make the witness 8MB, and that's before the storage proof, which could also be as large as 7MB :/ AFAIR this was done because it was easier to implement transaction processing this way. We could consider modifying this so that every transaction is included only once; that would give us more breathing room.
Add metrics to measure the size of:
* `ChunkStateWitness::main_state_transition`
* `ChunkStateWitness::new_transactions`
* `ChunkStateWitness::new_transactions_validation_state`
* `ChunkStateWitness::source_receipt_proofs`

The new metrics look like this in Grafana:
![image](https://github.com/near/nearcore/assets/149345204/730d7121-d41b-4e85-ac97-e3da6235c27c)

They can be viewed here: https://nearone.grafana.net/d/edbl9ztm5h1q8b/stateless-validation?orgId=1&from=1714476360000&to=1714476660000&var-chain_id=mainnet&var-shard_id=All&var-node_id=jan-mainnet-node

These metrics will help to estimate the proper limits for the various fields of `ChunkStateWitness`.

Refs: #11103
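A rough sketch of how such per-field size metrics could be recorded. nearcore has its own metrics helpers built on top of Prometheus; this sketch uses the `prometheus` crate directly, and the metric name, labels, and buckets are illustrative assumptions rather than the ones added in the PR.

```rust
use prometheus::{HistogramOpts, HistogramVec, Registry};

/// Register a histogram that tracks the serialized size of individual
/// ChunkStateWitness fields, labelled by field name and shard id.
fn register_witness_size_metrics(registry: &Registry) -> prometheus::Result<HistogramVec> {
    let histogram = HistogramVec::new(
        HistogramOpts::new(
            "chunk_state_witness_field_size_bytes",
            "Serialized size of individual ChunkStateWitness fields",
        )
        // Buckets from 10kB up to ~16MB, roughly logarithmic.
        .buckets(vec![1e4, 1e5, 5e5, 1e6, 2e6, 4e6, 8e6, 16e6]),
        &["field", "shard_id"],
    )?;
    registry.register(Box::new(histogram.clone()))?;
    Ok(histogram)
}

/// Record the serialized size of one field for one shard.
fn record_field_size(histogram: &HistogramVec, field: &str, shard_id: u64, serialized_len: usize) {
    let shard = shard_id.to_string();
    histogram
        .with_label_values(&[field, shard.as_str()])
        .observe(serialized_len as f64);
}
```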
To deal with transaction doubling I'd propose to enforce a limit on the sum of all transaction fields, i.e. the combined serialized size of `transactions` and `new_transactions` would have to stay under a single shared limit.
One worry is that the previous chunk producer could prevent the next one from adding transactions by taking up the whole allocated 5MB, leaving zero space for `new_transactions`. The "proper" way to deal with this would be to include each transaction only once, but that'd probably require heavy refactoring.
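For concreteness, a minimal sketch of the shared-budget idea, assuming the limit applies to the combined serialized size of the previous chunk's transactions and the newly selected ones. The constants and names mirror the discussion above, not any actual nearcore code.

```rust
// Illustrative sketch of the proposed combined limit: the serialized size of
// the previous chunk's transactions plus the newly added ones must stay under
// one shared budget.

const COMBINED_TRANSACTIONS_LIMIT: usize = 5 * 1024 * 1024; // e.g. 5MB shared budget

/// Greedily pick new transactions while respecting the shared budget that is
/// already partially used up by the previous chunk's transactions.
fn select_new_transactions(
    prev_chunk_txs_size: usize,
    candidate_txs: Vec<Vec<u8>>, // serialized candidate transactions, in order
) -> Vec<Vec<u8>> {
    let mut remaining = COMBINED_TRANSACTIONS_LIMIT.saturating_sub(prev_chunk_txs_size);
    let mut selected = Vec::new();
    for tx in candidate_txs {
        if tx.len() > remaining {
            // This is exactly the worry from the comment above: if the previous
            // chunk producer used up (almost) the whole budget, `remaining` can
            // be zero and no new transactions fit.
            break;
        }
        remaining -= tx.len();
        selected.push(tx);
    }
    selected
}
```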
I collected some metrics to see how big all these fields of `ChunkStateWitness` are in practice:
This means that:
I'm a bit surprised that the storage proof is this big; afaik validating a transaction requires only reading the account and access key, which are rather small o-0. I guess things add up.
Adding a simple If it turns out that
Idea from @Longarithm: We could apply the receipts first, and then include as many transactions as will fit under the limit. The big transactions would be included in chunks that don't have a lot of receipts; we get those every now and then.
Idea from @bowenwang1996: We could lower the size limit for a single transaction and disallow contracts larger than e.g. 1MB.
From my discussion with @jancionear: There is a check on the max args length of a function call. Another concern: we could get unlucky and receive large txs/receipts from all shard ids simultaneously, so we'd have to multiply the worst case by 6, given the current number of shards.
Collected some data: in the current state there are ~170 accounts with contracts larger than 1MB, and the largest ones reach 4MB. To support those contracts we'd have to support 4MB transactions. Applying compression helps: only 6 accounts have contracts with a compressed size larger than 1MB.
The only problem is that it'd make

I also scanned the last year of blockchain history to find the largest transactions: top-transactions-1year.txt
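A rough sketch of how the compressed-size measurement above could be reproduced. The comment doesn't say which algorithm was used, so zstd (via the `zstd` crate) and the compression level here are assumptions, and the 1MB threshold just mirrors the numbers in the text.

```rust
use std::io;

/// Check whether a contract's code would still exceed a 1MB limit after
/// compression. zstd is used purely as an example algorithm.
fn contract_exceeds_compressed_limit(contract_code: &[u8]) -> io::Result<bool> {
    const LIMIT: usize = 1024 * 1024; // 1MB
    let compressed = zstd::encode_all(contract_code, /* compression level */ 3)?;
    Ok(compressed.len() > LIMIT)
}
```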
Zulip discussion about lowering the size limit: https://near.zulipchat.com/#narrow/stream/295306-contract-runtime/topic/Lowering.20the.20limit.20for.20contract.20code.20size
The size limit for a single transaction used to be 4MiB; this PR reduces it to 1.5MiB. Transactions larger than 1.5MiB will be rejected. This is done to help with near#11103. It's hard to limit the size of `ChunkStateWitness` when a single transaction can be as large as 4MiB. Having 1.5MiB transactions is much more manageable. This will break some transactions. The current mainnet traffic contains a transaction larger than 1.5MiB approximately every 2 days (~150 txs per year). We've decided that it's okay to break those transactions. The new limit is introduced in protocol version `68`. This protocol version could be released before the full stateless validation launch, to see if anyone complains. Note that this change doesn't limit the size of receipts, only transactions. It's still possible to create a receipt with a 4MiB contract or function call, but that's out of scope for the transaction size limit. Zulip discussion about lowering the limit: https://near.zulipchat.com/#narrow/stream/295306-contract-runtime/topic/.E2.9C.94.20Lowering.20the.20limit.20for.20contract.20code.20size
The size limit for a single transaction used to be 4MiB; this PR reduces it to 1.5MiB. Transactions larger than 1.5MiB will be rejected. This is done to help with near#11103. It's hard to limit the size of `ChunkStateWitness` when a single transaction can be as large as 4MiB. Having 1.5MiB transactions makes things much more manageable. This will break some transactions. On current mainnet there is approximately one transaction larger than 1.5MiB per day (~420 large txs per year). We've decided that it's okay to break those transactions. The new limit is introduced in protocol version `68`. This protocol version could be released before the full stateless validation launch, to see if anyone complains. Note that this change doesn't limit the size of receipts, only transactions. It's still possible to create a receipt with a 4MiB contract or function call, but that's out of scope for the transaction size limit. Zulip discussion about lowering the limit: https://near.zulipchat.com/#narrow/stream/295306-contract-runtime/topic/.E2.9C.94.20Lowering.20the.20limit.20for.20contract.20code.20size
…1406) Fixes: #11103

This PR adds 3 new limits to control the total size of transactions included in `ChunkStateWitness`:
1) Reduce `max_transaction_size` from 4MiB to 1.5MiB. Transactions larger than 1.5MiB will be rejected.
2) Limit the size of transactions included in the last two chunks to 2MiB. `ChunkStateWitness` contains transactions from both the current and the previous chunk, so we have to limit the sum of transactions from both of those chunks.
3) Limit the size of the storage proof generated during transaction validation to 500kiB (soft limit).

In total that limits the size of the transaction-related fields to 2.5MiB.

About 1): Having 4MiB transactions is troublesome, because it means that we have to allow at least 4MiB of transactions to be included in `ChunkStateWitness`. Having so much space taken up by the transactions could cause problems with witness size. See #11379 for more information.

About 2): `ChunkStateWitness` contains both transactions from the previous chunk (`transactions`) and the new chunk (`new_transactions`). This is annoying because it halves the space that we can reserve for transactions. To make sure that the size stays reasonable, we limit the sum of both those fields to 2MiB. On current mainnet traffic the sum of these fields stays under 400kiB, so 2MiB should be more than enough. This limit has to be slightly higher than the limit for a single transaction, so we can't make it 1MiB; it has to be at least 1.5MiB.

About 3): On mainnet traffic the size of the transactions storage proof is under 500kiB on most chunks, so adding this limit shouldn't affect throughput. I assume that every transaction generates a limited amount of storage proof during validation, so we can have a soft limit for the total size of the storage proof. Implementing a hard limit would be difficult because it's hard to predict how much storage proof will be generated by validating a transaction.

Transactions are validated by running `prepare_transactions` on the validator, so there's no need for separate validation code.
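To tie the three limits together, here is an illustrative loop that combines them during transaction selection. The constants come from the PR description above, but the structure, names, and storage-proof accounting are simplified assumptions rather than nearcore's actual `prepare_transactions` implementation.

```rust
const MAX_TRANSACTION_SIZE: usize = 1536 * 1024;             // 1.5MiB per transaction
const TRANSACTIONS_TOTAL_SIZE_LIMIT: usize = 2 * 1024 * 1024; // 2MiB for the last two chunks
const STORAGE_PROOF_SOFT_LIMIT: usize = 500 * 1024;          // 500kiB soft limit

struct CandidateTx {
    serialized_size: usize,
    /// Size of the storage proof recorded while validating this transaction
    /// (known only after the validation actually ran).
    recorded_storage_proof: usize,
}

/// Select transactions for the new chunk while respecting all three limits.
fn prepare_transactions_sketch(
    prev_chunk_txs_size: usize,
    candidates: Vec<CandidateTx>,
) -> Vec<CandidateTx> {
    let mut included = Vec::new();
    let mut txs_size = prev_chunk_txs_size;
    let mut storage_proof_size = 0;
    for tx in candidates {
        // Limit 1: hard per-transaction size limit (oversized transactions are
        // also rejected earlier, when they enter the pool).
        if tx.serialized_size > MAX_TRANSACTION_SIZE {
            continue;
        }
        // Limit 2: hard limit on transactions from the last two chunks combined.
        if txs_size + tx.serialized_size > TRANSACTIONS_TOTAL_SIZE_LIMIT {
            break;
        }
        // Limit 3: soft limit on the validation storage proof. It is only
        // checked between transactions, so the last included transaction may
        // overshoot it slightly.
        if storage_proof_size >= STORAGE_PROOF_SOFT_LIMIT {
            break;
        }
        txs_size += tx.serialized_size;
        storage_proof_size += tx.recorded_storage_proof;
        included.push(tx);
    }
    included
}
```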
`ChunkStateWitness` contains transactions which were included in the last two chunks (the `transactions` and `new_transactions` fields). We must ensure that the size of these fields isn't too big. A malicious chunk producer could potentially try to construct a `ChunkStateWitness` with so many transactions that the size of the witness would explode. Large witnesses are known to cause problems, so we must implement some sort of limit.

Refs: #10259 (comment)
It's a security issue, so I'd say that it's necessary for stateless validation MVP.