
Update benchmarks for storage related opcode #1239

Closed
xgreenx opened this issue Jul 4, 2023 · 6 comments · Fixed by #1408

@xgreenx (Collaborator) commented Jul 4, 2023

The source: #1230 (review)

Based on the benchmarks from #1230, it is clear that the SMT and a database with many entries slow down storage interaction significantly.

We need to update our benchmarks for the following opcodes: SCWQ, SWW, SWWQ, TR, TRO, CALL, SMO, BURN, MINT.
These opcodes modify (insert or remove) state/balances and update the SMT. The gas cost should cover the worst-case scenario, where the contract has 100_000_000 entries in storage.

Opcodes like CALL, TR, and TRO update the SMT only in the contract context, so we could optimize the gas charging. But that would make our gas cost model more complex, so it is okay for now to charge more than we need. We can always optimize it later.

@Voxelot (Member) commented Jul 4, 2023

> Opcodes like CALL, TR, and TRO update the SMT only in the contract context, so we could optimize the gas charging. But that would make our gas cost model more complex, so it is okay for now to charge more than we need. We can always optimize it later.

I think only TRO (in the external context) would be optimized here, since CALL and TR must always interact with contract balances.

@xgreenx (Collaborator, Author) commented Jul 4, 2023

Yeah, but in some cases we interact with the storage only once, and in others twice =) We could optimize it to charge per interaction during execution instead of one time at the beginning.

@Dentosal (Member) commented Oct 5, 2023

> The gas cost should cover the worst-case scenario, where the contract has 100_000_000 entries in storage.

@xgreenx Any ideas how to get a db with 100_000_000 contract entries quickly? A quick benchmark implies it would take about six hours to fill the database, and that assumes there is no slowdown as the number of entries grows, which is unlikely. Currently I'm just doing something like this:

```rust
// Naive approach: insert 100M dummy slots one at a time through the VM storage
// interface, with the slot index in the first 8 bytes of the key and value == key.
fn fill_contract_storage(db: &mut VmDatabase, contract: &ContractId) -> io::Result<()> {
    for k in 0u64..100_000_000 {
        let mut key = Bytes32::zeroed();
        key.as_mut()[..8].copy_from_slice(&k.to_be_bytes());
        db.merkle_contract_state_insert(contract, &key, &key)?;
    }
    Ok(())
}
```

@xgreenx (Collaborator, Author) commented Oct 5, 2023

Yeah, that is definitely the slowest way of doing it. You can find the Database::init_contract_state function, which should do it much faster. For balances you can use Database::init_contract_balances.
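
A rough sketch of what that bulk call could look like (the exact signature of Database::init_contract_state may differ; the iterator-of-slots shape and error type here are my assumption):

```rust
// Hedged sketch: assumes Database::init_contract_state takes the contract id plus
// an iterator of (key, value) slots and writes them in one bulk operation; check
// the real signature and error type in fuel-core before copying this.
fn fill_contract_storage_bulk(
    db: &mut Database,
    contract: &ContractId,
    entries: u64,
) -> anyhow::Result<()> {
    let slots = (0..entries).map(|k| {
        let mut key = Bytes32::zeroed();
        key.as_mut()[..8].copy_from_slice(&k.to_be_bytes());
        (key, key) // value == key, matching the per-slot snippet above
    });
    // One bulk call instead of millions of individual inserts through the VM interface.
    db.init_contract_state(contract, slots)?;
    Ok(())
}
```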

If it still takes too much time to set up, you can decrease this value to 10M or 1M. We can multiply the final value by a constant.

But we need to set up this contract one time and use it in all benchmarks. To remove the impact of the RocksDB cache, we can use a random storage key on each iteration, derived from a pre-defined seed.
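
A minimal sketch of that idea, assuming storage was pre-filled with keys whose first 8 bytes are the big-endian slot index (as in the snippet above); the `rand` usage and helper name are my own illustration, not code from the repo:

```rust
use rand::{rngs::StdRng, Rng, SeedableRng};

/// Pick a different, reproducible pre-filled key on every benchmark iteration,
/// so RocksDB's block cache cannot keep serving one hot entry.
fn next_storage_key(rng: &mut StdRng, total_entries: u64) -> [u8; 32] {
    let index = rng.gen_range(0..total_entries);
    let mut key = [0u8; 32];
    key[..8].copy_from_slice(&index.to_be_bytes());
    key
}

fn main() {
    // Fixed seed => the same key sequence on every run, keeping results comparable.
    let mut rng = StdRng::seed_from_u64(1234);
    let key = next_storage_key(&mut rng, 10_000_000);
    println!("{key:?}");
}
```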

@Dentosal (Member) commented Oct 6, 2023

Using init_contract_state, it takes 6s to insert 1M values and 92s for 10M values. With 100M values the process gets OOM-killed on macOS, as I only have 16GiB of RAM.

I'm not sure if we can simply use 10M values and multiply, since the slowdown might not be linear. In any case, I'm going to change the benchmarks to use 10M values first.

@xgreenx (Collaborator, Author) commented Oct 6, 2023

10M is already sufficient. We have logarithmic complexity, so it definitely doesn't multiply by 10 =) Maybe 2 or 3, but yeah, we can find it later.
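
As a rough back-of-the-envelope (my own illustration, not a measurement from the thread): if the per-access cost grows with log2 of the number of entries, then going from 10M to 100M entries changes it by roughly

```
log2(10^8) / log2(10^7) ≈ 26.6 / 23.3 ≈ 1.14
```

so a constant of 2-3 leaves plenty of headroom for non-logarithmic effects in RocksDB and the SMT.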

xgreenx added a commit that referenced this issue Oct 23, 2023
Closes #1239. Closes #1255.

---------

Co-authored-by: Green Baneling <[email protected]>
Co-authored-by: Brandon Vrooman <[email protected]>