config/mempool: define default parameters #590
Questions
ADR-067 introduced a priority mempool, but there was a memory issue that was addressed in tendermint/tendermint#8944.
These are the default values. Are these values causing issues @Bidon15?
```toml
ttl-duration = "0s"
ttl-num-blocks = 15
```
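For reference, a sketch of how these keys sit alongside the rest of the `[mempool]` section. The other values shown are upstream Tendermint v0.34-line defaults and may differ in celestia-app; treat this as illustrative, not authoritative:

```toml
[mempool]
version = "v0"              # "v1" selects the priority mempool in the v0.34.20 backport
size = 5000                 # max number of txs held in the mempool
max_txs_bytes = 1073741824  # 1 GiB cap on total mempool size
cache_size = 10000
max_tx_bytes = 1048576      # 1 MiB cap per tx
ttl-duration = "0s"         # "0s": txs never expire by age
ttl-num-blocks = 15         # txs are evicted after 15 blocks
```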
Links
- Parameters
It was in celestiaorg/cosmos-sdk@b57bc39. When mamaki was released, the default was not to sign a PFD with the correct square sizes, which could result in a PFD getting stuck in the mempool. If a tx gets stuck in the mempool, it's difficult for a user to submit another tx, as the CheckTx state will tell them that the nonce they're using is incorrect. IMO we can revert this back to the default.
I'm assuming Tendermint v0.37, as v0.36 was deprecated? I thought the memory leak was fixed here?
Sounds good, will do!
Agreed based on https://github.com/tendermint/tendermint/releases
🤦 I'm not sure how I missed that when scanning the v0.34.20 changelog. I was searching for the PR #8944 and somehow failed to see the bug fix for the priority mempool.
For testing purposes, to get 4 MB blocks you need four 1 MB txs. I wonder why we don't allow 4 MB from the get-go, considering that we have 1 GB total?
If I understand correctly, you're proposing increasing …
Relevant: celestiaorg/celestia-core#867
With respect to the priority mempool, we should monitor tendermint/tendermint#9388. We likely don't want to enable it considering it seems to be on a path to deprecation.
Yes
This is not a burning question, so 4 MB (or 8 MB) in one tx is more of a feature request from testing 👍
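In config terms, the request would look something like the following. `max_tx_bytes` is the v0.34-line key for the per-tx cap; the value here is purely hypothetical:

```toml
[mempool]
# hypothetical: allow a single 8 MiB tx (upstream default is 1 MiB)
max_tx_bytes = 8388608
```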
I'm a little stuck on gathering data for the mempool's … I think we would need to run a mock network with multiple validators peering with each other and see what the impact on validator performance is if we start spamming transactions that are 8 MiB. @Bidon15 are you able to help me set something like this up?
Yes, agreed: having more confidence certainly requires more information and analysis. Any value we pick is essentially chosen blind, as no one else using this mempool has txs this large AFAIK. We do not know the actual effect on bandwidth usage for individual nodes or the network as a whole. We should prioritize obtaining that info, and in the meantime increase the default limit if users require it. We should be able to get good starting data on this by monitoring the bandwidth used by each node after submitting transactions. I don't think we should automatically increase it to the entire square; probably just 2 MB or something until we have more confidence from further testing.
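One way to get that starting data: Tendermint's built-in Prometheus instrumentation exports per-peer byte counters (`tendermint_p2p_peer_receive_bytes_total` / `tendermint_p2p_peer_send_bytes_total` in the v0.34 line), which can be scraped while spamming large txs. A minimal sketch of the relevant config, assuming the standard `[instrumentation]` section:

```toml
[instrumentation]
prometheus = true                  # expose metrics for scraping
prometheus_listen_addr = ":26660"  # default metrics port
namespace = "tendermint"
```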
Sure. Let's discuss how we can make it happen, as the foundational work is mostly done in tg already, where we spam a total of ~4 MB blocks.
Update from the synchronous params discussion during the onsite: we want to use the prioritized mempool.
We should also set some default value for ttl-num-blocks as per celestiaorg/celestia-core#812.
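Taken together, the decisions so far would look roughly like this in `config.toml`. The `version` key follows the v0.34 priority-mempool backport; the `ttl-num-blocks` value below is a placeholder, not a decided default:

```toml
[mempool]
version = "v1"      # prioritized mempool, per the onsite decision
ttl-duration = "0s"
ttl-num-blocks = 5  # placeholder: non-zero so stuck txs eventually expire (#812)
```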
I updated the table above. |
Supersedes celestiaorg/celestia-core#885 Closes celestiaorg/celestia-core#812 Closes celestiaorg/celestia-core#867 Closes #590
Introduction ✌🏻
Here are the current params we have for the mempool part.
New Definition 📜
- version = "v0"
- max_txs_bytes
- ttl-*?
- figures?
Notes 📝
Ref: #585