Introduce block authorship soft deadline #9663
Conversation
What's the smallest size a tx can be? Maybe we should be checking if block_size + MIN_POSSIBLE_TX_SIZE > block_size_limit, because there's always going to be a little bit left over, but there may be zero chance that any tx can realistically fit in there.
That's a very good point, but it's up to the runtime to decide and we currently have no good way to ask the runtime about that. Also note that any additional heuristic we add should be added for both the size limit and the "resource/weight" limit, to make sure we cater for all kinds of runtimes.
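The check being discussed might be sketched as follows. Note that MIN_POSSIBLE_TX_SIZE is a hypothetical constant for illustration only; as the reply above says, no such value is currently exposed by the runtime:

```rust
// Hypothetical minimum encoded size of any transaction. This is NOT a real
// Substrate constant; the runtime would have to expose such a value.
const MIN_POSSIBLE_TX_SIZE: usize = 64;

/// Returns true when the block is so full that not even the smallest
/// conceivable transaction could still fit into the remaining space.
fn effectively_full(block_size: usize, block_size_limit: usize) -> bool {
    block_size + MIN_POSSIBLE_TX_SIZE > block_size_limit
}

fn main() {
    // 10 bytes of headroom left, but the smallest tx would need 64.
    assert!(effectively_full(4_086, 4_096));
    // Plenty of room: a minimal tx could still fit.
    assert!(!effectively_full(1_000, 4_096));
    println!("ok");
}
```

A runtime-aware version would also need an analogous check against the weight limit, per the reply above.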
@bkchr can you take a look if the soft deadline is okay, or if you'd rather prefer to add transactions up to the hard deadline?
What about the failed test?
what's below
);

// when
let deadline = time::Duration::from_secs(300);
As this number is hardcoded:
- should it be explained in a few words why 300, and when a user may want to tweak it?
- or maybe make a const out of it?
This is a test.
Still, I don't know why this value was chosen for the test. To me tests are documentation, and I don't know how to read it -> is it random? Or semi-random? etc. Overall, yes, it's not blocking approval, just something that is imho nice to polish.
Seeing tests as docs is really a bad idea.
/// Create a proposal.
The deadline is that high so that it does not get triggered.
Seeing tests as docs is really a bad idea.
I see we come from different backgrounds and experiences ;).
Otherwise, thank you for pointers and links.
Again, I agree with you that this is just a detail, so whatever you decide is fine with me (it may even be ignored), and as you can see, my approval for the PR was already given regardless :).
I see we come from different background and experience ;).
I mean, I would not say that I do not read tests from time to time, but this normally means that the docs are shit and I need some examples on how to use it :D
:). I consider that good tests have a lot of value. They can document expectations in a non-ambiguous way (contrary to natural human language) and keep assumptions in check. E.g. setting time limits without too much slack can help you detect if someone accidentally introduces a performance regression that may not be caught by benchmarking, or can make diagnosis easier... maybe not in this particular piece of code, but in general.
Ideally, tests may cover expected and edge cases, so I can read from them what the edge cases may look like, but yeah, it's not always worth the effort.
Regarding the code review process, I consider asking questions about tests a practice that motivates rethinking hidden assumptions and asking whether we can do better. E.g. here, maybe 300 is 10x too much and we can go 10x lower? Maybe that would help catch other issues? Or maybe it's not worth it and we keep 300 and move on. :)
TL;DR: to me, documentation and tests have the potential to (but don't always have to) document different things, from different angles :).
@@ -386,6 +390,13 @@ where
MAX_SKIPPED_TRANSACTIONS - skipped,
);
continue
} else if (self.now)() < soft_deadline {
Suggested change:
- } else if (self.now)() < soft_deadline {
+ } else if now < soft_deadline {
I wonder, isn't the original (self.now)() actually intentional, to get a more recent reading at that point in time?
Otherwise, to me both are good enough to not block approval.
It's a good suggestion; there is nothing intensive happening between these calls, and it's worth saving this one extra syscall. That was my intention when I initially introduced the now variable as well, but somehow it got lost in all the refactorings.
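The pattern under discussion can be sketched like this (the function and field names here are illustrative, not the proposer's actual API): read the clock once, then reuse that single reading for every deadline comparison rather than issuing a second syscall through another now() call.

```rust
use std::time::{Duration, Instant};

// Classify where a single clock reading falls relative to the two deadlines.
// Reusing `now` for both comparisons avoids a redundant clock read.
fn phase(now: Instant, soft_deadline: Instant, hard_deadline: Instant) -> &'static str {
    if now >= hard_deadline {
        "stop"            // hard deadline reached: seal the block immediately
    } else if now < soft_deadline {
        "keep-trying"     // before the soft deadline: keep trying transactions
    } else {
        "bounded-skipping" // between deadlines: fall back to the bounded heuristic
    }
}

fn main() {
    let start = Instant::now();
    let soft = start + Duration::from_secs(1);
    let hard = start + Duration::from_secs(2);
    // One reading, reused for every comparison.
    assert_eq!(phase(start, soft, hard), "keep-trying");
    assert_eq!(phase(soft, soft, hard), "bounded-skipping");
    assert_eq!(phase(hard, soft, hard), "stop");
    println!("ok");
}
```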
This is a flaky test
bot merge
Trying merge.
* master: (125 commits)
  Update multiple dependencies (#9936)
  Speed up timestamp generation when logging (#9933)
  First word should be Substrate not Polkadot (#9935)
  Improved file not found error message (#9931)
  don't read events in elections anymore. (#9898)
  Remove incorrect sanity check (#9924)
  Require crypto scheme for `insert-key` (#9909)
  chore: refresh of the substrate_builder image (#9808)
  Introduce block authorship soft deadline (#9663)
  Rework Transaction Priority calculation (#9834)
  Do not propagate host RUSTFLAGS when checking for WASM toolchain (#9926)
  Small quoting comment fix (#9927)
  add clippy to CI (#9694)
  Ensure BeforeBestBlockBy voting rule accounts for base (#9920)
  rm `.maintain` lock (#9919)
  Downstream `node-template` pull (#9915)
  Implement core::fmt::Debug for BoundedVec (#9914)
  Quickly skip invalid transactions during block authorship. (#9789)
  Add SS58 prefix for Automata (#9805)
  Clean up sc-peerset (#9806)
  ...
@tomusdrw it looks great. Would it be possible to have either the
@crystalin why do you often have to skip transactions?
@bkchr In Ethereum mainnet the logic is to try executing the transaction even if the current block gas + the transaction's gas_limit is over the block maximum gas. We observed however that when someone spams a few transactions with a very high gas_limit, it can prevent the block from containing more than 8 transactions. This PR improves the situation by making sure we use at least 1/2 of the block production time, no matter how many transactions are skipped.
@crystalin if you prepare a PR, we can merge it.
@bkchr What would be the preferred way? Allow controlling MAX_SKIPPED_TRANSACTIONS, or the soft_deadline (or both)?
Both? IDK. What do you need?
I find it a bit surprising that the current hardcoded soft deadline is insufficient, though. If we assume a block time of 6s and a hard deadline of 3s, having around 1.5s (the soft deadline) to produce a block should be more than enough - if block production/import takes more than this, I feel that maybe the client is not running on powerful enough hardware, or the maximal block weight is quite high.
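The arithmetic in the comment above can be made concrete (the numbers are illustrative values from that comment, not constants in the code):

```rust
use std::time::Duration;

fn main() {
    // Assumed block time from the comment above.
    let block_time = Duration::from_secs(6);
    // Hard deadline: roughly half the slot in this example.
    let hard_deadline = block_time / 2; // 3s
    // Soft deadline: half of the hard deadline, per the PR.
    let soft_deadline = hard_deadline / 2; // 1.5s
    assert_eq!(hard_deadline, Duration::from_secs(3));
    assert_eq!(soft_deadline, Duration::from_millis(1500));
    println!("soft deadline: {:?}", soft_deadline);
}
```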
@tomusdrw the issue is not with the blockchain but with the parachain:
I see, making
Currently, when packing transactions into a block, we have a few conditions under which we decide the block is ready to be signed by the consensus engine and gossiped to the network:
The first 3 I'd consider "regular" conditions; the last one is rather a safety valve, ensuring that the block producer does not miss its slot due to some irregular slowness.
However, since optimal block packing is a difficult problem (i.e. producing a block with optimal utilisation, while maximising the cumulative priority of all transactions in that block), we currently use a very simple greedy heuristic, with a twist: whenever we run into the first transaction that reports resource exhaustion, we attempt to insert at least MAX_SKIPPED_TRANSACTIONS more into the block before concluding it's actually full.
This heuristic is obviously sub-optimal and can be gamed, hence the PR introduces another variant of the heuristic to potentially increase block utilisation, but without reaching the hard deadline defined by the consensus engine.
The PR introduces a "soft deadline" (half of the hard deadline time). Before the soft deadline is reached, we can try as many transactions as desired from the transaction pool, even if they report resource exhaustion. After the soft deadline, we switch to the previous heuristic of trying at most MAX_SKIPPED_TRANSACTIONS.
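The two-phase heuristic described above can be sketched roughly as follows. This is a simplification for illustration, not the actual proposer code: real block authorship works on pool transactions and runtime execution results, whereas here each "transaction" is just a flag saying whether it fits.

```rust
use std::time::{Duration, Instant};

const MAX_SKIPPED_TRANSACTIONS: usize = 8;

// Sketch of the two-phase skipping heuristic (illustrative, simplified).
fn author_block(
    txs: &[bool],              // true = fits, false = reports exhaustion
    now: impl Fn() -> Instant, // injected clock, like `self.now` in the PR
    soft_deadline: Instant,
) -> Vec<usize> {
    let mut included = Vec::new();
    let mut skipped = 0;
    for (i, fits) in txs.iter().enumerate() {
        if *fits {
            included.push(i);
        } else if skipped < MAX_SKIPPED_TRANSACTIONS {
            // Bounded fallback: tolerate a limited number of skips.
            skipped += 1;
        } else if now() < soft_deadline {
            // Before the soft deadline: reset the budget and keep trying
            // instead of declaring the block full.
            skipped = 0;
        } else {
            // Past the soft deadline with the skip budget exhausted.
            break;
        }
    }
    included
}

fn main() {
    let t0 = Instant::now();
    let far = t0 + Duration::from_secs(3600);
    // 20 exhausted transactions followed by one that fits.
    let mut txs = vec![false; 20];
    txs.push(true);
    // Generous soft deadline: the fitting tx at index 20 is still reached.
    assert_eq!(author_block(&txs, Instant::now, far), vec![20]);
    // Soft deadline already passed: we give up after the skip budget.
    assert!(author_block(&txs, Instant::now, t0).is_empty());
    println!("ok");
}
```

The key behavioural difference from the old heuristic is the middle branch: exhausted transactions stop counting against the skip budget until the soft deadline passes.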