Prioritize transactions in banking stage by their compute unit price #25178
Conversation
sdk/src/compute_budget.rs
Outdated
```diff
@@ -35,8 +35,8 @@ pub enum ComputeBudgetInstruction {
     /// Request a specific maximum number of compute units the transaction is
     /// allowed to consume and an additional fee to pay.
     RequestUnits(u32),
-    /// Additional fee in lamports to charge the payer, used for transaction
-    /// prioritization
+    /// Additional fee in "lamports per 10K CUs" to charge the payer, used for
```
I think we can set a higher fee rate unit here (1M instead of 10k) because it gets us more granularity. It means the max fee rate is still u64::MAX (~2*10^19) / (10^6 CU) / (10^9 lamports per SOL) = ~20,000 SOL / CU which should be more than enough I hope 😅
```diff
-    /// Additional fee in "lamports per 10K CUs" to charge the payer, used for
+    /// Set a prioritization fee rate in "lamports per 1M CUs" to charge the payer, used for
```
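As a quick sanity check on the granularity math above (a standalone sketch, not project code):

```rust
fn main() {
    // With a 1M-CU rate unit, the largest expressible rate is u64::MAX
    // "lamports per 1M CUs". Dividing by 1M CUs and by 1e9 lamports/SOL
    // gives the maximum price per single compute unit, in SOL.
    let max_rate = u64::MAX as f64; // ~1.8 * 10^19
    let max_sol_per_cu = max_rate / 1_000_000.0 / 1_000_000_000.0;
    println!("{:.0}", max_sol_per_cu); // prints 18447, i.e. ~2 * 10^4 SOL/CU
}
```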
What do you think we should do about the behavior of `RequestUnitsDeprecated::additional_fee` when the `prioritization_fee_type_change` feature is activated? In your current implementation it will start being treated as a rate instead of a fee, which I don't think we want.
This is a good question. My take is … Another option is to disallow …
I'm in favor of removing support when that feature is activated. @jackcmay what do you think?
Created issue #25201 to track this proposal separately.
Force-pushed from 1abd3e9 to 0084b87
@taozhu-chicago I'm not in favor of merging a PR that breaks another feature even if it's deprecated. I brought up the issue because I feel that we need to address it in this PR.
runtime/src/bank.rs
Outdated
```rust
    .unwrap_or_default();
let prioritization_fee = if prioritization_fee_type_change {
    prioritization_fee_rate.saturating_mul(PRIORITIZATION_FEE_RATE_UNITS)
```
Prioritization fee rate represents lamports per 10k CU so the prioritization fee should be equal to prioritization_fee_rate * compute_budget.max_units / PRIORITIZATION_FEE_RATE_UNITS
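In other words (a minimal sketch of the suggested formula; the constant's value and the names are assumptions taken from the snippet above):

```rust
/// Rate unit assumed from the discussion: the fee rate is expressed in
/// lamports per 10K compute units.
const PRIORITIZATION_FEE_RATE_UNITS: u64 = 10_000;

/// fee = rate * max_units / PRIORITIZATION_FEE_RATE_UNITS, per the review comment.
fn prioritization_fee(prioritization_fee_rate: u64, max_units: u64) -> u64 {
    prioritization_fee_rate
        .saturating_mul(max_units)
        .saturating_div(PRIORITIZATION_FEE_RATE_UNITS)
}
```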
We need to protect against someone setting a very high prioritization rate but requesting 0 compute units as well
Though that's a change outside the scope of this PR. To be on the same page, the change will be within …
In the interest of time, I went ahead and addressed my review feedback in this PR: tao-stones#5
```rust
match fee_type {
    PrioritizationFeeType::Deprecated(fee) => Self {
        fee,
        priority: fee.saturating_div(compute_ticks),
```
Is there a reason we don't round up for this `compute_ticks` like we do in the `PrioritizationFeeType::Rate` case?

In the deprecated case, should this just be a div by the `max_compute_units`?
> Is there a reason we don't round up for this compute_ticks like we do in the PrioritizationFeeType::Rate case?

No reason, we should be rounding up here. Nice catch.

> In the deprecated case, should this just be a div by the max_compute_units?

Dividing by compute ticks gives us more granularity.
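For reference, rounding up an integer division can be done with a saturating ceiling helper; this is a hypothetical sketch, not the PR's code:

```rust
// Ceiling division: a nonzero numerator never rounds down to zero,
// so a small fee still yields a nonzero priority.
// Returns 0 on a zero denominator instead of panicking.
fn ceil_div(numerator: u64, denominator: u64) -> u64 {
    if denominator == 0 {
        return 0;
    }
    numerator.saturating_add(denominator - 1) / denominator
}
```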
I fixed this in the latest patch, can you please take another close look at this logic?
There are a few remaining important things to decide: …
I just pushed a commit which changes the fee rate unit from "lamports / compute unit ticks" to "micro-lamports / compute unit"
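Under that unit, converting a compute unit price back into a lamport fee looks roughly like this (a hypothetical helper; rounding up here is an assumption, not necessarily the merged behavior):

```rust
// 1 lamport = 1,000,000 micro-lamports.
const MICRO_LAMPORTS_PER_LAMPORT: u64 = 1_000_000;

/// Total fee in lamports for a price quoted in micro-lamports per CU,
/// rounded up so any nonzero price produces a nonzero fee.
fn fee_lamports(compute_unit_price_micro_lamports: u64, compute_units: u64) -> u64 {
    let micro_lamport_fee =
        compute_unit_price_micro_lamports.saturating_mul(compute_units);
    micro_lamport_fee
        .saturating_add(MICRO_LAMPORTS_PER_LAMPORT - 1)
        / MICRO_LAMPORTS_PER_LAMPORT
}
```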
I'll work on a separate PR for this.
- update compute_budget::process_instruction function to take instruction iter to support sanitized versioned message
- updated runtime.md
…ports per 10K CUs
For 2, maybe we could have a refund up to some sane % of compute above the actual compute, so that they don't play the system by setting the compute unreasonably high, getting prioritized, then getting away with a full refund? RPCs/wallets could just default to recommending a price to the user which is the % above the simulated compute that matches the % of error allowed for the refund. I guess compute could drop in the meantime, but the effect should be minimal?
Good point. There are a number of edge cases here. Ideally you can't do anything interesting with zero compute, but right now many builtin programs don't charge compute units at all. Maybe the runtime should impose a min compute unit consumption for each tx of like 1000 compute units. I've updated #25231 with some TODO's to reflect these issues.
Agreed, we need a way for wallets to tell users what a reasonable prioritization fee is if the user wants to set one. I think that RPC nodes could provide a stream of the average compute unit price for txs in recently confirmed blocks. Then wallets can suggest prioritization parameters to the user and insert them into the transaction. A wallet integration guide and SDK support will need to be built for all of this.
The other thing I was thinking of, which you might have seen me mention elsewhere, is that I think the compute unit price might be better in multiples of the base compute unit price, or some other larger step size. If the compute unit pricing is too fine grained I think there will be lots of games happening where rpcs/bots will be trying to adjust the base amount with micro changes, so it might be even harder to predict and could just incur more spam. A 2x increase of base cost for even the largest compute transactions is still going to be very cheap from what I thought I had seen in the economics channel, so doesn't seem like a big deal? If people want transactions prioritized I assume they'll need to ramp up the cost at least 2x. The nice aspect of multiples is that it would be a lot easier for users too --- i.e. "I'm going to boost price 3x" (which will still be subcent atm for most txns) --- and might make the pricing easier to track/predict across blocks too.
@nikhayes sounds like you're suggesting we allow transactions to set a "fee multiplier" which can only be incremented in preset increments?
Yeah, basically all the user would ever need to do is set a whole number multiple of the compute unit price, and the RPC would give them some estimate for the multiplier needed and a safe estimate of the compute needed --> they would see the price in the wallet --> then when it gets executed they could get the rebate based on the multiplier * compute over-estimated, up to some % error limit. Not sure if this works though?
I like this a lot because it's very intuitive for users and the high step size is good for removing the incentive to spam. @aeyakovenko @taozhu-chicago @carllin thoughts?
When I was originally thinking about it I was even thinking that it could be exponential step sizes (i.e. compute unit price * 2^step)... probably too extreme though :) . Multiples might seem like big step sizes but should still be cheap, and I think step sizes are needed or else normal users will constantly need to use fee-priority (because bots will just micro-increment the fee priority above those transactions with no priority, which would be an annoying experience).
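The exponential-step idea can be illustrated with a toy function (purely illustrative; `stepped_price` is not a proposed API):

```rust
// Toy model of the idea above: the effective compute unit price is the
// base price scaled by 2^step, so bids can only move in coarse jumps
// instead of micro-increments.
fn stepped_price(base_price: u64, step: u32) -> u64 {
    match 1u64.checked_shl(step) {
        Some(multiplier) => base_price.saturating_mul(multiplier),
        None => u64::MAX, // step >= 64 would overflow the shift; clamp
    }
}
```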
Not to muddy up this thread too much, but given the compute budget will be apparent up front when packets are checked, I remember @t-nelson being concerned that the fee-priority changes might end up increasing the base price across the entire block, which results in local state not being priced efficiently anymore. @carllin also mentioned yesterday in one of the channels (network protocols I think) that maybe system programs, transfers, and other smaller compute programs should have their own thread/pipe so that they don't get stuck in batches behind all these expensive arb transactions.

I think you can get a bit of what Trent and Carl want by separating transactions into queues based on compute ranges, and designating banking threads that deal with transactions/batches that fall within those certain compute ranges. The base fee-priority could be a bit less static across the block if the quoted priority min prices are specific to certain compute ranges. I.e. big arb transactions fall within a thread that deals with transactions that are 200k to 400k compute, and there's a lot of fee-priority activity there, but in another thread that deals with 1k to 10k compute transactions, things won't be as hot, so the fee-priority that users need to set can stay lower.

I guess it's possible the MEV people would just reduce the size of the arb transactions though... but if there are account-based congestion fees of some sort and the account-based compute limits are in place, hopefully that should help slow the arb transactions creeping into everything. I had mentioned an idea related to this before, and then sorta fused the idea with Ryoqun's span idea (which is sorta batch-like, but different from the current one).
Just some nits, thanks for addressing all the feedback
```rust
        priority: fee.saturating_div(compute_ticks),
    },
    PrioritizationFeeType::Deprecated(fee) => {
        let priority = if max_compute_units == 0 {
```
This got me thinking about the need for a minimum threshold that we can use to filter all of these from the buffer, i.e. return an error if less than this minimum
Yeah, I was thinking about that as well. I think that de-prioritizing those transactions is probably ok for now. But seems useful to have a min on the request units instructions as well.
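A sketch of such a minimum-units filter (the constant's value and the error handling are assumptions, not the PR's implementation):

```rust
// Hypothetical minimum: reject requests below this many compute units
// instead of letting a zero-unit request produce a degenerate priority.
const MIN_REQUESTED_COMPUTE_UNITS: u64 = 1_000; // assumed value

fn validate_requested_units(requested: u64) -> Result<u64, &'static str> {
    if requested < MIN_REQUESTED_COMPUTE_UNITS {
        Err("requested compute units below minimum")
    } else {
        Ok(requested)
    }
}
```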
> This got me thinking about the need for a minimum threshold that we can use to filter all of these from the buffer, i.e. return an error if less than this minimum

There might be some important lessons in EIP-1559's github issue:

> "It is recommended that transactions with the same priority fee be sorted by time the transaction was received to protect the network from spamming attacks where the attacker throws a bunch of transactions into the pending pool in order to ensure that at least one lands in a favorable position."
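The quoted rule maps onto a simple comparator: sort by priority descending, then break ties by arrival time ascending. A self-contained sketch (the names are illustrative):

```rust
struct PendingTx {
    priority: u64,
    arrival_order: u64, // lower = received earlier
}

// Higher-priority transactions first; among equal priorities, the
// earliest-received transaction wins, per the EIP-1559 recommendation.
fn sort_for_scheduling(txs: &mut Vec<PendingTx>) {
    txs.sort_by(|a, b| {
        b.priority
            .cmp(&a.priority)
            .then(a.arrival_order.cmp(&b.arrival_order))
    });
}
```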
I don't exactly follow how that Ethereum issue translates to Solana, can you create a new issue and explain a bit more what you're suggesting?
really like both of these comments.
idk if this is an issue @nikhayes? bc if you change to have this be fee rate instead of just fee, then you're only sorting on fee rate (fee rate is your fee / compute used)
Ahh right right, makes sense, so just sort for priority using the base rate multiplier then, and no way to game anything by manipulating the compute limit and such?
Thanks for the feedback @carllin, I addressed it all in the latest commit.
…25178)

* - get prioritization fee from compute_budget instruction;
  - update compute_budget::process_instruction function to take instruction iter to support sanitized versioned message;
  - updated runtime.md
* update transaction fee calculation for prioritization fee rate as lamports per 10K CUs
* review changes
* fix test
* fix a bpf test
* fix bpf test
* patch feedback
* fix clippy
* fix bpf test
* feedback
* rename prioritization fee rate to compute unit price
* feedback

Co-authored-by: Justin Starry <[email protected]>
(cherry picked from commit b1b3702)
# Conflicts:
#   sdk/src/feature_set.rs

…(backport #25178) (#25238)

* Prioritize transactions in banking stage by their compute unit price (#25178)
* conflicts

Co-authored-by: Tao Zhu <[email protected]>
Co-authored-by: Justin Starry <[email protected]>
```rust
.process_instructions(
    message.program_instructions_iter(),
    false,
    false,
```
@jstarry @taozhu-chicago We are specifying the default unit feature as `false` here when calculating the prioritization fee. Was this intentional? Doing so makes it hard to clean up the default units per instruction feature, because the above `process_instruction` has to support both. Any reason we can't use default units per ix when calculating the prioritization fee?
Looks like the value `false` was inherited from this commit, when this parameter was initially introduced. I agree it doesn't have to be `false`; as a matter of fact, priority-related code assumes `true` in order to use the default units feature.
Just to clarify, I think PR #26684 is valid: across the cluster we should use "default units per instruction", therefore feature J2Qd can be cleared out.

That PR is not valid because it changes the prio fee without a feature. We need a new feature to pass the status of … to …
Problems

- The `ComputeBudget::SetPrioritizationFee` instruction doesn't let users directly set the relative priority of their transactions, and it forces banking stage to do extra arithmetic to determine transaction priority.

Summary of Changes

- Rename `ComputeBudget::SetPrioritizationFee` to `ComputeBudget::SetComputeUnitPrice`, where compute unit price is measured in "micro-lamports".
- Rename the `prioritization_fee_type_change` feature to `add_set_compute_unit_price_ix`.
- Add the `ComputeBudget::SetComputeUnitPrice` instruction (feature gated).

Fixes #24615
(Updated) Feature Gate Issue: #25050