
Prioritize transactions in banking stage by their compute unit price #25178

Merged · 12 commits · May 16, 2022

Conversation

@tao-stones (Contributor) commented May 12, 2022

Problems

  • The ComputeBudget::SetPrioritizationFee instruction doesn't let users directly set the relative priority of their transactions, and it forces the banking stage to do extra arithmetic to determine transaction priority
  • Banking stage doesn't calculate transaction priority yet.

Summary of Changes

  • Migrated ComputeBudget::SetPrioritizationFee to ComputeBudget::SetComputeUnitPrice, where the compute unit price is measured in micro-lamports per compute unit
  • Renamed the prioritization_fee_type_change feature to add_set_compute_unit_price_ix
  • Banking stage now prioritizes transactions based on compute budget instructions (not feature gated)
  • Updated prioritization fee calculations for instructions that use the ComputeBudget::SetComputeUnitPrice instruction (feature gated)
  • Updated docs

Fixes #24615
(Updated) Feature Gate Issue: #25050

sdk/src/compute_budget.rs — resolved review thread (outdated)
```diff
@@ -35,8 +35,8 @@ pub enum ComputeBudgetInstruction {
     /// Request a specific maximum number of compute units the transaction is
     /// allowed to consume and an additional fee to pay.
     RequestUnits(u32),
-    /// Additional fee in lamports to charge the payer, used for transaction
-    /// prioritization
+    /// Additional fee in "lamports per 10K CUs" to charge the payer, used for
```
Review comment (Member):

I think we can set a higher fee rate unit here (1M instead of 10k) because it gets us more granularity. It means the max fee rate is still u64::MAX (~2*10^19) / (10^6 CU) / (10^9 lamports per SOL) = ~20,000 SOL / CU which should be more than enough I hope 😅

Suggested change:

```diff
-/// Additional fee in "lamports per 10K CUs" to charge the payer, used for
+/// Set a prioritization fee rate in "lamports per 1M CUs" to charge the payer, used for
```
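The headroom arithmetic in the comment above can be sanity-checked with plain integer math. This is a standalone sketch, not SDK code; the constant and function names are mine. Note that "lamports per 1M CUs" is numerically the same unit as micro-lamports per CU.

```rust
const MICRO_LAMPORTS_PER_LAMPORT: u64 = 1_000_000; // 10^6
const LAMPORTS_PER_SOL: u64 = 1_000_000_000; // 10^9

/// Largest expressible compute-unit price, converted to whole SOL per CU,
/// assuming the price field is a u64 measured in micro-lamports per CU
/// (equivalently, lamports per 1M CUs).
fn max_price_sol_per_cu() -> u64 {
    u64::MAX / MICRO_LAMPORTS_PER_LAMPORT / LAMPORTS_PER_SOL
}
```

Integer division gives 18,446 SOL per CU, which is the "~20,000 SOL / CU" ballpark cited above.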

runtime/src/bank.rs — resolved review thread (outdated)
@jstarry (Member) left a comment:

What do you think we should do about the behavior of RequestUnitsDeprecated::additional_fee when the prioritization_fee_type_change feature is activated? In your current implementation it will start being treated as a rate instead of a fee which I don't think we want.

@tao-stones (Contributor, Author):

> What do you think we should do about the behavior of RequestUnitsDeprecated::additional_fee when the prioritization_fee_type_change feature is activated? In your current implementation it will start being treated as a rate instead of a fee which I don't think we want.

This is a good question. My take is that when prioritization_fee_type_change is activated, the type/meaning of the additional fee has changed to a fee rate, regardless of the instruction version, leaving it to the call site to handle the difference. For example, bank.rs treats it differently based on the feature's activation. But I don't know if this is suitable for app devs.

Another option is to disallow the Deprecated variant if prioritization_fee_type_change is on, and simply return an error from compute_budget::process_instruction().

@jstarry (Member) commented May 13, 2022

I'm in favor of removing support when that feature is activated. @jackcmay what do you think?

@tao-stones (Contributor, Author):

> removing support when that feature is activated

created issue #25201 to track this proposal separately

@tao-stones force-pushed the get-lamports-per-cus-from-ix branch from 1abd3e9 to 0084b87 on May 14, 2022 04:21
@jstarry (Member) commented May 14, 2022

> created issue #25201 to track this proposal separately

@taozhu-chicago I'm not in favor of merging a PR that breaks another feature even if it's deprecated. I brought up the issue because I feel that we need to address it in this PR

```rust
            .unwrap_or_default();
        let prioritization_fee = if prioritization_fee_type_change {
            prioritization_fee_rate.saturating_mul(PRIORITIZATION_FEE_RATE_UNITS)
```
Review comment (Member):

Prioritization fee rate represents lamports per 10k CU so the prioritization fee should be equal to prioritization_fee_rate * compute_budget.max_units / PRIORITIZATION_FEE_RATE_UNITS
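At this stage of the PR the rate unit was lamports per 10k CUs; the corrected formula called out above can be sketched as follows. This is an illustrative standalone sketch, not the actual runtime code, and the round-up is my addition (it prevents a nonzero rate from truncating to a zero fee).

```rust
const PRIORITIZATION_FEE_RATE_UNITS: u128 = 10_000; // 10k CUs per rate unit

/// fee = rate * max_units / PRIORITIZATION_FEE_RATE_UNITS, rounded up.
/// u128 intermediate math avoids overflow for any u64 inputs.
fn prioritization_fee(rate: u64, max_units: u64) -> u64 {
    let numerator = rate as u128 * max_units as u128;
    let fee = (numerator + PRIORITIZATION_FEE_RATE_UNITS - 1) / PRIORITIZATION_FEE_RATE_UNITS;
    u64::try_from(fee).unwrap_or(u64::MAX)
}
```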

@jstarry (Member) left a comment:

We need to protect against someone setting a very high prioritization rate but requesting 0 compute units as well

@tao-stones (Contributor, Author):

>> created issue #25201 to track this proposal separately
>
> @taozhu-chicago I'm not in favor of merging a PR that breaks another feature even if it's deprecated. I brought up the issue because I feel that we need to address it in this PR

I thought that was a change outside the scope of this PR. To be on the same page: the change would be within the compute_budget::process_instructions() function, returning invalid_instruction_data_error for RequestUnitsDeprecated if prioritization_fee_type_change is true, is that right? Maybe we can do a separate PR for it and merge it before, or together with, this one.

@jstarry (Member) commented May 14, 2022

In the interest of time, I went ahead and addressed my review feedback in this PR: tao-stones#5

@jstarry jstarry added the feature-gate Pull Request adds or modifies a runtime feature gate label May 14, 2022
docs/src/developing/programming-model/runtime.md — resolved review thread (outdated)
program-runtime/src/prioritization_fee.rs — resolved review thread (outdated)
program-runtime/src/compute_budget.rs — resolved review thread (outdated)
```rust
        match fee_type {
            PrioritizationFeeType::Deprecated(fee) => Self {
                fee,
                priority: fee.saturating_div(compute_ticks),
```
Review comment (Contributor):

Is there a reason we don't round up for this compute_ticks like we do in the PrioritizationFeeType::Rate case?

In the deprecated case, should this just be a div by the max_compute_units?

Reply (Member):

> Is there a reason we don't round up for this compute_ticks like we do in the PrioritizationFeeType::Rate case?

No reason, we should be rounding up here. Nice catch.

> In the deprecated case, should this just be a div by the max_compute_units?

Dividing by compute ticks gives us more granularity
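The round-up being asked for above can be sketched like this. A standalone illustration under assumed names (the tick size and the zero-compute guard mirror points raised in this review, but this is not the actual patch):

```rust
const COMPUTE_UNITS_PER_TICK: u64 = 10_000; // assumed tick size

/// Deprecated path: priority = fee / ticks, where the tick count is the
/// compute budget rounded *up* to whole ticks.
fn deprecated_priority(fee: u64, max_compute_units: u64) -> u64 {
    let compute_ticks =
        max_compute_units.saturating_add(COMPUTE_UNITS_PER_TICK - 1) / COMPUTE_UNITS_PER_TICK;
    if compute_ticks == 0 {
        // zero requested compute: de-prioritize rather than divide by zero
        0
    } else {
        fee.saturating_div(compute_ticks)
    }
}
```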

Reply (Member):

I fixed this in the latest patch, can you please take another close look at this logic?

runtime/src/bank.rs — review thread (resolved)
sdk/src/compute_budget.rs — resolved review thread (outdated)
@jstarry (Member) commented May 15, 2022

There are a few remaining important things to decide:

  1. Units of prioritization fee rate

    I think that "compute ticks" are less intuitive than a sub-lamport unit. I propose that we switch from measuring the fee rate in lamports per 10k cu, and instead measure it in micro-lamports (10^-6) per cu. Using a metric prefix is more intuitive to understand than an arbitrary tick size.

    Before:

    • transaction senders set the compute unit limit and the compute tick price (measured in lamports)
    • ... need to know that there are 10^4 compute units in one compute tick
    • ... and then know that the fee is calculated by multiplying the limit by the compute tick price, dividing that result by 10^4, and then round up to the nearest lamport
    • ... as opposed to calculating the fee by first rounding compute units to the nearest tick and then multiplying that result by the compute tick price

    After:

    • transaction senders set the compute unit limit and compute unit price (measured in micro-lamports)
    • ... need to know that there are 10^6 micro-lamports in one lamport (hopefully intuitive with metric prefix)
    • ... and calculate the fee by multiplying the limit by the price and round the micro-lamport fee to the nearest lamport
  2. Fee credits for unused compute units

    Since unused compute doesn't result in a credit back to the fee payer, transaction senders will be incentivized to request as few compute units as possible. I believe this will lead to a poor user experience where too much responsibility falls on users to determine how much compute their tx will use. It's very hard to accurately predict compute because your tx could be front-run and computations could get slightly more expensive between the time you simulate your tx and it is actually processed. In that case, your tx will fail and you will have lost out on all your fees. I think it's much better to allow senders to set a conservatively high amount of compute with the expectation that unused compute will be refunded.
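The micro-lamport fee calculation in point 1 reduces to a single round-up division. A minimal sketch under the units described above (illustrative names, not the exact runtime code):

```rust
const MICRO_LAMPORTS_PER_LAMPORT: u128 = 1_000_000; // 10^6

/// fee (lamports) = ceil(limit * price_in_micro_lamports_per_cu / 10^6)
fn fee_lamports(compute_unit_limit: u64, compute_unit_price: u64) -> u64 {
    // total fee in micro-lamports; u128 avoids overflow for any u64 inputs
    let micro_lamports = compute_unit_limit as u128 * compute_unit_price as u128;
    // round the micro-lamport fee up to the nearest whole lamport
    let lamports = (micro_lamports + MICRO_LAMPORTS_PER_LAMPORT - 1) / MICRO_LAMPORTS_PER_LAMPORT;
    u64::try_from(lamports).unwrap_or(u64::MAX)
}
```

For example, a 200k CU limit at a price of 5 micro-lamports per CU costs exactly 1 lamport.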

@jstarry (Member) commented May 15, 2022

> There are a few remaining important things to decide:
>
> 1. Units of prioritization fee rate

I just pushed a commit which changes the fee rate unit from "lamports / compute unit ticks" to "micro-lamports / compute unit"

> 2. Fee credits for unused compute units

I'll work on a separate PR for this

- update compute_budget::process_instruction function to take instruction iter to support sanitized versioned message;
- updated runtime.md
@nikhayes commented May 15, 2022

> [quotes @jstarry's two points above: units of the prioritization fee rate, and fee credits for unused compute units]

For 2 maybe could have a refund up to some sane % compute above the actual compute so that they don't play the system by setting the compute unreasonably high, getting prioritized, then getting away with a full refund? RPCs/wallets could just default to recommending a price to the user which is the % above the simulated compute that matches the % of error allowed for the refund. I guess compute could drop in the meantime but should be minimal?

@jstarry (Member) commented May 15, 2022

> For 2 maybe could have a refund up to some sane % compute above the actual compute so that they don't play the system by setting the compute unreasonably high, getting prioritized, then expecting a full refund?

Good point. There are a number of edge cases here. Ideally you can't do anything interesting with zero compute, but right now many builtin programs don't charge compute units at all. Maybe the runtime should impose a min compute unit consumption for each tx of like 1000 compute units. I've updated #25231 with some TODO's to reflect these issues.

> Also, I think the way to determine fee priority will seem confusing to a lot of users? I'm not sure if adding fee priority will become a normal thing for most, but if so, doing the mental calculations/predictions might take more time than the actual time it should take a transaction to land --- and by then maybe fee priority rates have already shifted?

Agreed, we need a way for wallets to tell users what a reasonable prioritization fee is if the user wants to set one. I think that RPC nodes could provide a stream of the average compute unit price for txs in recently confirmed blocks. Then wallets can suggest prioritization parameters to the user and insert them into the transaction. A wallet integration guide and SDK support will need to be built for all of this.

@nikhayes commented May 15, 2022

> [quotes @jstarry's reply above]

The other thing I was thinking of, which you might have seen me mention elsewhere is that I think the compute unit price might be better in multiples of the base compute unit price, or some other larger step size. If the compute unit pricing is too fine grained I think there will be lots of games happening where rpcs/bots will be trying to adjust the base amount with micro changes and so it might be even harder to predict and could just incur more spam. A 2x increase of base cost for even the largest compute transactions is still going to be very cheap from what I thought I had seen in the economics channel, so doesn't seem like a big deal? If people want transactions prioritized I assume they'll need to ramp up the cost at least 2x. The nice aspect of multiples is that it would be a lot easier for users too --- i.e. "I'm going to boost price 3x" (which will still be subcent atm for most txns) --- might make the pricing easier to track/predict across blocks too.

@jstarry (Member) commented May 15, 2022

@nikhayes sounds like you're suggesting we allow transactions to set a "fee multiplier" which can only be incremented in preset increments?

@nikhayes commented May 15, 2022

> @nikhayes sounds like you're suggesting we allow transactions to set a "fee multiplier" which can only be incremented in preset increments?

Yeah, basically all the user would ever need to do is set a whole-number multiple of the compute unit price. The RPC would give them an estimate for the multiplier needed and a safe estimate of the compute needed --> they would see the price in the wallet --> then when it gets executed they could get the rebate based on the multiplier * compute over-estimated, up to some % error limit. Not sure if this works though?
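The multiplier-plus-capped-rebate flow proposed here can be sketched as below. This is purely hypothetical: nothing like it exists in the runtime, and every name is invented for illustration.

```rust
/// Hypothetical rebate for unused compute, capped so that requesting an
/// unreasonably high compute budget cannot yield a full refund: only unused
/// units up to `max_overshoot_pct` percent of the *consumed* units refund.
fn capped_rebate(
    price_per_cu: u64,      // multiplier * base compute-unit price
    requested_units: u64,   // compute units requested up front
    consumed_units: u64,    // compute units actually used
    max_overshoot_pct: u64, // refundable overshoot, as % of consumed units
) -> u64 {
    let unused = requested_units.saturating_sub(consumed_units);
    let cap = consumed_units.saturating_mul(max_overshoot_pct) / 100;
    unused.min(cap).saturating_mul(price_per_cu)
}
```

With a 20% cap, a sender who pads a 100k-CU transaction to 120k gets the full unused amount back, while one who pads it to 200k only gets the capped 20k CUs refunded.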

@jstarry (Member) commented May 15, 2022

> [quotes the multiplier proposal above]

I like this a lot because it's very intuitive for users and the high step size is good for removing the incentive to spam. @aeyakovenko @taozhu-chicago @carllin thoughts?

@nikhayes commented May 15, 2022

> [quotes the multiplier proposal and @jstarry's endorsement above]

When I was originally thinking about it I was even thinking that it could be exponential step sizes (i.e. compute unit price * 2^step)... probably too extreme though :). Multiples might seem like big step sizes but should still be cheap, and I think step sizes are needed or else normal users will constantly need to use fee-priority (because bots will just micro-increment the fee priority above those transactions with no priority, which would be an annoying experience).

@nikhayes commented May 15, 2022

> [quotes the multiplier proposal and @jstarry's endorsement above]

Not to muddy up this thread too much, but given compute budget will be apparent up front when packets are checked, I remember @t-nelson being concerned that the fee-priority changes might end up increasing the base price across the entire block which results in local state not being priced efficiently anymore. @carllin also mentioned yesterday in one of the channels (network protocols I think) that maybe system programs, transfers, and other smaller compute programs should have their own thread/pipe so that they don't get stuck in batches behind all these expensive arb transactions.

I think you can get a bit of what Trent and Carl want by separating transactions into queues based on compute ranges, and designating banking threads that deal with transactions/batches that fall within those certain compute ranges. The base fee-priority could be a bit less static across the block if the quoted priority min prices is specific to certain compute ranges. i.e. big arb transactions fall within a thread that deals with transactions that are 200k to 400k compute, and there's a lot of fee-priority activity there, but in another thread that deals with 1k to 10k compute transactions, things won't be as hot, so the fee-priority that users need to set can stay lower. I guess it's possible the MEV people would just reduce the size of the arb transactions though... but if there are account based congestion fees of some sort and the account based compute limits are in place hopefully that should help slow the arb transactions creeping into everything.

I had mentioned an idea related to this before, and then sorta fused the idea with Ryoqun's span idea (which is sorta batch-like, but different from the current one).
#23548 (comment)

@carllin (Contributor) previously approved these changes May 15, 2022 and left a comment:

Just some nits, thanks for addressing all the feedback

sdk/src/compute_budget.rs — resolved review thread (outdated)
program-runtime/src/compute_budget.rs — review thread (resolved)
```rust
                priority: fee.saturating_div(compute_ticks),
            },
            PrioritizationFeeType::Deprecated(fee) => {
                let priority = if max_compute_units == 0 {
```
Review comment (Contributor):

This got me thinking about the need for a minimum threshold that we can use to filter all of these from the buffer, i.e. return an error if less than this minimum

Reply (Member):

Yeah, I was thinking about that as well. I think that de-prioritizing those transactions is probably ok for now. But seems useful to have a min on the request units instructions as well.

Reply:

> This got me thinking about the need for a minimum threshold that we can use to filter all of these from the buffer, i.e. return an error if less than this minimum

There might be some important lessons in EIP1559's github issue

"It is recommended that transactions with the same priority fee be sorted by time the transaction was received to protect the network from spamming attacks where the attacker throws a bunch of transactions into the pending pool in order to ensure that at least one lands in a favorable position."

Reply (Member):

I don't exactly follow how that Ethereum issue translates to Solana, can you create a new issue and explain a bit more what you're suggesting?

program-runtime/src/prioritization_fee.rs — resolved review thread (outdated)
docs/src/developing/programming-model/runtime.md — resolved review thread (outdated)
@buffalu (Contributor) commented May 15, 2022

> [quotes @jstarry's two points above: units of the prioritization fee rate, and fee credits for unused compute units]

really like both of these comments.

  1. makes wrapping your head around things way easier.
  2. seems ideal to refund fees based on actual compute units. I imagine wallets can call simulate_transaction, add a buffer (let's say 20%), and use that as the requested compute + fee calculation.

> For 2 maybe could have a refund up to some sane % compute above the actual compute so that they don't play the system by setting the compute unreasonably high, getting prioritized, then expecting a full refund?

idk if this is an issue @nikhayes ? bc if you change to have this be fee rate instead of just fee, then you're only sorting on fee rate (fee rate is your fee / compute used)
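The "simulate, then pad" wallet flow suggested in point 2 above is a one-liner. A sketch with illustrative names (there is no such helper in the SDK):

```rust
/// Wallet-side helper: take the compute units reported by transaction
/// simulation and pad them by a safety buffer (e.g. 20%) before setting
/// the requested compute unit limit.
fn padded_compute_limit(simulated_units: u64, buffer_percent: u64) -> u64 {
    simulated_units.saturating_add(simulated_units.saturating_mul(buffer_percent) / 100)
}
```

So a transaction that simulates at 100k CUs would request 120k with a 20% buffer; under the proposed refund scheme the unused padding would be credited back.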

@nikhayes commented May 15, 2022

> idk if this is an issue @nikhayes ? bc if you change to have this be fee rate instead of just fee, then you're only sorting on fee rate (fee rate is your fee / compute used)

Ahh right right, makes sense, so just sort for priority using the base rate multiplier then and no way to game anything by manipulating compute limit and such?

@jstarry (Member) left a comment:

Thanks for the feedback @carllin, I addressed it all in the latest commit.

```rust
                priority: fee.saturating_div(compute_ticks),
            },
            PrioritizationFeeType::Deprecated(fee) => {
                let priority = if max_compute_units == 0 {
```
Review comment (Member):

Yeah, I was thinking about that as well. I think that de-prioritizing those transactions is probably ok for now. But seems useful to have a min on the request units instructions as well.

program-runtime/src/compute_budget.rs — review thread (resolved)
@mergify bot dismissed carllin's stale review May 16, 2022 02:22 — pull request has been modified.

@jstarry jstarry changed the title get prioritization-fee-rate from compute-budget for min-max heap Prioritize transactions in banking stage based on their compute unit price May 16, 2022
@jstarry jstarry changed the title Prioritize transactions in banking stage based on their compute unit price Prioritize transactions in banking stage by their compute unit price May 16, 2022
@jstarry jstarry merged commit b1b3702 into solana-labs:master May 16, 2022
mergify bot pushed a commit that referenced this pull request May 16, 2022
…25178)

* - get prioritization fee from compute_budget instruction;
- update compute_budget::process_instruction function to take instruction iter to support sanitized versioned message;
- updated runtime.md

* update transaction fee calculation for prioritization fee rate as lamports per 10K CUs

* review changes

* fix test

* fix a bpf test

* fix bpf test

* patch feedback

* fix clippy

* fix bpf test

* feedback

* rename prioritization fee rate to compute unit price

* feedback

Co-authored-by: Justin Starry <[email protected]>
(cherry picked from commit b1b3702)

# Conflicts:
#	sdk/src/feature_set.rs
jstarry added a commit that referenced this pull request May 16, 2022
…(backport #25178) (#25238)

* Prioritize transactions in banking stage by their compute unit price (#25178)

* [same squashed commit list as in the mergify commit above]

Co-authored-by: Justin Starry <[email protected]>
(cherry picked from commit b1b3702)

# Conflicts:
#	sdk/src/feature_set.rs

* conflicts

Co-authored-by: Tao Zhu <[email protected]>
Co-authored-by: Justin Starry <[email protected]>
@tao-stones tao-stones deleted the get-lamports-per-cus-from-ix branch May 16, 2022 15:57
```rust
            .process_instructions(
                message.program_instructions_iter(),
                false,
                false,
```
Review comment (Contributor):

@jstarry @taozhu-chicago We are specifying the default-units feature as false here when calculating the prioritization fee. Was this intentional? Doing so makes it hard to clean up the default units-per-instruction feature, because process_instructions above has to support both. Is there any reason we can't use default units per instruction when calculating the prioritization fee?

Reply (Contributor, Author):

Looks like the value false was inherited from this commit when this parameter was initially introduced.

I agree it doesn't have to be false; as a matter of fact, priority-related code assumes true so that it uses the default units feature

Reply (Contributor, Author):

Just to clarify, I think PR #26684 is valid: across the cluster we should use "default units per instruction", therefore feature J2Qd can be cleared out.

@jackcmay (Contributor):

That PR is not valid because it changes the prioritization fee without a feature gate. We need a new feature to pass that status to process_message

Labels
feature-gate Pull Request adds or modifies a runtime feature gate
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Add actual computation of transaction weight/fee-per-cu
6 participants