
Refactor pass feature status to deserialized packet via packet meta #31549

Conversation

Contributor

@tao-stones tao-stones commented May 9, 2023

Problem

Passing the current root_bank's feature status from packet_receiver to ImmutableDeserializedPacket is cumbersome. The plumbing goes through many levels of the call stack (poc #31543).

Another possibility is to add a flag to Packet that is set from the root bank's feature status at the moment of packet deserialization. The feature status then travels with the packet instead of being passed through multiple layers of calls. It would also make removing the feature code easier once the feature is fully activated.

Summary of Changes

  • add round-compute-unit-price flag to Packet
  • PacketDeserializer uses bank_forks to get the root_bank's feature set status
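The flag in the first bullet can be sketched as a single bit on the packet's metadata. This is a standalone, simplified stand-in for illustration; the real Packet and flag types live in solana-sdk and differ in detail.

```rust
// Simplified stand-in for a packet-meta feature flag. The names mirror the
// PR (set_round_compute_unit_price / round_compute_unit_price), but this is
// a sketch, not the actual solana-sdk definition.
#[derive(Default, Clone, Copy)]
struct PacketMeta {
    flags: u8,
}

const ROUND_COMPUTE_UNIT_PRICE: u8 = 0b0000_0001;

impl PacketMeta {
    fn set_round_compute_unit_price(&mut self, enabled: bool) {
        if enabled {
            self.flags |= ROUND_COMPUTE_UNIT_PRICE;
        } else {
            self.flags &= !ROUND_COMPUTE_UNIT_PRICE;
        }
    }

    fn round_compute_unit_price(&self) -> bool {
        self.flags & ROUND_COMPUTE_UNIT_PRICE != 0
    }
}

fn main() {
    let mut meta = PacketMeta::default();
    assert!(!meta.round_compute_unit_price());
    // Stamp the flag once at deserialization time; downstream code reads it
    // from the packet instead of taking an extra parameter.
    meta.set_round_compute_unit_price(true);
    assert!(meta.round_compute_unit_price());
    println!("flag set: {}", meta.round_compute_unit_price());
}
```

Because the bit rides along with the packet, removing the feature later means deleting the flag and its single read site rather than unwinding a parameter through every intermediate function.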

Fixes #


codecov bot commented May 9, 2023

Codecov Report

Merging #31549 (4ccf3a5) into master (3293ca4) will increase coverage by 0.0%.
The diff coverage is 100.0%.

@@           Coverage Diff           @@
##           master   #31549   +/-   ##
=======================================
  Coverage    81.3%    81.3%           
=======================================
  Files         733      733           
  Lines      204786   204801   +15     
=======================================
+ Hits       166517   166530   +13     
- Misses      38269    38271    +2     

@tao-stones tao-stones marked this pull request as ready for review May 9, 2023 05:38
@@ -390,7 +390,8 @@ impl BankingStage {
),
};

-        let mut packet_receiver = PacketReceiver::new(id, packet_receiver);
+        let mut packet_receiver =
+            PacketReceiver::new(id, packet_receiver, bank_forks.clone());
Contributor Author

@tao-stones tao-stones May 9, 2023

Passing bank_forks to PacketReceiver and then to PacketDeserializer, so packets can be flagged with the working_bank's feature set.

@@ -55,7 +55,7 @@ impl ImmutableDeserializedPacket {

// drop transaction if prioritization fails.
let mut priority_details = sanitized_transaction
-            .get_transaction_priority_details()
+            .get_transaction_priority_details(packet.meta().round_compute_unit_price())
Contributor Author

@tao-stones tao-stones May 9, 2023

The particular feature flag is then read from the packet's metadata, avoiding passing it through the call stacks.

let mut packet_clone = packet_batch[*packet_index].clone();
packet_clone
.meta_mut()
.set_round_compute_unit_price(round_compute_unit_price_enabled);
Contributor Author

Flag the packet with the root_bank's feature set.

@tao-stones tao-stones requested review from apfitzge and AshwinSekar May 9, 2023 05:44
@@ -42,9 +52,15 @@ impl PacketDeserializer {
capacity: usize,
) -> Result<ReceivePacketResults, RecvTimeoutError> {
let (packet_count, packet_batches) = self.receive_until(recv_timeout, capacity)?;

// get current root bank to get feature gate status
let _root_bank = self.bank_forks.read().unwrap().root_bank();
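The read in the diff above can be sketched as follows. Bank and BankForks here are minimal stand-ins for the real crate types, showing only the lock-then-clone pattern used to get the root bank's feature status.

```rust
use std::sync::{Arc, RwLock};

// Simplified stand-ins; the real Bank/BankForks live in solana-runtime and
// expose feature status through a FeatureSet rather than a plain bool.
struct Bank {
    round_compute_unit_price_enabled: bool,
}

struct BankForks {
    root_bank: Arc<Bank>,
}

impl BankForks {
    fn root_bank(&self) -> Arc<Bank> {
        Arc::clone(&self.root_bank)
    }
}

fn feature_enabled(bank_forks: &Arc<RwLock<BankForks>>) -> bool {
    // Hold the read lock only long enough to clone the Arc to the root bank,
    // then read the feature status off the clone.
    let root_bank = bank_forks.read().unwrap().root_bank();
    root_bank.round_compute_unit_price_enabled
}

fn main() {
    let forks = Arc::new(RwLock::new(BankForks {
        root_bank: Arc::new(Bank { round_compute_unit_price_enabled: false }),
    }));
    println!("enabled: {}", feature_enabled(&forks));
}
```

The short lock scope matters here because this runs once per receive batch on the banking-stage hot path.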
Contributor

I'm not familiar with this specific feature round_compute_unit_price_enabled, but should we be getting it from the root bank? It seems like we should be checking the feature gates of the bank we're currently building, or of our parent.
If it's a feature gate, the whole cluster won't cross the epoch boundary on their root banks at the same time, which would lead to different timing for this feature's activation across the cluster.

Contributor Author

Excellent point! So this PR is a prerequisite for #31469, where the feature gate and its functions are implemented.

The context is: the feature gate is needed (for #31469) between when the banking stage receives a packet from sigverify and before inserting it into the priority-sorted queue (transaction storage), because the prioritization (e.g. compute-unit-price) defined in the compute-budget instruction will be rounded down to the nearest lamport once the feature is activated.

This PR is getting banking stage part of code ready for that.

I was weighing root_bank against working_bank; sounds like working_bank is the better option for this purpose.

Contributor

Okay, makes sense. Since it only affects block packing, it seems like it wouldn't actually affect consensus? Basically it's okay if some members of the cluster are rounding down and some are not, so does it require a feature gate?

Contributor Author

This PR only affects leader code; to be more precise, it only affects the numerical priority value by which packets are sorted in the queue. For example, if a transaction sets its compute-unit-price to 1_000_900: before the feature is enabled, the transaction will be prioritized at 1_000_900; after activation, it would be prioritized at 1_000_000. But all these changes are in the leader's thread queue only.

Just to clarify, the PR does not change code behavior; it only does the plumbing work. (It hardcodes the flag as false, since the feature is not activated, which matches current behavior.)

It is a prerequisite of #31469, where the feature is actually added; its status will affect how transactions are charged priority fees, and hence could affect consensus.
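The numbers above can be checked with a small standalone sketch. It assumes compute-unit-price is expressed in micro-lamports and that rounding means truncating to a whole lamport (1_000_000 micro-lamports); the helper name is hypothetical, not the #31469 implementation.

```rust
// Hedged sketch of the rounding discussed above: when the flag is set, the
// compute-unit-price (in micro-lamports) is rounded down to the nearest
// whole lamport before being used as the priority value.
const MICRO_LAMPORTS_PER_LAMPORT: u64 = 1_000_000;

fn effective_priority(compute_unit_price: u64, round_compute_unit_price: bool) -> u64 {
    if round_compute_unit_price {
        // Integer division truncates, i.e. rounds down to a whole lamport.
        (compute_unit_price / MICRO_LAMPORTS_PER_LAMPORT) * MICRO_LAMPORTS_PER_LAMPORT
    } else {
        compute_unit_price
    }
}

fn main() {
    // Before activation: value passes through unchanged.
    assert_eq!(effective_priority(1_000_900, false), 1_000_900);
    // After activation: 1_000_900 rounds down to 1_000_000.
    assert_eq!(effective_priority(1_000_900, true), 1_000_000);
    // Sub-lamport prices round down to zero, which motivates gating the change.
    assert_eq!(effective_priority(900, true), 0);
    println!("ok");
}
```

The last assertion illustrates the edge case raised later in this thread: rounding down unconditionally would zero out sub-lamport priorities.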

@tao-stones
Contributor Author

Checking the perf impact of reading the working_bank on every receive and flagging every received packet with the feature status. I don't expect much impact; running banking-bench:
./cargo nightly run --release --manifest-path banking-bench/Cargo.toml

This branch:

[total_sent: 1152000, base_tx_count: 55296, txs_processed: 1187712, txs_landed: 1132416, total_us: 22317942, tx_total_us: 21732693]
{'name': 'banking_bench_total', 'median': '51617.66'}
{'name': 'banking_bench_tx_total', 'median': '53007.70'}
{'name': 'banking_bench_success_tx_total', 'median': '50740.16'}

Head of Master:

[total_sent: 1152000, base_tx_count: 55296, txs_processed: 1183488, txs_landed: 1128192, total_us: 21940348, tx_total_us: 21354720]
{'name': 'banking_bench_total', 'median': '52506.00'}
{'name': 'banking_bench_tx_total', 'median': '53945.92'}
{'name': 'banking_bench_success_tx_total', 'median': '51420.88'}

1. add packet meta flag;
2. prioritization_fee_cache reads the flag from its working bank directly;
3. immutable_deserialized_packet reads the flag from packet.meta;
4. packet_deserializer uses the current root_bank from bank_forks to get feature status and uses it to flag packet meta.
@tao-stones tao-stones force-pushed the refactor-pass-feature-status-to-deserialized-packet-via-packet-meta branch from 94fefbb to 4ccf3a5 Compare May 10, 2023 17:35
Contributor

@AshwinSekar AshwinSekar left a comment

lgtm

@tao-stones tao-stones merged commit 49f44f5 into solana-labs:master May 11, 2023
@tao-stones tao-stones deleted the refactor-pass-feature-status-to-deserialized-packet-via-packet-meta branch May 11, 2023 14:32
Contributor

@apfitzge apfitzge left a comment

I realize this was already merged, but I'm following up to push back on it a bit. I think we should always be hesitant to add feature usage to things that do not affect consensus; it adds complexity that is not necessary, edge cases, etc.

Leaders are already free to do whatever they want in terms of buffer prioritization; it's not enforceable by consensus. As an alternative, why not just always round down for buffer prioritization, and calculate the actual fee using the bank the tx is committed to?

To the best of my knowledge, there are no plans to backport this. So we have to wait for v1.16+ to activate anyway?

Additionally, the current method is already going to lead to some oddities around the feature activation:

gantt
    title Receiving Window Race
    dateFormat HH:mm
    axisFormat %H:%M

    section Banks
    Bank 1 (Not Leader): b1, 00:00, 40m
    Epoch Boundary: milestone, after b1
    Bank 2 (Leader): after b1, 40m

    section Receiving
    Receiving 1: 00:05, 5m
    Receiving 2: 00:35, 3m
    Receiving 3: 00:42, 3m

Packets from receive windows 1/2 will be unrounded, while packets from receive 3 will be rounded.

@tao-stones
Contributor Author

Valid point. We chatted offline and agreed that if leaders started to round down before the feature gate (effectively passing a hardcoded true at runtime/src/transaction_priority_details.rs line 28), users who currently set a compute-unit-price lower than 1 lamport would be charged a priority fee without being prioritized (since it would round down to zero). That would break current behavior and needs extra communication.

I think it is better to round down in both the leader and the fee calculation behind the same feature gate, to keep behavior consistent, and to focus communication on the point that this change is about economically incentivizing accurate compute-unit-limit requests.
