
QUIC server: Spawn for each stream #26086

Merged: 1 commit merged into solana-labs:master from spawn-each-stream on Jun 27, 2022

Conversation

sakridge (Member) commented Jun 21, 2022

Problem

Awaiting streams in order can produce head-of-line blocking when streams arrive out of order.

Summary of Changes

Spawn a task for each stream.

Fixes #26031
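
To make the change concrete, here is a minimal sketch of the per-stream spawning pattern, assuming a quinn ~0.8-style API (the handle_connection/handle_stream names and loop shape are illustrative, not the PR's actual code):

    use futures_util::StreamExt;

    // Sketch only: spawn one tokio task per incoming unidirectional stream
    // instead of awaiting each stream's data inline in the accept loop.
    async fn handle_connection(mut uni_streams: quinn::IncomingUniStreams) {
        while let Some(Ok(stream)) = uni_streams.next().await {
            // A slow or stalled stream now delays only its own task,
            // not the streams that arrive after it.
            tokio::spawn(async move {
                handle_stream(stream).await;
            });
        }
    }

    async fn handle_stream(_stream: quinn::RecvStream) {
        // Read chunks from the stream here (elided).
    }

The trade-off, raised later in this thread, is how many tasks a single peer can push into the runtime.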

@sakridge force-pushed the spawn-each-stream branch 2 times, most recently from 31d7043 to b9e2a4a on June 24, 2022 14:21
@sakridge force-pushed the spawn-each-stream branch from b9e2a4a to 0c468a5 on June 24, 2022 14:29
@sakridge requested a review from pgarg66 on June 25, 2022 10:19
sakridge (Member, Author) commented:

Bench results:

This PR:

[2022-06-25T10:43:31.843037850Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 2 threads: kpps: 27.40 mbps: 5.48
     send_streams took 7.3s received_packets took 7.3s
[2022-06-25T10:43:36.183029597Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 4 threads: kpps: 46.08 mbps: 9.22
     send_streams took 4.3s received_packets took 4.3s
[2022-06-25T10:43:38.993553940Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 8 threads: kpps: 71.16 mbps: 14.23
     send_streams took 2.8s received_packets took 2.8s
[2022-06-25T10:43:41.272188161Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 16 threads: kpps: 87.77 mbps: 17.55
     send_streams took 2.2s received_packets took 2.3s
[2022-06-25T10:43:43.195212551Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 32 threads: kpps: 104.00 mbps: 20.80
     send_streams took 1.9s received_packets took 1.9s

Master:

[2022-06-25T10:48:47.868729557Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 2 threads: kpps: 26.19 mbps: 5.24
     send_streams took 7.6s received_packets took 7.6s
[2022-06-25T10:48:52.237211401Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 4 threads: kpps: 45.78 mbps: 9.16
     send_streams took 4.3s received_packets took 4.4s
[2022-06-25T10:48:54.843824592Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 8 threads: kpps: 76.73 mbps: 15.35
     send_streams took 2.6s received_packets took 2.6s
[2022-06-25T10:48:56.935298626Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 16 threads: kpps: 95.63 mbps: 19.13
     send_streams took 2.1s received_packets took 2.1s
[2022-06-25T10:48:58.771416279Z INFO  solana_streamer::quic::test] sent 200000x 200b packets with 32 threads: kpps: 108.93 mbps: 21.79
     send_streams took 1.8s received_packets took 1.8s
[2022-06-25T10:48:59.260755423Z INFO  solana_streamer::quic::test] after store took 489ms

@sakridge requested a review from lijunwangs on June 25, 2022 10:57
sakridge (Member, Author) commented Jun 25, 2022

This PR:
[chart: spawn-every-stream-bench-tps]

Master:
[chart: bench-tps-master]

Both compared, first master then this PR:
[chart: first-master-2nd-spawn-every-stream]

pgarg66 (Contributor) commented Jun 25, 2022

Do you think the use of tokio::time::timeout already solved this problem? Specifically, this code in quic.rs:

    if let Ok(chunk) = tokio::time::timeout(
        Duration::from_millis(WAIT_FOR_STREAM_TIMEOUT_MS),
        stream.read_chunk(PACKET_DATA_SIZE, false),
    )
    .await

lijunwangs (Contributor), quoting the charts above:

This seems to make the transaction throughput and confirmation time more evenly distributed over time compared with master?

An inline review comment thread on this hunk of the diff:

    let stats = stats.clone();
    let packet_sender = packet_sender.clone();
    let last_update = last_update.clone();
    tokio::spawn(async move {
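
The three clones above are the standard clone-before-spawn idiom: async move takes ownership of everything it captures, so each spawned task gets its own handle to the shared state. A self-contained illustration (the types here are generic, not the PR's):

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::sync::Arc;

    #[tokio::main]
    async fn main() {
        let stats = Arc::new(AtomicU64::new(0));
        let mut handles = Vec::new();
        for _ in 0..4 {
            // Clone the Arc before spawning; `async move` would otherwise
            // move the original handle out of the loop on the first pass.
            let stats = stats.clone();
            handles.push(tokio::spawn(async move {
                stats.fetch_add(1, Ordering::Relaxed);
            }));
        }
        for handle in handles {
            handle.await.unwrap();
        }
        assert_eq!(stats.load(Ordering::Relaxed), 4);
    }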
A reviewer (Contributor) commented:
The concern probably is that we may spawn too many async tasks into the runtime on the server side compared with before. But I think that could be addressed by tweaking the maximum streams allowed for each connection if we find issues.
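
For reference, a hedged sketch of that knob, assuming quinn's TransportConfig API (names match quinn's public API around the 0.8 series; the version this repo pins may differ). Capping concurrent unidirectional streams per connection also caps how many per-stream tasks a single peer can cause the server to spawn:

    use quinn::{ServerConfig, TransportConfig, VarInt};
    use std::sync::Arc;

    // Bound per-connection stream concurrency so one peer cannot push an
    // unbounded number of per-stream tasks onto the runtime.
    fn cap_uni_streams(mut server_config: ServerConfig, max_streams: u32) -> ServerConfig {
        let mut transport = TransportConfig::default();
        transport.max_concurrent_uni_streams(VarInt::from_u32(max_streams));
        server_config.transport = Arc::new(transport);
        server_config
    }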

@sakridge merged commit 2cc48a6 into solana-labs:master on Jun 27, 2022
@sakridge deleted the spawn-each-stream branch on June 27, 2022 19:03
@sakridge added the v1.10 label on Jun 27, 2022
mergify bot pushed a commit that referenced this pull request on Jun 27, 2022 (cherry picked from commit 2cc48a6)
mergify bot added a commit that referenced this pull request on Jun 27, 2022 (cherry picked from commit 2cc48a6)

Co-authored-by: sakridge <[email protected]>
Successfully merging this pull request may close these issues: QUIC server introduces head-of-line blocking for a connection with multiple streams (#26031)