Refactor for pluggable congestion control algorithm #1949
Conversation
For the CLOG stuff, you should just need to run the …
Windows build seems to be spitting out a lot of errors too:
One build error:
@@ -0,0 +1,106 @@
/*++

Copyright (c) Microsoft Corporation.
Why do we need a separate file for this?
We need to separate the definition of QUIC_CONGESTION_CONTROL_CUBIC from all of the Cubic functions so that:

- We can include cubic.h in congestion_control.h and use QUIC_CONGESTION_CONTROL_CUBIC inside union QUIC_CONGESTION_CONTROL_CONTEXT. That union exists to find the maximum context size across all congestion control algorithms.
- We avoid a circular include between cubic_impl.h and congestion_control.h.

(See the sketch below.)
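A rough sketch of that layout, assuming hypothetical field names; only the include structure and the union are taken from the comment above, the rest is illustrative:

```c
#include <stdint.h>

/* cubic.h (sketch): only the per-algorithm state, no function declarations,
 * so congestion_control.h can include it without creating a cycle. */
typedef struct QUIC_CONGESTION_CONTROL_CUBIC {
    uint32_t CongestionWindow;     /* hypothetical fields for illustration */
    uint32_t SlowStartThreshold;
} QUIC_CONGESTION_CONTROL_CUBIC;

/* congestion_control.h (sketch): the union is sized by the largest
 * algorithm's state, so a connection can embed any algorithm's context. */
typedef union QUIC_CONGESTION_CONTROL_CONTEXT {
    QUIC_CONGESTION_CONTROL_CUBIC Cubic;
    /* other algorithms (e.g. a future BBR struct) would be added here */
} QUIC_CONGESTION_CONTROL_CONTEXT;

/* cubic_impl.h / cubic_impl.c (sketch): the Cubic function declarations and
 * bodies live here and include congestion_control.h, never the reverse. */
```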
#include "precomp.h" | ||
#ifdef QUIC_CLOG | ||
#include "cubic_impl.c.clog.h" |
Why not just cubic.c?
Windows kernel build failed. I think you missed some project file updates for file name changes.
Thanks for making this change! It's been on our to-do list for a while!
@Wizmann what CC algorithm are you working on? Is it BBR? v1 or v2? We likely wouldn't accept BBRv1 code.
It's great to see this PR completed. It makes my day! Thanks! The CC algorithm I'm working on is BBRv1. Since BBRv2 is still in alpha/preview (link), I think it's safer to use a more mature CC algorithm, especially in a production environment. Besides, mvfst and Chromium both support BBRv1. May I ask why MsQuic doesn't want BBRv1?
BBRv1 has known problems (it doesn't share fairly with loss-based CC algorithms) that would prevent us from actually using it in production. Those issues are why BBRv2 was created.
Per our tests, bandwidth fairness and latency fairness should be OK as long as the connections don't hit the bottleneck's limit. It's a dilemma: we have to choose between a CC algorithm with a known fairness flaw (BBRv1) and one that is still in alpha (BBRv2). From my point of view, BBRv1 still has many users and works well in multiple scenarios.
But isn't the whole point of CC to deal with what happens if you're bottlenecked by the network? You have to assume you reach this bottleneck. If you have the code already, feel free to send the PR and we can discuss more; but if you're going to dedicate significant time, we highly recommend BBRv2 instead.
Sometimes that's not true. Say we have an LFN with 200 ms RTT and 0.2% packet loss: the best we can get from Cubic is about 11 MB/s, while BBRv1 can reach up to 115 MB/s. Clearly, 11 MB/s from Cubic is not "bottlenecked by the network"; Cubic just can't fully utilize the bandwidth. I think if we run BBR alongside other Cubic connections without reaching the network's bottleneck, e.g. a BBR connection with 50 MB/s throughput, we can utilize more bandwidth without harming fairness. (Correct me if I'm wrong.) As for the code, we already have BBR code for testing; it may take some time to prepare a PR. And we'll consider BBRv2, of course.
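For a rough sense of why loss-based congestion control hits such a ceiling on a long fat network, the classic Mathis approximation for Reno-style CC ties achievable throughput to MSS, RTT, and loss rate alone. This is not MsQuic's or Cubic's exact model (Cubic's ceiling is higher), but the qualitative dependence on RTT and loss is the same:

```latex
% Mathis et al. bound for Reno-style loss-based CC (illustrative only;
% Cubic's response function gives a higher but similarly shaped ceiling).
\[
  \text{Throughput} \;\lesssim\; \frac{\mathrm{MSS}}{\mathrm{RTT}} \cdot \frac{C}{\sqrt{p}},
  \qquad C \approx 1.22 .
\]
% Example: MSS = 1460 B, RTT = 200 ms, p = 0.002 gives roughly
% (1460 / 0.2) * 1.22 / sqrt(0.002) ~= 200 kB/s per flow -- a cap set by
% RTT and loss rate, independent of how much link capacity is actually free.
```

A model-based algorithm like BBR instead estimates the bottleneck bandwidth and minimum RTT directly, which is why random loss on an LFN does not cap it the same way.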
Refactor the code to support pluggable congestion control algorithms.
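For readers landing here later, a minimal sketch of what "pluggable" could mean in practice, with hypothetical names throughout (the actual interface in this PR may differ): the connection talks to a table of callbacks plus a context union, so the algorithm can be selected when the connection is set up.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-algorithm state; see the union discussed earlier. */
typedef struct CUBIC_STATE { uint32_t CongestionWindow; } CUBIC_STATE;

typedef union CC_CONTEXT {
    CUBIC_STATE Cubic;
    /* other algorithms' state structs would be added here */
} CC_CONTEXT;

/* The congestion controller is a dispatch table plus inline state. */
typedef struct CONGESTION_CONTROL CONGESTION_CONTROL;
struct CONGESTION_CONTROL {
    bool (*CanSend)(CONGESTION_CONTROL* Cc, uint32_t BytesInFlight);
    void (*OnDataSent)(CONGESTION_CONTROL* Cc, uint32_t Bytes);
    void (*OnDataAcknowledged)(CONGESTION_CONTROL* Cc, uint32_t Bytes);
    void (*OnDataLost)(CONGESTION_CONTROL* Cc, uint32_t Bytes);
    CC_CONTEXT Ctx;
};

/* Each algorithm exposes one initializer that fills in its callbacks; the
 * connection only calls through the table after that, so swapping in a new
 * algorithm means adding another initializer and state struct. */
static bool CubicCanSend(CONGESTION_CONTROL* Cc, uint32_t BytesInFlight) {
    return BytesInFlight < Cc->Ctx.Cubic.CongestionWindow;
}

void CubicInitialize(CONGESTION_CONTROL* Cc) {
    Cc->CanSend = CubicCanSend;
    /* ...the remaining callbacks would be wired up here... */
    Cc->Ctx.Cubic.CongestionWindow = 10 * 1460;  /* illustrative initial cwnd */
}
```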