Swarm does not honour `max_negotiating_inbound_streams` setting #3041
Another thought: this loop can be kept busy for arbitrarily long times by a potent network connection, starving other activities. It should return `Poll::Pending` every so often.
Note that the former, `MAX_BUFFERED_INBOUND_STREAMS`, limits the number of inbound (non-negotiated) streams, while the latter, `max_negotiating_inbound_streams`, limits the number of negotiating inbound streams, i.e. the streams that are currently running multistream-select and the subsequent protocol upgrade.
I am not sure how that would work. Are you thinking of a timer that fires at some point and interrupts the loop?
Cross-referencing related efforts:
@mxinden Thinking more about it: libp2p 0.49 broke network protocol compatibility with earlier versions (which is the concrete and acute problem I have with libp2p-bitswap): file sync now fails because old clients will interpret dropped substreams as “that peer doesn’t have what we want”. Plus I have trouble figuring out how new clients should behave differently, but that is a separate issue.

Regarding this issue’s title: I think it is correct, because there is only one setting in the swarm regulating inbound substreams, and it currently doesn’t do anything useful. When I set that setting to 1000, I expect to be able to open 1000 substreams (i.e. fire 1000 requests) in a burst and not lose a single one, which worked in 0.48 but doesn’t work any longer.

@thomaseizinger In general yes, that is what I’d use. In this particular case the problem is that this poll sits within a larger loop, so that loop needs to be broken at regular intervals lest a single client monopolise a thread from the connection thread pool.
Well, the fact that this worked was more of a bug than a feature. Not having any form of back-pressure doesn't work at large scale. It also means we wouldn't be able to make use of QUIC's back-pressure mechanism for the number of streams. I have a proposal open for improving this situation: #2878. It deprecates the current approach.
That will violate the contract, though. Moving forward with libp2p/rust-yamux#142 should help here too, I think. We should be able to provide a better interface there once all the groundwork has landed.
That’s why I mentioned it. May I ask that we separate two things?
Are you suggesting to immediately wake and thus schedule another call to `poll`? It is a bit hacky but should work if we have another call in the same `poll`.
This is a common pattern in my part of the woods: relinquish the thread to give another task its chance to run, while stating that we’re not done just yet. Rust’s approach to asynchrony needs this extra care due to the inverted control flow, whereas other runtimes (JS, Java, …) schedule continuations as new tasks and thus get these break points automatically.
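The wake-then-return-`Pending` pattern described above can be sketched as follows. This is a minimal illustration, not rust-libp2p code; `YieldingLoop`, `total_work`, and `budget` are hypothetical names, and the "work" is simulated by a counter:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Illustrative sketch: a poll loop that yields after a fixed budget
/// of work items per `poll` call, so one busy connection cannot
/// monopolise the executor thread.
struct YieldingLoop {
    processed: u32,
    total_work: u32,
    budget: u32,
}

impl Future for YieldingLoop {
    type Output = u32;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        let mut done_this_poll = 0;
        loop {
            // ... drive one unit of work here ...
            self.processed += 1;
            done_this_poll += 1;
            if self.processed >= self.total_work {
                // All work finished.
                return Poll::Ready(self.processed);
            }
            if done_this_poll >= self.budget {
                // Relinquish the thread while stating we are not done:
                // wake ourselves so the executor schedules another poll.
                cx.waker().wake_by_ref();
                return Poll::Pending;
            }
        }
    }
}
```

The key detail is that `wake_by_ref` is called *before* returning `Poll::Pending`; without it the task would never be polled again, since no external event is registered to wake it.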
rust-libp2p/muxers/yamux/src/lib.rs
Lines 142 to 152 in 4d4833f
With `MAX_BUFFERED_INBOUND_STREAMS == 25`, this code places a limit of 25 incoming substream requests at any given time, effectively removing the utility of `SwarmBuilder`'s `max_negotiating_inbound_streams` setting. The muxer should be refactored so that only the configurable setting is used and the constant removed.

In combination with #3039 this means that it is currently very difficult to implement a working bitswap implementation.
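The refactoring the report asks for can be illustrated with a small sketch. These are hypothetical names, not the actual yamux muxer code: inbound substreams are buffered up to a limit passed in at construction time (which could be wired through from the swarm configuration) instead of a hard-coded constant, and streams beyond the limit are rejected:

```rust
use std::collections::VecDeque;

/// Illustrative sketch of a configurable inbound-substream buffer.
struct InboundBuffer<S> {
    streams: VecDeque<S>,
    /// Configurable limit, e.g. derived from the swarm's
    /// `max_negotiating_inbound_streams` rather than a constant.
    max_buffered: usize,
}

impl<S> InboundBuffer<S> {
    fn new(max_buffered: usize) -> Self {
        Self {
            streams: VecDeque::new(),
            max_buffered,
        }
    }

    /// Returns `true` if the stream was buffered, `false` if it had
    /// to be rejected because the buffer is full (the real muxer
    /// would reset the stream in that case).
    fn push(&mut self, stream: S) -> bool {
        if self.streams.len() >= self.max_buffered {
            return false;
        }
        self.streams.push_back(stream);
        true
    }

    /// Hands the oldest buffered stream to the negotiation machinery.
    fn pop(&mut self) -> Option<S> {
        self.streams.pop_front()
    }
}
```

With a single configurable limit like this, setting `max_negotiating_inbound_streams` to 1000 would actually admit a burst of 1000 substreams, rather than being capped at 25 by the muxer-internal constant.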