Resolve open Peer Starving TODO
#6344
This TODO is about the prioritisation of a single peer's messages:
This is what we do in the existing implementation: requests from different peers are processed concurrently, multiple requests from the same peer are processed in order.
In the existing implementation, if a peer does this, it will fill its individual message pipeline. Its keepalives, and messages to and from other peers, will continue to be processed concurrently. But individual services might be delayed slightly while that peer times out, if a service sends a request to that peer and blocks until the request completes (or times out).
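The per-peer ordering described above can be sketched with a toy model (this is a hypothetical illustration, not Zebra's actual code): each peer gets its own FIFO channel drained by a dedicated worker, so messages from one peer are handled strictly in order, while different peers proceed concurrently, and one peer filling its pipeline cannot reorder or block another peer's messages.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Hypothetical sketch: one FIFO channel and worker per peer.
// In-order within a peer, concurrent across peers.
fn spawn_peer_worker(peer: &'static str) -> (Sender<String>, thread::JoinHandle<Vec<String>>) {
    let (tx, rx) = channel::<String>();
    let handle = thread::spawn(move || {
        // Iterating the receiver preserves per-peer arrival order.
        rx.iter().map(|msg| format!("{peer}: handled {msg}")).collect()
    });
    (tx, handle)
}

fn main() {
    let mut peers = HashMap::new();
    for name in ["alice", "bob"] {
        peers.insert(name, spawn_peer_worker(name));
    }
    // Interleaved sends: ordering is only guaranteed within a single peer.
    peers["alice"].0.send("ping".into()).unwrap();
    peers["bob"].0.send("ping".into()).unwrap();
    peers["alice"].0.send("getdata".into()).unwrap();

    for (_, (tx, handle)) in peers {
        drop(tx); // closing the channel ends the worker's iterator
        println!("{:?}", handle.join().unwrap());
    }
}
```

A peer flooding its own channel only grows its own queue; the other workers keep draining theirs independently.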
I think the current behaviour is what we want, and we should update the TODO comment to say that. The reverse behaviour would be worse, because a peer sending an endless stream of messages would never time out its keepalive task.
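The keepalive timeout behaviour that comment relies on can be illustrated with a small std-only sketch (hypothetical names, not Zebra's actual API): the keepalive check waits for a reply with a deadline instead of blocking forever, so a peer that is busy flooding its own pipeline eventually fails the check and can be disconnected.

```rust
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError};
use std::thread;
use std::time::Duration;

// Hypothetical sketch: wait for a keepalive reply with a deadline, so a
// flooding or stalled peer times out rather than hanging the task forever.
fn keepalive_ok(reply_rx: &Receiver<&'static str>, deadline: Duration) -> bool {
    match reply_rx.recv_timeout(deadline) {
        Ok(_reply) => true,
        // Timed out or connection closed: treat the peer as unresponsive.
        Err(RecvTimeoutError::Timeout) | Err(RecvTimeoutError::Disconnected) => false,
    }
}

fn main() {
    // A responsive peer answers the keepalive promptly.
    let (tx, rx) = channel();
    thread::spawn(move || tx.send("pong").unwrap());
    println!("responsive peer ok: {}", keepalive_ok(&rx, Duration::from_millis(500)));

    // A starved peer never replies: the keepalive times out instead of hanging.
    let (_tx, silent_rx) = channel::<&'static str>();
    println!("silent peer ok: {}", keepalive_ok(&silent_rx, Duration::from_millis(50)));
}
```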
There's also another reason to prefer the current implementation: if an inbound peer message arrives at a ready peer just as Zebra is sending that peer a request, we want to process the peer's message first, rather than the Zebra request.
Sounds like the fix for this is to just update the comment to remove the TODO.
@teor2345 can you please add a size estimate for this issue?
Details
The following open TODO item from src/peer/connection.rs is noted:
zebra/zebra-network/src/peer/connection.rs, lines 598 to 613 in a4cb835
This would seem to be an important item to prioritize for resolution. While NCC Group concurs that it is unlikely in practice, it nevertheless could compound with other factors to strengthen network-level attacks. It is further noted that for attackers with extremely influential positions on the network, mitigating factors such as network latency may have less influence than originally expected.
It is noted that if the great majority of outstanding messages originate from attackers, then randomly chosen messages will likely be attacker-sent, and will still be prioritized over honest ones; it may be preferable to respond to requests in the order they are received, to ensure all requests are (eventually) answered. This may result in worse behavior when the incoming message rate exceeds the message processing rate and a backlog develops (since response times would then increase monotonically); however, the solution to that is likely not random choice but rate-limiting.
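The FIFO-plus-rate-limiting approach the reviewers suggest can be sketched as follows (a hypothetical illustration under assumed parameters, not Zebra's implementation): requests stay in strict arrival order, and a per-peer token bucket bounds how fast any one peer's requests are answered, so a flood is throttled rather than starving honest requests or growing response times without bound.

```rust
use std::collections::VecDeque;
use std::time::Instant;

// Hypothetical sketch: a per-peer token bucket. Requests are answered in
// arrival order (FIFO); a peer over its rate limit is deferred, not reordered.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    // Take one token if available, refilling based on elapsed time.
    fn try_take(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Requests are kept in arrival order; none is skipped permanently,
    // only deferred while the peer is over its rate limit.
    let mut inbox: VecDeque<&str> = ["req1", "req2", "req3"].into();
    let mut bucket = TokenBucket::new(2.0, 1.0); // assumed: burst of 2, then 1 req/sec
    while let Some(req) = inbox.pop_front() {
        if bucket.try_take() {
            println!("answering {req} (in arrival order)");
        } else {
            println!("deferring {req}: peer is over its rate limit");
        }
    }
}
```

Because ordering is FIFO, response times stay bounded by the rate limit rather than by the attacker's send rate, addressing the backlog concern above.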
Resolution