Alternative communication protocol between server and agent? #146
Comments
Or maybe we can add a gRPC compressor to reduce the bandwidth?
+1
@cheftako Thanks for the reply. Maybe we can have something like the following?

```
        gRPC  +------------+  gRPC  +-----------+  TCP  +-------+
     +------->|proxy server|------->|proxy agent|------>|kubelet|
     |        +------------+        +-----------+       +-------+
     |
+----+----+
|apiserver|
+----+----+
     |
     |        +------------+        +-----------+       +-------+
     +------->|proxy server|------->|proxy agent|------>|kubelet|
        HTTP  +------------+  TCP   +-----------+  TCP  +-------+
```

Just curious, is there any particular reason to choose gRPC over TCP in the first place?
Fundamentally, TCP is a bad choice for multiplexing due to head-of-line blocking, which can affect all streams in a single TCP connection. I've played around with QUIC, and its native support for streams makes the implementation of the server and agent quite trivial. See this repo: https://github.com/mvladev/quic-reverse-http-tunnel
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Currently, the communication tunnel between agent and server is based on gRPC. Before using ANP, I used to set up the tunnel over plain TCP. I ran a small benchmark to compare the performance of gRPC (i.e., ANP) against TCP. As expected, using gRPC introduces some extra overhead.

- Run `kubectl exec test-po -- date` 100 times through the gRPC tunnel
- Run `kubectl exec test-po -- date` 100 times through the TCP tunnel

There are cases where the tunnel may need to go through the public network, and lowering the overhead can help users save some costs. Should we consider having an alternative to gRPC?