Add keepalive option to agent server #154
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @Avatat!
Hi @Avatat. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I signed the CLA.
/ok-to-test
/retest
cmd/server/main.go
Outdated
grpcServer := grpc.NewServer(serverOption)
serverOptions := []grpc.ServerOption{
	grpc.Creds(credentials.NewTLS(tlsConfig)),
	grpc.KeepaliveParams(keepalive.ServerParameters{Time: 30 * time.Second}),
}
This is way too short to be the default. Currently the max K8s cluster size is 15K nodes; for such a cluster you would be sending 500 keep-alive packets per second to each server. I would suggest making the default 1 hour (the default TCP keepalive on the server is 2 hours, so half of that is good, and the default server-enforced minimum is 5 minutes). I'm fine with this being an optional command line flag so you can tune it to your needs. I think we are still likely to also want to mark problematic tunnels as unhealthy, but having both seems a good idea.
I wonder if we also want to set a keep alive on the agent side.
Adding an optional command line flag is a good idea - I will figure it out.
As I understand it, we don't need a keepalive mechanism on the agent side, because we currently check health there using grpc.ClientConn.GetState.
I believe (I could certainly be wrong) that GetState mostly tracks whether Close has been called. It does not track whether a network outage is preventing traffic from routing, or whether the server has gone away without notifying us; these are the conditions keepalive is meant to detect. https://godoc.org/google.golang.org/grpc#WithKeepaliveParams Client-side keepalive is almost identical to server-side keepalive, and they should work well together.
Just going to add that SSH tunnels have a healthCheckPoll time of 1 minute (https://github.com/kubernetes/kubernetes/blob/master/pkg/ssh/ssh.go#L316), so it's possible that we could go lower than 1 hour for the keepalive time.
We can probably address client side keep alive in a separate PR.
I think so, too. A ~1 minute keepalive time isn't bad for performance - remember that we send keepalive packets only when the channel has been inactive for longer than the specified time. In a 15K-node cluster, administrators probably will not deploy konnectivity as a DaemonSet on every node.
I'm currently working on adding a command line option to allow the administrator to tune this parameter.
After I complete this PR, I would like to try adding keepalive to the agent in a new PR :)
This is a fix for issue #152. Sometimes an agent can disconnect without closing the connection, and the server doesn't know about it. The disconnected agent is still on the list of available agents, and the server tries to route traffic through it.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: Avatat, cheftako. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.