
Kubernetes Auth Multiple Hosts instead of a single one (HA Control Plane) #5408

Closed
mitchellmaler opened this issue Sep 26, 2018 · 8 comments

@mitchellmaler

Is your feature request related to a problem? Please describe.

Would it be possible to provide more than one Kubernetes host in the kubernetes auth host value? We currently run Kubernetes with 3 master nodes and would like the auth method to be able to try the API on any one of them, which would route to the master anyway.

Describe the solution you'd like
Allow providing more than one host in the host variable, and have the engine try each one before failing if the connection is refused or has issues.
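
For illustration, a multi-host config might look something like the sketch below. The comma-separated kubernetes_host value is purely hypothetical (today the parameter takes a single URL); kubernetes_ca_cert and token_reviewer_jwt are the existing options, and the master hostnames are assumptions.

```sh
# Hypothetical multi-host form: Vault would try each endpoint in order
# before giving up. master1-3 are assumed control-plane node names.
vault write auth/kubernetes/config \
    kubernetes_host="https://master1:6443,https://master2:6443,https://master3:6443" \
    kubernetes_ca_cert=@ca.crt \
    token_reviewer_jwt="$SA_JWT"
```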

Describe alternatives you've considered

The alternative for me is to set up a load balancer just for Vault in front of the API servers. We currently use Rancher to manage our clusters, auth, and load balancing/proxying of requests to the API servers, but Rancher does not support JWT passthrough, so my only option is to expose the API servers directly and lock them down to just the Rancher server and Vault.
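
For reference, a minimal sketch of such a load balancer, using HAProxy in TCP mode so TLS and the JWT pass through untouched; the hostnames and port 6443 are assumptions about the cluster:

```
# haproxy.cfg sketch: plain TCP passthrough to three assumed kube-apiservers.
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube_api
    bind *:6443
    default_backend kube_masters

backend kube_masters
    balance roundrobin
    option tcp-check
    server master1 master1.example.com:6443 check
    server master2 master2.example.com:6443 check
    server master3 master3.example.com:6443 check
```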

@catsby
Contributor

catsby commented Nov 8, 2018

Hey @mitchellmaler! Could you give me some more information here? I have some questions:

  • I'm a bit unfamiliar with multiple master nodes in Kubernetes: do they each serve requests, or are some simply reserved for failover? (Sounds like just failover from your description.)
  • Assuming that's the case, the idea behind having multiple hosts is that if the "master" goes down we'll try the others, the reasoning being that a new "master" may be elected and we don't want users to have to re-configure the kubernetes auth configuration with the new primary to get it working?
  • "try to use the api on one of those which would direct to the master anyways" > do you mean a client-side redirect here (302, et al.), or do these other masters do this forwarding/redirecting internally (unbeknownst to Vault)?
  • "try each one before failing if the connection is refused or has issues" > are you imagining those attempts only being made if the prior attempt is a 302 or 5xx-type error, or would you expect a retry if one server returned an unauthorized response?

Hopefully those are coherent questions 😄 if you could let me know, I'd greatly appreciate it!

@mitchellmaler
Author

mitchellmaler commented Nov 8, 2018

  • The Kubernetes API servers elect a leader control node, which is the one that serves requests.
  • Correct: if the leader goes down, another would be elected (using etcd Raft).
  • Each control node (master) will internally forward the request to the leader. I could point at any one of the 3 (or of 5, if we had that many) and all will serve the request, but internally route it to the leader.
  • I would expect it to retry (try the next endpoint) on a 500, an invalid cert, or a connection refused (server down, etc.). If an auth error occurs, it means the token is bad and trying the others would most likely hit the same issue; a sketch of these semantics follows below.
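
A minimal Go sketch of those failover semantics, illustrative only and not Vault's implementation; the host list, the /version probe, and tryHosts itself are assumptions made for the example:

```go
package main

import (
	"fmt"
	"net/http"
)

// tryHosts calls doReq against each host in order. Connection-level
// failures (refused, invalid cert, timeout) and 5xx responses move on
// to the next host; anything else, including an unauthorized response,
// is returned immediately, since a bad token fails on every host.
func tryHosts(hosts []string, doReq func(host string) (*http.Response, error)) (*http.Response, error) {
	var lastErr error
	for _, host := range hosts {
		resp, err := doReq(host)
		if err != nil {
			lastErr = err // server down, cert invalid, connection refused
			continue
		}
		if resp.StatusCode >= 500 {
			lastErr = fmt.Errorf("%s returned %d", host, resp.StatusCode)
			continue
		}
		return resp, nil // success, or an auth error worth surfacing as-is
	}
	return nil, fmt.Errorf("all hosts failed, last error: %v", lastErr)
}

func main() {
	// Assumed control-plane node names for illustration.
	hosts := []string{"https://master1:6443", "https://master2:6443", "https://master3:6443"}
	resp, err := tryHosts(hosts, func(host string) (*http.Response, error) {
		return http.Get(host + "/version")
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}
```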

@shubb30

shubb30 commented Jan 17, 2019

+1 for this. This is essential if you plan to use Vault Kubernetes Auth with a production Kubernetes cluster.

@raoofm
Contributor

raoofm commented Nov 8, 2019

@mitchellmaler isn't this normally achieved via a service/LB in front of n nodes in the cluster?

@codayblue

I'm currently interested in this functionality as well. I can attempt to implement it myself if someone could give me a little direction on where I should start looking and working.

@m1kola

m1kola commented Jun 8, 2020

While I understand the need for HA control plane support, I believe that adding support for multiple hosts to Vault is not the way to solve the HA challenge.

An HA control plane requires a load balancer in front of the API servers. Without a load balancer you have to implement client-side load balancing in every API server consumer, and you have to keep the list of hosts updated for each consumer (Vault, Kubernetes components, and basically anything else that uses the Kubernetes API).

I think the solution here is:

  • Put a TCP load balancer in front of the API servers;
  • Provide Vault with a URL pointing to the load balancer (example below).
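
The Vault side is then just the standard kubernetes auth config pointed at the load balancer; the LB address below is an assumption:

```sh
# Point Vault at the load balancer rather than at any single control node.
vault write auth/kubernetes/config \
    kubernetes_host="https://kube-api-lb.example.com:6443" \
    kubernetes_ca_cert=@ca.crt \
    token_reviewer_jwt="$SA_JWT"
```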

This doc describes options for HA topology. It's a good starting point, but details will depend on your setup/needs.

@catsby I'm fairly confident that no action is required on Vault's side and the issue can be resolved, but could you please do a sanity check (I'm new to Vault)?

@mitchellmaler
Author

I agree with this. When I logged the issue it was for Rancher clusters, which automatically did the control plane balancing without the need to put a load balancer in front, but you couldn't point Vault at it. They now support passthrough, and I could also have created my own LB in front. If you agree, we can go ahead and close this.

@catsby
Contributor

catsby commented Jun 9, 2020

I agree as well. Thank you for the detailed write-up @m1kola, and thank you @mitchellmaler for following up! I'm going to close this for now. Cheers
