Vault (Consul, HA, behind ELB) no longer works after a while #4578
Your config file has two different api_addr values; the one at the top level, which is where it should be, has an address using port 8200, not 443. I'm guessing that's not the expected address through your load balancer. Additionally you have a cluster_address field in your storage configuration; that's not a valid key. You probably want cluster_addr, but that should also be at the top level. Once you fix these things hopefully it will just work.
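For illustration, a minimal sketch of the fix being described (the hostname and IP below are placeholders, not values from the original config):

```hcl
# Not valid: cluster_address inside the storage stanza is not a recognized key.
# storage "consul" {
#   address         = "127.0.0.1:8500"
#   path            = "vault/"
#   cluster_address = "https://10.0.1.10:8201"
# }

# Instead, both addresses belong at the top level of the config:
api_addr     = "https://vault.example.com:443"   # the address clients reach (through the ELB)
cluster_addr = "https://10.0.1.10:8201"          # this node's own cluster address

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}
```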
@jefferai thanks! My load balancer isn't supposed to MITM TLS traffic, so the port of the Vault service should be 8200, while the ELB listens on 443 and TCP-proxies the traffic to port 8200. My Vault
However, now my ELB can no longer reach Vault, but my instances are still
This is my understanding of how Vault should behave behind an ELB:
Am I just not getting something? Thanks!
This is likely the issue. Vault uses two ports to run: normally, port 8200 serves the API and UI, and port 8201 is for cluster traffic only. If you are pointing your ELB to 8201, clients can't reach the API, so you want 443 on the ELB to point to 8200, not 8201. This probably comes down to what you want going through the ELB. In general I think it's a bad idea for cluster traffic to go through the ELB and you should probably have nodes talk directly -- but in that case you need each node to advertise its own address, not the ELB, in its cluster_addr setting. If you want it to go through the ELB (again, not recommended), pick a port on the ELB like 444 and have all cluster_addr values for all nodes set to that. On the API side, if you have 443 going to Vault port 8200, you really want your api_addr set to port 443.
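A sketch of the recommended layout under these assumptions (vault.example.com and 10.0.1.10 are placeholders): the ELB's 443 listener forwards to each node's 8200, so api_addr uses the ELB address on 443, while cluster_addr points directly at the node itself on 8201:

```hcl
listener "tcp" {
  address       = "0.0.0.0:8200"                 # API/UI port; the ELB forwards 443 here
  tls_cert_file = "/etc/vault/tls/vault.crt"     # placeholder paths
  tls_key_file  = "/etc/vault/tls/vault.key"
}

api_addr     = "https://vault.example.com:443"   # what clients hit via the ELB
cluster_addr = "https://10.0.1.10:8201"          # each node advertises itself, not the ELB
```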
@jefferai thanks so much for your patience. I just ripped out the ELB and I'm now using DNS round robin to reach the Vault cluster. It's much simpler and thus less error prone, and all of my issues have disappeared. 👍
Cool! Glad it's working. And request forwarding from standbys is there specifically to make things like this easier :-)
Is it possible to run the Vault UI on 443?
Environment:
Vault Config File:
Startup Log Output:
Output from Node #1:
Output from Node #2:
Expected Behavior:
Standby Vault nodes should keep forwarding requests from the ELB to the active HA node, even several minutes after being unsealed.
Actual Behavior:
Standby nodes get requests from the ELB that they cannot forward to the active node if all Vault nodes have been unsealed for more than a few minutes.
Steps to Reproduce:
Set up 3 Vault nodes using Consul as the storage backend, with the provided config.
Important Factoids:
All 3 Vault nodes are behind an AWS ELB. While this isn't atypical, it was once recommended not to place all instances behind a load balancer. However, I don't want to be forced to use Consul DNS for every component that needs to talk to Vault.
References:
none