proposal: support TLS proxy between clients and the kes server #18
Comments
/cc @Alevsk Is this something that may help with deploying TLS on K8s?
We may also need to make the forwarded header configurable. Traefik, for example, sets `X-Forwarded-Tls-Client-Cert`.
Suggesting the following configuration:
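A sketch of what such a section in the KES config file could look like, assuming the `proxy.identities` and `tls.proxy.header` sections referenced in the commit below (the exact key layout and the identity value are illustrative placeholders):
```
tls:
  proxy:
    # Identities of the TLS proxies that are allowed to forward
    # client certificates to the KES server. An identity can be
    # computed via: kes tool identity of <nginx-client.cert>
    identities:
    - <nginx-proxy-identity>
    header:
      # The request header in which the proxy forwards the escaped
      # client certificate. Must match the proxy's configuration.
      cert: X-Tls-Client-Cert
```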
Okay so based on traefik/traefik#3826 and the nginx config option:
This commit adds support for TLS proxies such that there can be 0, 1, or multiple TLS proxies between one (or more) KES clients and a KES server. The identity of each proxy that is a direct neighbor of the KES server must be added to `proxy.identities`. Further, the proxy that is a direct neighbor of the KES client must forward the KES client certificate as part of the request headers.

In the simplest (and most common) case there is one TLS proxy - e.g. an nginx load balancer - between the KES server and the KES client. This nginx has to extract and forward the client certificate (via request headers) and has to authenticate itself to KES as a proxy via its own client certificate. Optionally, the nginx may verify the KES client certificate as well. If there were two load balancers, the one closer to the client would have to extract and forward the client certificate, while the one closer to the KES server would have to authenticate itself to the KES server. The same applies for any number (>= 2) of TLS proxies.

As an example, the following nginx configuration shows how to set up a demo load balancer on your local machine:

```
http {
    server {
        listen      443 ssl;
        server_name localhost;

        # Avoid buffering the response. Especially for audit log tracing.
        proxy_buffering off;

        # The nginx private key and certificate presented to clients
        # connecting to it.
        ssl_certificate     /home/andreas/nginx.crt;
        ssl_certificate_key /home/andreas/nginx.key;

        # Require a client certificate but don't verify it.
        # See the nginx docs for more details.
        ssl_verify_client optional_no_ca;

        location / {
            # KES server endpoint
            proxy_pass https://localhost:7373;

            # Enforce TLSv1.3 - KES supports it.
            proxy_ssl_protocols TLSv1.3;

            # This demo config assumes the KES server uses
            # self-signed certificates.
            proxy_ssl_verify off;

            # The private key and client certificate used by nginx
            # to authenticate itself to the KES server.
            # The output of: 'kes tool identity of <nginx-client.cert>'
            # must be added to the proxy.identities section in the
            # KES config file.
            proxy_ssl_certificate     <nginx-client.cert>;
            proxy_ssl_certificate_key <nginx-client.key>;

            # The header used by nginx to forward the escaped and
            # encoded certificate of the KES client. This value
            # must match the entry in the tls.proxy.header section
            # in the KES config file.
            proxy_set_header X-Tls-Client-Cert $ssl_client_escaped_cert;

            # We need keep-alive for audit log tracing.
            proxy_http_version 1.1;
        }
    }
}
```

Fixes #18
What is the problem you want to solve?
Currently it's not possible to put a TLS proxy - e.g. an nginx load balancer - between the KES server and its clients. The reason is that the KES server uses mTLS for client authentication and determines the policy that should be applied based on the client's X.509 certificate.
Now, when putting a TLS proxy between KES and its clients, KES cannot "see" behind the proxy. To KES it looks like each request is made by the proxy, and none of the information about the actual client is accessible to KES.
How do you want to solve it?
One possible solution would be to instruct the TLS reverse proxy to forward the client certificate as an HTTP header. The TLS proxy would terminate the client mTLS connection (it is now responsible for verifying that the client certificate is authentic, i.e. signed by a trusted CA) and then establish another TLS connection to KES. As part of the request, the TLS proxy sends a "special HTTP header" containing the client certificate. The KES server verifies that the connection has actually been made by the TLS proxy, then extracts the client certificate from the "special header" and applies the policy rules to this certificate.
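As an illustration only (not the actual KES implementation), a server could decode a certificate forwarded as a URL-escaped PEM block - the format nginx produces with `$ssl_client_escaped_cert` - roughly like this; the package, function, and header names are hypothetical:
```
// Package proxyauth is an illustrative sketch of extracting a client
// certificate that a trusted TLS proxy forwarded in a request header.
package proxyauth

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"net/http"
	"net/url"
)

// parseForwardedCert decodes the client certificate that a TLS proxy
// forwarded as a URL-escaped PEM block in the given request header -
// for example nginx's $ssl_client_escaped_cert variable.
func parseForwardedCert(req *http.Request, header string) (*x509.Certificate, error) {
	escaped := req.Header.Get(header)
	if escaped == "" {
		return nil, errors.New("no forwarded client certificate")
	}
	// Undo the URL escaping applied by the proxy.
	pemCert, err := url.PathUnescape(escaped)
	if err != nil {
		return nil, fmt.Errorf("invalid escaped certificate: %v", err)
	}
	block, _ := pem.Decode([]byte(pemCert))
	if block == nil || block.Type != "CERTIFICATE" {
		return nil, errors.New("header does not contain a PEM-encoded certificate")
	}
	return x509.ParseCertificate(block.Bytes)
}
```
Trusting such a header is only safe after the server has verified that the TLS connection itself was established by one of the configured proxy identities.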
This would require some additional TLS configuration. For example:
The TLS proxy configuration contains the X.509 identities of all TLS proxies that are allowed to verify clients and forward their certificates to KES.
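For example, the identity of an nginx proxy could be derived from its client certificate with the command referenced in the commit above and then listed under `proxy.identities` (the certificate file name is a placeholder):
```
# Compute the identity of the proxy's client certificate.
# The printed identity is what gets added to proxy.identities
# in the KES config file.
kes tool identity of <nginx-client.cert>
```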
Additional context
Are there alternative solutions?
Not really; however, there may be many variations of the solution above. For example, should there be identities or just a separate CA chain for proxies, and so on.
Would your solution cause a major breaking API change?
No
Anything else that is important?
There are some things that need a decision. In particular:
One critical aspect is that the reverse proxy can act as any identity, since it can simply send arbitrary certificates. The KES server has no (direct) way to verify that a client with this identity actually made the request.
To mitigate a "malicious" reverse proxy, we would need to require that the proxy forwards (parts of) the client handshake. This effectively means doing a TLS pseudo-handshake with the information forwarded by the client. However, I'm not sure whether this works at all (with the Go TLS implementation).