Description:
Setting require_tls: ALL on a virtual_host results in a 301 response regardless of the calling scheme.
Repro steps:
I am trying to create a scenario where Envoy proxies both inbound and outbound traffic, serving as both the TLS termination and origination point. This gives me something like:
service A <- http -> envoy A <---- https ----> envoy B <- http -> service B
envoy A and envoy B are both configured with a listener on port 443 that has a tls_context defined and require_tls: ALL set on its virtual host. They each contain a second listener that proxies outbound requests from the underlying service; these listeners reference a cluster whose host is the "other" Envoy instance (port 443), with a tls_context specified for TLS origination.
I can make requests through Envoy directly via https without issue. When I make a request from service A to service B via their corresponding Envoys, I get back a 301 redirect rather than the actual response. From a tcpdump and the Envoy logs, it appears that envoy A is calling envoy B over https as expected. If I remove require_tls from the virtual host definition, the request is processed as expected.
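In distilled form, the pieces that matter here look roughly like this (a sketch only; names, ports, and values are lifted from the full config further down, and the listener/filter scaffolding is omitted):
# Ingress side (TLS termination): the listener's filter chain carries a
# tls_context, and the virtual host sets require_tls: ALL.
route_config:
  name: local_route
  virtual_hosts:
  - name: local_hosts
    require_tls: ALL
    domains: ["*"]
    routes:
    - match: { prefix: "/serviceB" }
      route: { cluster: serviceB, auto_host_rewrite: true }
# Egress side (TLS origination): the outbound listener routes to a cluster
# that points at the peer Envoy's ingress port and carries a tls_context,
# so the proxy-to-proxy hop is https.
clusters:
- name: envoyB
  type: LOGICAL_DNS
  connect_timeout: 2s
  hosts:
  - socket_address: { address: envoyB, port_value: 1443 }
  tls_context:
    sni: envoyB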
I'm pretty sure the require_tls check fails because x-forwarded-proto indicates http. TBH, I find all the info around how to configure XFF, use_remote_address, etc. very complicated, so I'm not surprised users are hitting these issues when putting together a service mesh. @lizan @dio as our resident Istio experts, can you comment on how require_tls works there, where you have a similar HTTP hop from app to sidecar proxy and then mTLS between proxies?
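A possible workaround sketch, assuming the diagnosis above is right and that use_remote_address behaves the way the header-sanitizing docs describe (none of this is verified against this exact setup): enable use_remote_address on the receiving Envoy's ingress connection manager, so that x-forwarded-proto is rewritten from the actual downstream connection (https between the proxies) instead of being trusted from the previous plaintext hop.
# Sketch of envoy B's ingress HTTP connection manager with the assumed knob;
# the filter name and field follow the v2 HttpConnectionManager schema.
- name: envoy.http_connection_manager
  config:
    codec_type: auto
    stat_prefix: ingress_http
    use_remote_address: true   # rewrite x-forwarded-proto from the real (TLS) connection
    route_config:
      name: local_route
      virtual_hosts:
      - name: local_hosts
        require_tls: ALL
        domains: ["*"]
        routes:
        - match: { prefix: "/serviceB" }
          route: { cluster: serviceB, auto_host_rewrite: true }
With that in place, the require_tls: ALL check on the virtual host should see https rather than the forwarded http, though enabling use_remote_address also changes how x-forwarded-for is populated, so it needs to be weighed against the rest of the XFF guidance.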
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.
stalebot added the stale label on Mar 6, 2019
This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.
Config:
Envoy A Config
Envoy B Config
static_resources:
listeners:
socket_address:
address: 0.0.0.0
port_value: 1443
filter_chains:
config:
codec_type: auto
stat_prefix: ingress_http
generate_request_id: true
tracing:
operation_name: INGRESS
request_headers_for_tags: mainegress
route_config:
name: local_route
virtual_hosts:
- name: local_hosts
require_tls: ALL
domains:
- "*"
routes:
- match:
prefix: "/serviceB"
route:
cluster: serviceB
auto_host_rewrite: true
http_filters:
config: {}
access_log:
name: envoy.file_access_log
config:
path: "/dev/stdout"
format: 'INGRESS [%START_TIME%] serverName=%REQUESTED_SERVER_NAME% method=%REQ(:METHOD)%
path=%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% protocol=%PROTOCOL% proto=%REQ(X-FORWARDED-PROTO)%
duration=%DURATION% requestid=%REQ(X-REQUEST-ID)% upstreamhost=%UPSTREAM_HOST%
upstreamlocaladdress=%UPSTREAM_LOCAL_ADDRESS% downstreamremoteaddress=%DOWNSTREAM_REMOTE_ADDRESS%
downstreamlocaladdress=%DOWNSTREAM_LOCAL_ADDRESS% forwardedfor=%REQ(X-FORWARDED-FOR)%
useragent="%REQ(USER-AGENT)%" responsesize=%BYTES_SENT% responsecode=%RESPONSE_CODE%
'
tls_context:
common_tls_context:
tls_certificates:
- certificate_chain:
filename: "/etc/server_certificate.pem"
private_key:
filename: "/etc/server_key.pem"
socket_address:
address: 0.0.0.0
port_value: 8080
filter_chains:
config:
codec_type: auto
stat_prefix: egress_http
route_config:
name: egress_route
virtual_hosts:
- name: egress_hosts
domains:
- "*"
routes:
- match:
prefix: "/serviceA"
route:
cluster: envoyA
host_rewrite: envoyA
- match:
prefix: "/serviceB"
route:
cluster: envoyB
host_rewrite: envoyB
- match:
prefix: "/"
route:
cluster: service_google
host_rewrite: www.google.com
http_filters:
config: {}
access_log:
name: envoy.file_access_log
config:
path: "/dev/stdout"
format: 'EGRESS [%START_TIME%] serverName=%REQUESTED_SERVER_NAME% method=%REQ(:METHOD)%
path=%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% protocol=%PROTOCOL% proto=%REQ(X-FORWARDED-PROTO)%
duration=%DURATION% requestid=%REQ(X-REQUEST-ID)% upstreamhost=%UPSTREAM_HOST%
upstreamlocaladdress=%UPSTREAM_LOCAL_ADDRESS% downstreamremoteaddress=%DOWNSTREAM_REMOTE_ADDRESS%
downstreamlocaladdress=%DOWNSTREAM_LOCAL_ADDRESS% forwardedfor=%REQ(X-FORWARDED-FOR)%
useragent="%REQ(USER-AGENT)%" responsesize=%BYTES_SENT% responsecode=%RESPONSE_CODE%
'
clusters:
connect_timeout: 2s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
lb_policy: round_robin
hosts:
address: envoyA
port_value: 1443
tls_context:
sni: envoyA
common_tls_context:
validation_context:
trusted_ca:
filename: usr/local/share/ca-certificates/ca_certificate.pem
http2_protocol_options: {}
connect_timeout: 2s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
lb_policy: round_robin
hosts:
address: envoyB
port_value: 1443
tls_context:
sni: envoyB
common_tls_context:
validation_context:
trusted_ca:
filename: usr/local/share/ca-certificates/ca_certificate.pem
http2_protocol_options: {}
connect_timeout: 2s
type: LOGICAL_DNS
lb_policy: round_robin
hosts:
address: serviceB
port_value: 5000
connect_timeout: 2s
type: LOGICAL_DNS
lb_policy: round_robin
dns_lookup_family: V4_ONLY
hosts:
address: google.com
port_value: 443
tls_context:
sni: www.google.com
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8081
Logs:
envoyA_1 | [2019-02-04 20:51:14.810][20][debug][main] [source/server/connection_handler_impl.cc:257] [C0] new connection
envoyA_1 | [2019-02-04 20:51:14.811][20][debug][http] [source/common/http/conn_manager_impl.cc:210] [C0] new stream
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][http] [source/common/http/conn_manager_impl.cc:548] [C0][S4142459396270541510] request headers complete (end_stream=true):
envoyA_1 | ':authority', 'envoyA:8080'
envoyA_1 | ':path', '/serviceB/info'
envoyA_1 | ':method', 'GET'
envoyA_1 | 'user-agent', 'curl/7.58.0'
envoyA_1 | 'accept', '*/*'
envoyA_1 |
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][http] [source/common/http/conn_manager_impl.cc:991] [C0][S4142459396270541510] request end stream
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][router] [source/common/router/router.cc:320] [C0][S4142459396270541510] cluster 'envoyB' match for URL '/serviceB/info'
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][router] [source/common/router/router.cc:381] [C0][S4142459396270541510] router decoding headers:
envoyA_1 | ':authority', 'envoyB'
envoyA_1 | ':path', '/serviceB/info'
envoyA_1 | ':method', 'GET'
envoyA_1 | ':scheme', 'https'
envoyA_1 | 'user-agent', 'curl/7.58.0'
envoyA_1 | 'accept', '*/*'
envoyA_1 | 'x-forwarded-proto', 'http'
envoyA_1 | 'x-request-id', '79ac3d71-6662-4842-8138-c5d914f9aca0'
envoyA_1 | 'x-envoy-expected-rq-timeout-ms', '15000'
envoyA_1 |
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][client] [source/common/http/codec_client.cc:26] [C1] connecting
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][connection] [source/common/network/connection_impl.cc:638] [C1] connecting to 172.31.0.6:1443
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][connection] [source/common/network/connection_impl.cc:647] [C1] connection in progress
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][http2] [source/common/http/http2/codec_impl.cc:721] [C1] setting stream-level initial window size to 268435456
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][http2] [source/common/http/http2/codec_impl.cc:743] [C1] updating connection-level initial window size to 268435456
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][pool] [source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
envoyA_1 | [2019-02-04 20:51:14.812][20][debug][connection] [source/common/network/connection_impl.cc:516] [C1] connected
envoyA_1 | [2019-02-04 20:51:14.813][20][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:138] [C1] handshake error: 2
envoyB_1 | [2019-02-04 20:51:14.812][27][debug][main] [source/server/connection_handler_impl.cc:257] [C0] new connection
envoyB_1 | [2019-02-04 20:51:14.815][27][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:138] [C0] handshake error: 2
envoyB_1 | [2019-02-04 20:51:14.815][27][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:138] [C0] handshake error: 2
envoyA_1 | [2019-02-04 20:51:14.817][20][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:138] [C1] handshake error: 2
envoyA_1 | [2019-02-04 20:51:14.817][20][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:138] [C1] handshake error: 2
envoyB_1 | [2019-02-04 20:51:14.818][27][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:127] [C0] handshake complete
envoyA_1 | [2019-02-04 20:51:14.818][20][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:127] [C1] handshake complete
envoyA_1 | [2019-02-04 20:51:14.818][20][debug][client] [source/common/http/codec_client.cc:64] [C1] connected
envoyA_1 | [2019-02-04 20:51:14.818][20][debug][pool] [source/common/http/http2/conn_pool.cc:83] [C1] creating stream
envoyA_1 | [2019-02-04 20:51:14.818][20][debug][router] [source/common/router/router.cc:1128] [C0][S4142459396270541510] pool ready
envoyB_1 | [2019-02-04 20:51:14.819][27][debug][http2] [source/common/http/http2/codec_impl.cc:721] [C0] setting stream-level initial window size to 268435456
envoyB_1 | [2019-02-04 20:51:14.819][27][debug][http2] [source/common/http/http2/codec_impl.cc:743] [C0] updating connection-level initial window size to 268435456
envoyB_1 | [2019-02-04 20:51:14.819][27][debug][http] [source/common/http/conn_manager_impl.cc:210] [C0] new stream
envoyA_1 | [2019-02-04 20:51:14.821][20][debug][client] [source/common/http/codec_client.cc:95] [C1] response complete
envoyA_1 | [2019-02-04 20:51:14.821][20][debug][pool] [source/common/http/http2/conn_pool.cc:222] [C1] destroying stream: 0 remaining
envoyA_1 | [2019-02-04 20:51:14.821][20][debug][router] [source/common/router/router.cc:675] [C0][S4142459396270541510] upstream headers complete: end_stream=true
envoyB_1 | INGRESS [2019-02-04T20:51:14.819Z] serverName=- method=GET path=/serviceB/info protocol=HTTP/2 proto=http duration=1 requestid=79ac3d71-6662-9842-8138-c5d914f9aca0 upstreamhost=- upstreamlocaladdress=- downstreamremoteaddress=172.31.0.5:51142 downstreamlocaladdress=172.31.0.6:1443 forwardedfor=- useragent="curl/7.58.0" responsesize=0 responsecode=301
envoyB_1 | [2019-02-04 20:51:14.820][27][debug][http] [source/common/http/conn_manager_impl.cc:548] [C0][S9649525102360604223] request headers complete (end_stream=true):
envoyB_1 | ':authority', 'envoyB'
envoyB_1 | ':path', '/serviceB/info'
envoyB_1 | ':method', 'GET'
envoyB_1 | ':scheme', 'https'
envoyB_1 | 'user-agent', 'curl/7.58.0'
envoyB_1 | 'accept', '*/*'
envoyB_1 | 'x-forwarded-proto', 'http'
envoyB_1 | 'x-request-id', '79ac3d71-6662-4842-8138-c5d914f9aca0'
envoyB_1 | 'x-envoy-expected-rq-timeout-ms', '15000'
envoyB_1 |
envoyB_1 | [2019-02-04 20:51:14.820][27][debug][http] [source/common/http/conn_manager_impl.cc:991] [C0][S9649525102360604223] request end stream
envoyB_1 | [2019-02-04 20:51:14.820][27][debug][http] [source/common/http/conn_manager_impl.cc:1226] [C0][S9649525102360604223] encoding headers via codec (end_stream=true):
envoyB_1 | ':status', '301'
envoyB_1 | 'location', 'https://envoyB/serviceB/info'
envoyB_1 | 'date', 'Mon, 04 Feb 2019 20:51:14 GMT'
envoyB_1 | 'server', 'envoy'
envoyB_1 |
envoyB_1 | [2019-02-04 20:51:14.820][27][debug][http2] [source/common/http/http2/codec_impl.cc:563] [C0] stream closed: 0
envoyA_1 | [2019-02-04 20:51:14.821][20][debug][http] [source/common/http/conn_manager_impl.cc:1226] [C0][S4142459396270541510] encoding headers via codec (end_stream=true):
envoyA_1 | ':status', '301'
envoyA_1 | 'location', 'https://envoyB/serviceB/info'
envoyA_1 | 'date', 'Mon, 04 Feb 2019 20:51:14 GMT'
envoyA_1 | 'server', 'envoy'
envoyA_1 | 'x-envoy-upstream-service-time', '8'
envoyA_1 |
envoyA_1 | [2019-02-04 20:51:14.821][20][debug][http2] [source/common/http/http2/codec_impl.cc:563] [C1] stream closed: 0
envoyA_1 | EGRESS [2019-02-04T20:51:14.811Z] serverName=- method=GET path=/serviceB/info protocol=HTTP/1.1 proto=http duration=10 requestid=79ac3d71-6662-4842-8138-c5d914f9aca0 upstreamhost=172.31.0.6:1443 upstreamlocaladdress=- downstreamremoteaddress=172.31.0.2:47798 downstreamlocaladdress=172.31.0.5:8080 forwardedfor=- useragent="curl/7.58.0" responsesize=0 responsecode=301
envoyA_1 | [2019-02-04 20:51:14.822][20][debug][connection] [source/common/network/connection_impl.cc:501] [C0] remote close
envoyA_1 | [2019-02-04 20:51:14.822][20][debug][connection] [source/common/network/connection_impl.cc:183] [C0] closing socket: 0
envoyA_1 | [2019-02-04 20:51:14.822][20][debug][main] [source/server/connection_handler_impl.cc:68] [C0] adding to cleanup list