
UPSTREAM_TRANSPORT_FAILURE_REASON not populated when tls handshake fails with unsupported cipher suites #16991

Closed
jstewmon opened this issue Jun 15, 2021 · 11 comments
Labels
bug · investigate (Potential bug that needs verification) · stale (stalebot believes this issue/PR has not been touched recently)

Comments

@jstewmon
Contributor

jstewmon commented Jun 15, 2021

Title: UPSTREAM_TRANSPORT_FAILURE_REASON not populated when tls handshake fails with unsupported cipher suites

Description:

We deployed an updated Envoy image yesterday, which included a version change from 1.16 to 1.18. Following the deployment, Envoy started issuing local 503 replies with RESPONSE_CODE_DETAILS of upstream_reset_before_response_started{connection_failure}.

I added upstreamTransportFailureReason: '%UPSTREAM_TRANSPORT_FAILURE_REASON%' to the access logs (json format), but the value was null (formatted for readability):

{
  "resHeaderXEnvoyRatelimited": null,
  "reqHeaderXClientTraceId": null,
  "resBytes": 91,
  "routeName": "default",
  "reqIpTags": null,
  "reqAuthority": "<redacted>",
  "time": "1623785661913",
  "reqId": "58f7854d-a080-418e-b201-5704ea00dd3a",
  "resStatusCode": 503,
  "reqHeaderUserAgent": "curl/7.64.1",
  "timingUpstream": null,
  "upstreamTransportFailureReason": null,
  "resStatusCodeDetails": "upstream_reset_before_response_started{connection_failure}",
  "downstreamRemoteAddress": "172.25.0.1",
  "reqMethod": "GET",
  "resHeaderXEnvoyOverloaded": null,
  "reqBytes": 0,
  "timingDuration": 50,
  "upstreamHost": "172.26.83.124:30143",
  "upstreamCluster": "<name>",
  "reqHeaderXForwardedFor": "172.25.0.1",
  "reqPath": "/",
  "resFlags": "UF",
  "reqHeaderXClntReqUuid": null,
  "reqProtocol": "HTTP/1.1",
  "resHeaderContentType": "text/plain"
}

I set the Envoy log level to trace, which produced the following clues:

[2021-06-15 02:55:09.020][40][trace][http] [source/common/http/http1/codec_impl.cc:604] [C0] parsed 111 bytes
[2021-06-15 02:55:09.020][40][trace][connection] [source/common/network/connection_impl.cc:547] [C0] socket event: 2
[2021-06-15 02:55:09.020][40][trace][connection] [source/common/network/connection_impl.cc:656] [C0] write ready
[2021-06-15 02:55:09.044][40][trace][connection] [source/common/network/connection_impl.cc:547] [C1] socket event: 2
[2021-06-15 02:55:09.044][40][trace][connection] [source/common/network/connection_impl.cc:656] [C1] write ready
[2021-06-15 02:55:09.044][40][debug][connection] [source/common/network/connection_impl.cc:665] [C1] connected
[2021-06-15 02:55:09.044][40][trace][connection] [source/extensions/transport_sockets/tls/ssl_handshaker.cc:237] [C1] ssl error occurred while read: WANT_READ
[2021-06-15 02:55:09.060][40][trace][connection] [source/common/network/connection_impl.cc:547] [C1] socket event: 3
[2021-06-15 02:55:09.061][40][trace][connection] [source/common/network/connection_impl.cc:656] [C1] write ready
[2021-06-15 02:55:09.061][40][trace][connection] [source/extensions/transport_sockets/tls/ssl_handshaker.cc:237] [C1] ssl error occurred while read: SYSCALL
[2021-06-15 02:55:09.061][40][debug][connection] [source/common/network/connection_impl.cc:243] [C1] closing socket: 0
[2021-06-15 02:55:09.061][40][trace][connection] [source/common/network/connection_impl.cc:410] [C1] raising connection event 0
[2021-06-15 02:55:09.061][40][debug][client] [source/common/http/codec_client.cc:101] [C1] disconnect. resetting 0 pending requests
[2021-06-15 02:55:09.061][40][debug][pool] [source/common/conn_pool/conn_pool_base.cc:393] [C1] client disconnected, failure reason:
[2021-06-15 02:55:09.061][40][debug][router] [source/common/router/router.cc:1091] [C0][S2192706120000029174] upstream reset: reset reason: connection failure, transport failure reason:
[2021-06-15 02:55:09.061][40][debug][http] [source/common/http/filter_manager.cc:883] [C0][S2192706120000029174] Sending local reply with details upstream_reset_before_response_started{connection failure}

I noticed a debug message in the log that also appears to be missing a value: "upstream reset: reset reason: connection failure, transport failure reason:"

It turns out that Envoy 1.17 removed support for weak cipher suites (ref: #5401), and the upstream had an old list of allowed ciphers, all of which were weak 🤦

I figured this out by searching for "tls" in the version history of the major version releases, finding the relevant PRs, etc.

I would have expected some stat, debug log, or the UPSTREAM_TRANSPORT_FAILURE_REASON value to indicate something like "TLS handshake failure: no supported ciphers".

Repro steps:
A TLS upstream with no supported ciphers. In my case the upstream had the following cipher suites enabled:

TLS_DHE_DSS_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
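As a sanity check (not part of the original report), one can probe which of these suites a local OpenSSL-backed stack still accepts, using Python's ssl module. The IANA-to-OpenSSL name mapping below is done by hand and should be verified against `openssl ciphers -V`:

```python
import ssl

def cipher_supported(openssl_name: str) -> bool:
    """True if the local OpenSSL build accepts this cipher name for TLS <= 1.2."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    try:
        ctx.set_ciphers(openssl_name)
    except ssl.SSLError:
        return False  # e.g. "No cipher can be selected"
    # set_ciphers can succeed without enabling the requested suite; double-check.
    return any(c["name"] == openssl_name for c in ctx.get_ciphers())

# Hand-mapped OpenSSL names for two of the IANA suites listed above.
for name in ("ECDHE-ECDSA-AES128-SHA",   # TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
             "ECDHE-RSA-AES256-SHA"):    # TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
    print(name, "supported" if cipher_supported(name) else "unsupported")
```

Whether the CBC-SHA1 suites show up as supported depends on the local library build and its security level, which is exactly the portability point discussed later in this thread.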

Admin and Stats Output:

These are all the stats after making 3 requests which route to the upstream with unsupported ciphers:

cluster.<name>.circuit_breakers.default.cx_open: 0
cluster.<name>.circuit_breakers.default.cx_pool_open: 0
cluster.<name>.circuit_breakers.default.rq_open: 0
cluster.<name>.circuit_breakers.default.rq_pending_open: 0
cluster.<name>.circuit_breakers.default.rq_retry_open: 0
cluster.<name>.circuit_breakers.high.cx_open: 0
cluster.<name>.circuit_breakers.high.cx_pool_open: 0
cluster.<name>.circuit_breakers.high.rq_open: 0
cluster.<name>.circuit_breakers.high.rq_pending_open: 0
cluster.<name>.circuit_breakers.high.rq_retry_open: 0
cluster.<name>.internal.upstream_rq_completed: 3
cluster.<name>.ssl.connection_error: 0
cluster.<name>.ssl.fail_verify_cert_hash: 0
cluster.<name>.ssl.fail_verify_error: 0
cluster.<name>.ssl.fail_verify_no_cert: 0
cluster.<name>.ssl.fail_verify_san: 0
cluster.<name>.ssl.handshake: 0
cluster.<name>.ssl.no_certificate: 0
cluster.<name>.ssl.ocsp_staple_failed: 0
cluster.<name>.ssl.ocsp_staple_omitted: 0
cluster.<name>.ssl.ocsp_staple_requests: 0
cluster.<name>.ssl.ocsp_staple_responses: 0
cluster.<name>.ssl.session_reused: 0
cluster.<name>.upstream_cx_connect_fail: 3
cluster.<name>.upstream_cx_connect_timeout: 0
cluster.<name>.upstream_cx_destroy_with_active_rq: 0
cluster.<name>.upstream_cx_idle_timeout: 0
cluster.<name>.upstream_cx_protocol_error: 0
cluster.<name>.upstream_cx_total: 3
cluster.<name>.upstream_rq_active: 0
cluster.<name>.upstream_rq_completed: 3
cluster.<name>.upstream_rq_pending_active: 0
cluster.<name>.upstream_rq_pending_total: 3
cluster.<name>.upstream_rq_timeout: 0

Logs:
See description above.

@jstewmon jstewmon added bug triage Issue requires triage labels Jun 15, 2021
@alyssawilk alyssawilk added investigate Potential bug that needs verification and removed triage Issue requires triage labels Jun 21, 2021
@alyssawilk
Contributor

cc @ggreenway

@jstewmon
Contributor Author

Before I realized support had been dropped, I tried configuring the upstream TLS context to support ECDHE-ECDSA-AES128-SHA. Envoy seems to silently ignore this configuration and falls back to the default. I'd expect a warning if some cipher suites aren't supported and an error if none of them are supported.

transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
    common_tls_context:
      tls_params: {"cipher_suites":["ECDHE-ECDSA-AES128-SHA"]}

@ggreenway
Contributor

> Before I realized support had been dropped, I tried configuring the upstream TLS context to support ECDHE-ECDSA-AES128-SHA. Envoy seems to silently ignore this configuration and falls back to the default.

That's very surprising to me. Can you check if there was anything in the logs about this when you ran with that configuration?

@jstewmon
Contributor Author

This is the info log including one request:

 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:330] initializing epoch 0 (base id=0, hot restart version=11.104)
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:332] statically linked extensions:
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.dubbo_proxy.route_matchers: default
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.http.cache: envoy.extensions.http.cache.simple
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.wasm.runtime: envoy.wasm.runtime.null, envoy.wasm.runtime.v8
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.resolvers: envoy.ip
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.matching.common_inputs: envoy.matching.common_inputs.environment_variable
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, starttls, tls
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.skywalking, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.bootstrap: envoy.bootstrap.wasm, envoy.extensions.network.socket_interface.default_socket_interface
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.upstreams: envoy.filters.connection_pools.tcp.generic
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.http.stateful_header_formatters: preserve_case
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.compression.decompressor: envoy.compression.brotli.decompressor, envoy.compression.gzip.decompressor
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.ext_proc, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.composite, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.ext_proc, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth2, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.filters.http.wasm, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash, match-wrapper
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.dubbo_proxy.protocols: dubbo
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.health_checkers: envoy.health_checkers.redis
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.thrift_proxy.transports: auto, framed, header, unframed
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.open_telemetry, envoy.access_loggers.stderr, envoy.access_loggers.stdout, envoy.access_loggers.tcp_grpc, envoy.access_loggers.wasm, envoy.file_access_log, envoy.http_grpc_access_log, envoy.open_telemetry_access_log, envoy.stderr_access_log, envoy.stdout_access_log, envoy.tcp_grpc_access_log, envoy.wasm_access_log
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.tls.cert_validator: envoy.tls.cert_validator.default, envoy.tls.cert_validator.spiffe
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.upstream_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions, envoy.upstreams.http.http_protocol_options
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.matching.http.input: request-headers, request-trailers, response-headers, response-trailers
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.matching.action: composite-action, skip
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.rate_limit_descriptors: envoy.rate_limit_descriptors.expr
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, tls
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.matching.input_matchers: envoy.matching.matchers.consistent_hashing
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.wasm, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.retry_priorities: envoy.retry_priorities.previous_priorities
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.compression.compressor: envoy.compression.brotli.compressor, envoy.compression.gzip.compressor
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.stat_sinks.wasm, envoy.statsd
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.dubbo_proxy.serializers: dubbo.hessian2
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
 [2021-06-21 17:59:00.947][17][info][main] [source/server/server.cc:334]   envoy.request_id: envoy.request_id.uuid
 [2021-06-21 17:59:00.974][17][warning][misc] [source/common/protobuf/message_validator_impl.cc:21] Deprecated field: type envoy.config.bootstrap.v3.Admin Using deprecated option 'envoy.config.bootstrap.v3.Admin.access_log_path' from file bootstrap.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/version_history/version_history for details. If continued use of this field is absolutely necessary, see https://www.envoyproxy.io/docs/envoy/latest/configuration/operations/runtime#using-runtime-overrides-for-deprecated-features for how to apply a temporary and highly discouraged override.
 [2021-06-21 17:59:00.974][17][info][main] [source/server/server.cc:350] HTTP header map info:
 [2021-06-21 17:59:00.975][17][info][main] [source/server/server.cc:353]   request header map: 632 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authentication,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
 [2021-06-21 17:59:00.975][17][info][main] [source/server/server.cc:353]   request trailer map: 144 bytes:
 [2021-06-21 17:59:00.975][17][info][main] [source/server/server.cc:353]   response header map: 440 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
 [2021-06-21 17:59:00.975][17][info][main] [source/server/server.cc:353]   response trailer map: 168 bytes: grpc-message,grpc-status
 [2021-06-21 17:59:00.976][17][info][main] [source/server/server.cc:500] admin address: 0.0.0.0:9901
 [2021-06-21 17:59:00.977][17][info][main] [source/server/server.cc:667] runtime: layers:
   - name: base
     static_layer:
       re2:
         max_program_size:
           error_level: 300
 [2021-06-21 17:59:00.978][17][info][config] [source/server/configuration_impl.cc:128] loading tracing configuration
 [2021-06-21 17:59:00.978][17][info][config] [source/server/configuration_impl.cc:88] loading 0 static secret(s)
 [2021-06-21 17:59:00.978][17][info][config] [source/server/configuration_impl.cc:94] loading 16 cluster(s)
 [2021-06-21 17:59:00.998][17][info][config] [source/server/configuration_impl.cc:98] loading 2 listener(s)
 [2021-06-21 17:59:01.006][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:188] envoy_on_response() function not found. Lua filter will not hook responses.
 [2021-06-21 17:59:01.007][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:183] envoy_on_request() function not found. Lua filter will not hook requests.
 [2021-06-21 17:59:01.007][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:183] envoy_on_request() function not found. Lua filter will not hook requests.
 [2021-06-21 17:59:01.008][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:188] envoy_on_response() function not found. Lua filter will not hook responses.
 [2021-06-21 17:59:01.008][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:188] envoy_on_response() function not found. Lua filter will not hook responses.
 [2021-06-21 17:59:01.009][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:188] envoy_on_response() function not found. Lua filter will not hook responses.
 [2021-06-21 17:59:01.015][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:188] envoy_on_response() function not found. Lua filter will not hook responses.
 [2021-06-21 17:59:01.016][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:183] envoy_on_request() function not found. Lua filter will not hook requests.
 [2021-06-21 17:59:01.016][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:183] envoy_on_request() function not found. Lua filter will not hook requests.
 [2021-06-21 17:59:01.016][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:188] envoy_on_response() function not found. Lua filter will not hook responses.
 [2021-06-21 17:59:01.017][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:188] envoy_on_response() function not found. Lua filter will not hook responses.
 [2021-06-21 17:59:01.017][17][info][lua] [source/extensions/filters/http/lua/lua_filter.cc:188] envoy_on_response() function not found. Lua filter will not hook responses.
 [2021-06-21 17:59:01.020][17][info][config] [source/server/configuration_impl.cc:110] loading stats configuration
 [2021-06-21 17:59:01.020][17][info][main] [source/server/server.cc:764] starting main dispatch loop
 [2021-06-21 17:59:01.112][17][info][runtime] [source/common/runtime/runtime_impl.cc:428] RTDS has finished initialization
 [2021-06-21 17:59:01.112][17][info][upstream] [source/common/upstream/cluster_manager_impl.cc:192] cm init: all clusters initialized
 [2021-06-21 17:59:01.112][17][info][main] [source/server/server.cc:745] all clusters initialized. initializing init manager
 [2021-06-21 17:59:01.112][17][info][config] [source/server/listener_manager_impl.cc:888] all dependencies initialized. starting workers
 [2021-06-21 17:59:01.114][17][warning][main] [source/server/server.cc:642] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
 {"resStatusCodeDetails":"upstream_reset_before_response_started{connection_failure}","resBytes":91,"reqHeaderXClientTraceId":null,"upstreamHost":"172.26.83.124:30143","reqPath":"/","userJwtSub":null,"routeName":"webApi.default","resHeaderContentType":"text/plain","resStatusCode":503,"reqHeaderXForwardedFor":"172.25.0.1","reqAuthority":"<hostname>","timingDuration":39,"reqId":"19b9784c-956e-45f9-a980-3e10dd854526","timingUpstream":null,"downstreamRemoteAddress":"172.25.0.1","reqMethod":"GET","reqIpTags":null,"reqBytes":0,"time":"1624298353737","resFlags":"UF","resHeaderXEnvoyOverloaded":null,"reqHeaderXClntReqUuid":null,"reqProtocol":"HTTP/1.1","resHeaderXEnvoyRatelimited":null,"reqHeaderUserAgent":"curl/7.64.1","upstreamTransportFailureReason":null,"userJwtJti":null,"upstreamCluster":"<clusterName>"}

This is the cluster configuration:

    - name: <clusterName>
      type: STRICT_DNS
      connect_timeout: 0.5s
      dns_lookup_family: V4_ONLY
      ignore_health_on_host_removal: true
      respect_dns_ttl: true
      lb_policy: RANDOM
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
          common_tls_context:
            tls_params: {"cipher_suites":["ECDHE-ECDSA-AES128-SHA"]}
            validation_context:
              trusted_ca:
                filename: /etc/ssl/certs/ca-certificates.crt
          sni: <hostname>
      load_assignment:
        cluster_name: <clusterName>
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: <hostname>
                      port_value: 30143

@jstewmon
Contributor Author

I'm running envoy in docker with the envoy-alpine:v1.18.3 image.

@ggreenway
Contributor

Huh, not sure why it's not working as expected. I would have expected to hit this error:

throw EnvoyException(fmt::format("Failed to initialize cipher suites {}. The following "
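For what it's worth, an OpenSSL-backed stack does reject a cipher list eagerly when nothing in it is usable. A minimal sketch with Python's ssl module, illustrating (not reproducing) the configuration-time failure mode the quoted EnvoyException guards against; "BOGUS-CIPHER" is a made-up name that matches nothing:

```python
import ssl

# If no usable cipher can be selected from the configured list, the
# library fails at configuration time rather than at handshake time.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
try:
    ctx.set_ciphers("BOGUS-CIPHER")  # hypothetical name: nothing matches
    print("accepted")
except ssl.SSLError as exc:
    print("rejected at configuration time:", exc)
```

The open question in this issue is why Envoy's equivalent check did not fire for a suite its TLS library had dropped.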

@github-actions

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale stalebot believes this issue/PR has not been touched recently label Jul 21, 2021
@lambdai
Contributor

lambdai commented Jul 22, 2021

> Before I realized support had been dropped, I tried configuring the upstream TLS context to support ECDHE-ECDSA-AES128-SHA. Envoy seems to silently ignore this configuration and falls back to the default. I'd expect a warning if some cipher suites aren't supported and an error if none of them are supported.
>
> transport_socket:
>   name: envoy.transport_sockets.tls
>   typed_config:
>     '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
>     common_tls_context:
>       tls_params: {"cipher_suites":["ECDHE-ECDSA-AES128-SHA"]}

ECDHE-ECDSA-AES128-SHA is in the supported list if you are using BoringSSL. I think the result is the same with OpenSSL.

But if TLS 1.3 is negotiated, this suite is ignored.
https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto

(repeated string) If specified, the TLS listener will only support the specified cipher list when negotiating TLS 1.0-1.2 (this setting has no effect when negotiating TLS 1.3).
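Given that caveat, one way to make a configured cipher_suites list actually take effect is to cap negotiation at TLS 1.2 via tls_params. A sketch, assuming the v3 TlsParameters field names:

```yaml
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
    common_tls_context:
      tls_params:
        # cipher_suites only applies when TLS 1.0-1.2 is negotiated,
        # so capping the maximum version makes the list effective.
        tls_maximum_protocol_version: TLSv1_2
        cipher_suites:
          - ECDHE-ECDSA-AES128-SHA
```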

@lambdai
Contributor

lambdai commented Jul 22, 2021

> Huh, not sure why it's not working as expected. I would have expected to hit this error:
>
> throw EnvoyException(fmt::format("Failed to initialize cipher suites {}. The following "

Whether a cipher suite is supported is decided by the underlying library at build or link time. If the suite is not supported, you might see the LDS or SDS update NACKed along with this exception. However, if you build with BoringSSL, that cipher suite is supported.

Per the above, this exception is raised at configuration time when the update is NACKed, not during the SSL handshake, so it won't be seen when a handshake fails.

@github-actions github-actions bot removed the stale stalebot believes this issue/PR has not been touched recently label Jul 22, 2021
@github-actions

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale stalebot believes this issue/PR has not been touched recently label Aug 21, 2021
@github-actions

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.
