
TLS redirect_cleartext_from doesn't preserve path #1463

Closed · bpehling opened this issue Apr 24, 2019 · 27 comments

Describe the bug
The URL path is not preserved when redirect_cleartext_from is set.

To Reproduce

  1. Follow the TLS Termination documentation to create a cert and store it as a Kubernetes secret.

  2. Deploy Ambassador with Helm chart 2.2.1 and the following values:

service:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Module
      name: tls
      config:
        server:
          enabled: True
          secret: ambassador-certs
          redirect_cleartext_from: 8080
  3. Deploy the httpbin service to test the redirect:
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: httpbin_mapping
      prefix: /httpbin/
      service: httpbin.org:80
      host_rewrite: httpbin.org
spec:
  ports:
  - name: httpbin
    port: 80
  4. curl the endpoint over HTTP:
curl -Li http://hostname/httpbin/
  5. Result: the path is not preserved on the redirect:
HTTP/1.1 301 Moved Permanently
location: https://hostname/
date: Wed, 24 Apr 2019 20:12:19 GMT
server: envoy
content-length: 0

HTTP/2 404 
date: Wed, 24 Apr 2019 20:12:19 GMT
server: envoy

Expected behavior
The path should be preserved, redirecting to https://hostname/httpbin/.

Versions:

  • Ambassador: 0.60.0 (using Helm chart 2.2.1)
  • Kubernetes environment: AKS
  • Kubernetes version: 1.12.7


gsagula (Contributor) commented Apr 24, 2019

@kflynn I think this is related to the failing test that I was trying to fix.

gsagula self-assigned this Apr 24, 2019
bpehling (Author) commented Apr 25, 2019

@gsagula One additional note: the issue is not present in Ambassador Pro using the same Ambassador OSS version (OSS 0.60.0, Pro 0.4.0).

gsagula (Contributor) commented Apr 25, 2019

Strange. If it is a bug in Ambassador, it should fail consistently. I will take a look at it.

gsagula (Contributor) commented Apr 29, 2019

@bpehling I can't reproduce it. Do you have the whole config?

hpurmann commented:

We are facing the same problem with Ambassador 0.60.1 and GKE.

gsagula (Contributor) commented Apr 30, 2019 via email

hpurmann commented May 2, 2019

Ambassador values:

    ambassador:
      replicaCount: 3
      image:
        repository: quay.io/datawire/ambassador
        tag: 0.60.3
        pullPolicy: IfNotPresent
      env:
        AMBASSADOR_ID: api-gateway
      service:
        loadBalancerIP: <IP>
        http:
          enabled: true
          targetPort: 8080
        https:
          enabled: true
          targetPort: 8443
        annotations:
          getambassador.io/config: |
            ---
            apiVersion: ambassador/v1
            kind: Module
            name: ambassador
            ambassador_id: api-gateway
            config:
              service_port: 8443
            ---
            apiVersion: ambassador/v1
            kind: Module
            name: tls
            ambassador_id: api-gateway
            config:
              server:
                enabled: True
                redirect_cleartext_from: 8080
                secret: api-gateway-tls

Service config:

    service:
      annotations:
        getambassador.io/config: |
          ---
          apiVersion: ambassador/v1
          kind: Mapping
          name: service-root-mapping
          ambassador_id: api-gateway
          host: sub.myhost.com
          prefix: ^/$
          prefix_regex: true
          rewrite: /website
          service: "myservice.mynamespace:80"
          bypass_auth: true
          ---
          apiVersion: ambassador/v1
          kind: Mapping
          name: service-website-mapping
          ambassador_id: api-gateway
          host: sub.myhost.com
          prefix: /website
          rewrite: /website
          service: "myservice.mynamespace:80"
          bypass_auth: true
curl http://sub.myhost.com/foobar -v
*   Trying <IP>...
* TCP_NODELAY set
* Connected to sub.myhost.com (<IP>) port 80 (#0)
> GET /foobar HTTP/1.1
> Host: sub.myhost.com
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< location: https://sub.myhost.com/
< date: Thu, 02 May 2019 08:33:05 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host sub.myhost.com left intact

We'd expect the 301 to redirect to https://sub.myhost.com/foobar.

marcosArruda commented May 8, 2019

I think this is not directly related to TLS, since I'm having the same problem without the Ambassador TLS module. I have Ambassador load-balanced by an AWS ELB:

annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: example_events_mapping_health_check
      prefix: /health-check
      precedence: 1
      host: {{ .Values.global.mappingHost }}
      service: {{ include "example-events.name" . }}.{{ .Values.global.namespace }}:{{ .Values.global.service.port }}
      bypass_auth: true
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: example_events_mapping_global
      prefix: /
      host: {{ .Values.global.mappingHost }}
      service: {{ include "example-events.name" . }}.{{ .Values.global.namespace }}:{{ .Values.global.service.port }}
$ curl https://events.example.com.br/health-check -v
*   Trying ###.###.###.###...
* TCP_NODELAY set
* Connected to events.example.com.br (##.##.##.##) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=*.example.com.br
*  start date: Sep 13 00:00:00 2018 GMT
*  expire date: Oct 13 12:00:00 2019 GMT
*  subjectAltName: host "events.example.com.br" matched cert's "*.example.com.br"
*  issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
*  SSL certificate verify ok.
> GET /health-check HTTP/1.1
> Host: events.example.com.br
> User-Agent: curl/7.58.0
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< content-type: application/json;charset=UTF-8
< content-length: 103
< x-envoy-upstream-service-time: 6
< date: Tue, 07 May 2019 22:06:14 GMT
< server: envoy
< 
* Connection #0 to host events.example.com.br left intact
{"timestamp":"2019-05-07T22:06:14.447+0000","path":"/","status":404,"error":"Not Found","message":null}

We'd expect path: /health-check on that last line, and a 200 status.
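
One plausible explanation for this particular case: an Ambassador Mapping with no explicit rewrite rewrites the matched prefix to / by default, so the upstream seeing path "/" may be the documented default behavior rather than this bug. A sketch of the health-check Mapping with an explicit rewrite that would pass the original path upstream (reusing the templated values from above; untested here):

    apiVersion: ambassador/v1
    kind: Mapping
    name: example_events_mapping_health_check
    prefix: /health-check
    # Explicit rewrite: without this, Ambassador defaults to "rewrite: /".
    rewrite: /health-check
    precedence: 1
    host: {{ .Values.global.mappingHost }}
    service: {{ include "example-events.name" . }}.{{ .Values.global.namespace }}:{{ .Values.global.service.port }}
    bypass_auth: true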

bpehling (Author) commented May 8, 2019

@gsagula I was mistaken; this issue occurs when using Pro as well.

ambassador deployment

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "10"
  labels:
    app.kubernetes.io/instance: ambassador
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: ambassador
    helm.sh/chart: ambassador-2.3.1
  name: ambassador
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: ambassador
      app.kubernetes.io/name: ambassador
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/instance: ambassador
        app.kubernetes.io/name: ambassador
    spec:
      containers:
      - args:
        - --statsd.listen-udp=:8125
        - --web.listen-address=:9102
        - --statsd.mapping-config=/statsd-exporter/mapping-config.yaml
        image: prom/statsd-exporter:v0.8.1
        imagePullPolicy: IfNotPresent
        name: prometheus-exporter
        ports:
        - containerPort: 9102
          name: metrics
          protocol: TCP
        - containerPort: 8125
          name: listener
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /statsd-exporter/
          name: stats-exporter-mapping-config
          readOnly: true
      - env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: STATSD_ENABLED
          value: "true"
        - name: STATSD_HOST
          value: localhost
        - name: AMBASSADOR_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/datawire/ambassador:0.60.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ambassador/v0/check_alive
            port: admin
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        name: ambassador
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 8443
          name: https
          protocol: TCP
        - containerPort: 8877
          name: admin
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ambassador/v0/check_ready
            port: admin
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - env:
        - name: REDIS_SOCKET_TYPE
          value: tcp
        - name: REDIS_URL
          value: ambassador-pro-redis:6379
        - name: APRO_AUTH_PORT
          value: "8500"
        - name: GRPC_PORT
          value: "8501"
        - name: DEBUG_PORT
          value: "8502"
        - name: APP_LOG_LEVEL
          value: info
        - name: AMBASSADOR_LICENSE_KEY
          valueFrom:
            secretKeyRef:
              key: key
              name: ambassador-pro-license-key
        image: quay.io/datawire/ambassador_pro:amb-sidecar-0.4.0
        imagePullPolicy: IfNotPresent
        name: ambassador-pro
        ports:
        - containerPort: 8500
          name: grpc-auth
          protocol: TCP
        - containerPort: 8501
          name: grpc-ratelimit
          protocol: TCP
        - containerPort: 8502
          name: http-debug
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 8888
      serviceAccount: ambassador
      serviceAccountName: ambassador
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: exporterConfiguration
            path: mapping-config.yaml
          name: ambassador-exporter-config
        name: stats-exporter-mapping-config

ambassador service

apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Module
      name: ambassador
      config:
        enable_grpc_web: true
        service_port: 8443
        default_label_domain: ambassador
      ---
      apiVersion: ambassador/v1
      kind: Module
      name: tls
      config:
        server:
          enabled: True
          secret: ambassador-cert
          redirect_cleartext_from: 8080
          alpn_protocols: h2,http/1.1
  labels:
    app.kubernetes.io/instance: ambassador
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: ambassador
    helm.sh/chart: ambassador-2.3.1
  name: ambassador
  namespace: infra
spec:
  clusterIP: <IP>
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31919
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30894
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app.kubernetes.io/instance: ambassador
    app.kubernetes.io/name: ambassador
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: <IP>

ambassador-pro service

apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: AuthService
      name: ambassador-pro-auth
      proto: grpc
      auth_service: 127.0.0.1:8500
      allow_request_body: false # setting this to 'true' allows Plugin and External filters to access the body, but has performance overhead
      ---
      # This mapping needs to exist, but is never actually followed.
      apiVersion: ambassador/v1
      kind: Mapping
      name: callback_mapping
      prefix: /callback
      service: NoTaReAlSeRvIcE
      ---
      apiVersion: ambassador/v1
      kind: RateLimitService
      name: ambassador-pro-ratelimit
      service: 127.0.0.1:8501
  labels:
    app.kubernetes.io/instance: ambassador
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: ambassador
    helm.sh/chart: ambassador-2.3.1
    service: ambassador-pro
  name: ambassador-pro
  namespace: infra
spec:
  clusterIP: <IP>
  ports:
  - name: ratelimit-grpc
    port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

node-hello-world service

apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: node-hello-world
      prefix: /helloworld
      service: node-hello-world.default:8080
  labels:
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: node-hello-world
    app.kubernetes.io/release: node-hello-world
    group: monitor
    helm.sh/chart: node-hello-world-0.1.0
  name: node-hello-world
  namespace: default
spec:
  clusterIP: <IP>
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/name: node-hello-world
    app.kubernetes.io/release: node-hello-world
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Edit: using chart version 2.3.1 (Ambassador version 0.60.3).

christianhuening commented:
The same here with 0.61.1.

christianhuening commented:
0.70.0 sadly has the same problem :(

gsagula (Contributor) commented May 22, 2019

Looking into it now.

gsagula (Contributor) commented May 22, 2019

I think this may be related to envoyproxy/envoy#5292.

gsagula (Contributor) commented May 23, 2019

@christianhuening @bpehling Can you please confirm on your end that the above issue is causing the error? Thank you.

christianhuening commented May 23, 2019

@gsagula This very much looks like the issue, yes. I'll take a look at our Envoy config to be sure.

christianhuening commented:

@gsagula Well... in our config the domain is set to [*], so it appears to be a different issue.

gsagula (Contributor) commented May 23, 2019

@christianhuening Thank you for getting back to me. Would you mind sharing your Envoy config, please?

christianhuening commented:

I think the interesting bit is this:

"route_config": {
    "virtual_hosts": [
        {
            "domains": [
                "*"
            ],
            "name": "backend",
            "routes": [
                {
                    "match": {
                        "case_sensitive": true,
                        "prefix": "/.well-known/acme-challenge"
                    },
                    "route": {
                        "priority": null,
                        "timeout": "3.000s",
                        "weighted_clusters": {
                            "clusters": [
                                {
                                    "name": "cluster_acme_challenge_service",
                                    "weight": 100
                                }
                            ]
                        }
                    }
                },

christianhuening commented:

We got it solved for now by adding an nginx sidecar which does the redirect.

bpehling (Author) commented May 23, 2019

My config has "domains": ["*"] as well:

"route_config": {
    "virtual_hosts": [
        {
            "domains": [
                "*"
            ],
            "name": "backend",
            "require_tls": "EXTERNAL_ONLY",
            "routes": [
                {
                    "match": {
                        "prefix": "/"
                    },
                    "redirect": {
                        "https_redirect": true,
                        "path_redirect": "/"
                    }
                }
            ]
        }

gsagula (Contributor) commented May 23, 2019

@bpehling @christianhuening Thanks for the info. I will need to debug Envoy.

gsagula (Contributor) commented May 28, 2019

The problem is that we shouldn't be setting "path_redirect": "/" if the intention is to preserve the path of the request. There is logic in Envoy for that:

if (!path_redirect_.empty()) {
  // An explicitly configured path_redirect always overrides the request path.
  final_path = path_redirect_.c_str();
} else {
  // No path_redirect: preserve the original request path,
  // optionally stripping the query string.
  ASSERT(headers.Path());
  final_path = headers.Path()->value().getStringView();
  if (strip_query_) {
    size_t path_end = final_path.find("?");
    if (path_end != absl::string_view::npos) {
      final_path = final_path.substr(0, path_end);
    }
  }
}

I will open a PR shortly with the fix. Thanks!
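
Given that Envoy logic and the route config bpehling posted above, the fix presumably amounts to Ambassador no longer emitting path_redirect on the cleartext-redirect route, so Envoy falls through to preserving the incoming path. A sketch of what the corrected route should look like (same virtual host as in the dump above; illustrative only):

    "routes": [
        {
            "match": {
                "prefix": "/"
            },
            "redirect": {
                "https_redirect": true
            }
        }
    ]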

gsagula (Contributor) commented May 28, 2019

@bpehling @christianhuening If you want to give it a try: quay.io/datawire/ambassador:redirect-path-6f9674a

christianhuening commented:

@trevex can you try it? I am very busy today.

kflynn added this to the pedrera milestone May 28, 2019
bpehling (Author) commented:

@gsagula I'm getting an error pulling the image:

Warning  Failed     13s (x2 over 23s)  kubelet, aks-nodepool1-28725430-2  Failed to pull image "quay.io/datawire/ambassador:redirect-path-6f9674a": rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/datawire/ambassador:redirect-path-6f9674a not found: manifest unknown: manifest unknown

gsagula (Contributor) commented May 28, 2019

Hey, sorry guys. Try this one: quay.io/gsagula/ambassador:redirect-path-6f9674ae
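
For anyone else testing, a minimal sketch of swapping the candidate image into the deployment shown earlier (the deployment name ambassador and namespace infra come from bpehling's config; adjust for your cluster):

    # Point the ambassador container at the candidate image and wait for the rollout.
    kubectl -n infra set image deployment/ambassador \
      ambassador=quay.io/gsagula/ambassador:redirect-path-6f9674ae
    kubectl -n infra rollout status deployment/ambassador

    # Re-test the cleartext redirect; the Location header should now keep the path.
    curl -Li http://hostname/httpbin/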

bpehling (Author) commented:

@gsagula It worked for me 👍

kflynn closed this as completed Jun 6, 2019