
remote ip address not preserved in logs #3431

Closed
mjhuber opened this issue Nov 16, 2018 · 18 comments

Comments

@mjhuber
Contributor

mjhuber commented Nov 16, 2018

NGINX Ingress controller version: 0.20.0
Kubernetes version (use kubectl version): 1.10.7-gke.11

  • Cloud provider or hardware configuration: GKE

What happened:
Nginx logs show a source IP of the internal kube-proxy IP address, even when the LoadBalancer is set externalTrafficPolicy: Local. Ex:

10.41.12.1 - [10.41.12.1] - - [16/Nov/2018:21:19:32 +0000] "GET /test HTTP/2.0" 403 186 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36" 1370 0.000 [staging-smart-80] - - - - 58dc0b0bb0ef204b6ef0448133e4c760

What you expected to happen:
The source IP in the logs should show the external remote ip address of the client.

How to reproduce it (as minimally and precisely as possible):

  1. Install ingress-nginx in GKE cluster.
  2. Make sure LoadBalancer service is set to externalTrafficPolicy: Local.

Anything else we need to know:

I'm using Kubernetes 1.10.7-gke.11 with externalTrafficPolicy: Local set on the LoadBalancer. Requests via HTTP and HTTPS always have a remote IP address set to the internal IP of the kube-proxy.

10.41.12.1 - [10.41.12.1] - - [16/Nov/2018:21:19:32 +0000] "GET /test HTTP/2.0" 403 186 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36" 1370 0.000 [staging-smart-80] - - - - 58dc0b0bb0ef204b6ef0448133e4c760
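For reference, the two client-address fields in that line (the leading plain field and the bracketed one) can be pulled out with a few lines of Python. This is just a parsing sketch of the default log layout quoted above, not controller code:

```python
import re

# One access-log line from the report above (user-agent shortened).
line = ('10.41.12.1 - [10.41.12.1] - - [16/Nov/2018:21:19:32 +0000] '
        '"GET /test HTTP/2.0" 403 186 "-" "Mozilla/5.0" 1370 0.000 '
        '[staging-smart-80] - - - - 58dc0b0bb0ef204b6ef0448133e4c760')

# The first field and the bracketed second field both carry the address the
# controller resolved as the client; here both show the internal IP.
m = re.match(r'(?P<first>\S+) - \[(?P<bracketed>[^\]]+)\]', line)
print(m.group('first'), m.group('bracketed'))
```

Both fields print `10.41.12.1`, confirming the client address was lost before the request reached nginx.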

I have tried adding use-proxy-protocols: "true" to the ConfigMap as others have suggested but that didn't change it.

Kubernetes version: 1.10.7-gke.11
ingress-nginx Helm Chart: nginx-ingress-0.31.0

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.31.0
    component: controller
    heritage: Tiller
    release: nginx-ingress-test
  name: nginx-ingress-test-controller
  namespace: ingress-test
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: nginx-ingress-test
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: controller
        release: nginx-ingress-test
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=ingress-test/nginx-ingress-test-default-backend
        - --publish-service=ingress-test/nginx-ingress-test-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx-test
        - --configmap=ingress-test/nginx-ingress-test-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 120
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-test
      serviceAccountName: nginx-ingress-test
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.31.0
    component: controller
    heritage: Tiller
    release: nginx-ingress-test
  name: nginx-ingress-test-controller
  namespace: ingress-test
spec:
  clusterIP: 10.42.95.125
  externalTrafficPolicy: Local
  healthCheckNodePort: 31576
  ports:
  - name: http
    nodePort: 32510
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 32031
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-test
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
data:
  enable-vts-status: "false"
  use-proxy-protocols: "true" # tried enabling/disabling this; no effect on the logs
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.31.0
    component: controller
    heritage: Tiller
    release: nginx-ingress-test
  name: nginx-ingress-test-controller
  namespace: ingress-test
@celamb4

celamb4 commented Dec 4, 2018

We are seeing the same behavior on GKE with nginx-ingress.
Environment GKE 1.11.2
Chart: nginx-ingress-0.30.0

$the_real_ip = kube-proxy IP --> Incorrect.

https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/log-format.md

@aledbf
Member

aledbf commented Dec 4, 2018

$the_real_ip = kube-proxy IP --> Incorrect.

That means the service type=LoadBalancer is missing externalTrafficPolicy: Local

@celamb4

celamb4 commented Dec 5, 2018

Got it working. Started working after recreating the pods. Not sure if related or just delay in log streams.

@mjhuber
Contributor Author

mjhuber commented Dec 6, 2018

Got it working. Started working after recreating the pods. Not sure if related or just delay in log streams.

@celamb4 did you make any config changes before you recreated the pods?

@aledbf
Member

aledbf commented Dec 7, 2018

Closing. The ingress controller does not create any cloud resource (i.e. the service type=LoadBalancer)

Please open an issue in the main Kubernetes repository

@aledbf aledbf closed this as completed Dec 7, 2018
@aledbf
Member

aledbf commented Dec 7, 2018

use-proxy-protocols: "true"

Just in case: the GCP load balancer doesn't support proxy protocol, so setting this will break nginx.

@aledbf
Member

aledbf commented Dec 7, 2018

@mjhuber you could try changing the ingress-nginx service (for example, adding an annotation) to trigger a sync of the service type=LoadBalancer.

Also, check in the GCP console that you only see one of the nodes as healthy (this means externalTrafficPolicy: Local is working correctly).
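One low-risk way to trigger that sync is a no-op metadata change on the Service; the annotation key and value below are placeholders (nothing reads them):

```yaml
# Hypothetical annotation added to the existing Service from the report above,
# just to nudge the cloud controller into re-syncing the load balancer.
metadata:
  annotations:
    resync-nudge: "1"
```

Equivalently, `kubectl annotate -n ingress-test svc nginx-ingress-test-controller resync-nudge=1` applies the same change in place.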

@sudermanjr

I annotated the service and the load balancers are set up correctly, only pointing to the nodes containing ingress controllers. I will watch the logs and see if any more incorrect IPs come through.

@sudermanjr

sudermanjr commented Dec 7, 2018

Okay, so on one cluster where we are seeing this issue the annotating of the service helped, and seems to have re-synced the service correctly. However, the issue on the other cluster seems different.

Here's a log snippet:

nginx-ingress-public-controller-74ff478ccd-wkljt nginx-ingress-controller 10.41.40.1 - [10.41.40.1]

The curious part is that the 10.41.0.0/16 subnet is the pod subnet, not the node subnet. So this issue is different than the other.

EDIT: Looking at more logs. All of the requests to the ingress controller seem to be coming from some variant of 10.42.XX.1

@anoopswsib

Great link, thanks for notifying us.

@gustavovalverde

For people landing here who run nginx as a NodePort service behind gce-ingress and want to preserve the client's source IP: there's no need for proxy protocol or other complex configuration. Installing with the following settings will do it (use an up-to-date nginx):

  config:
    enable-real-ip: "true"
    use-forwarded-headers: "true"
    proxy-real-ip-cidr: "130.211.0.0/22,35.191.0.0/[L7-LB_EXTERNAL_IP]/32"

@jsoref
Contributor

jsoref commented Nov 24, 2020

@gustavovalverde, thanks. FWIW, you're missing a 16 in the last line:

  config:
    enable-real-ip: "true"
    use-forwarded-headers: "true"
    proxy-real-ip-cidr: "130.211.0.0/22,35.191.0.0/16,[L7-LB_EXTERNAL_IP]/32"
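To see why those CIDRs matter: enable-real-ip with use-forwarded-headers makes nginx trust X-Forwarded-For only for hops inside the proxy-real-ip-cidr ranges, walking the chain from the right and skipping trusted addresses. Here is a rough sketch of that recursive selection, as an illustration only (not nginx's realip module); 203.0.113.10 is a documentation placeholder standing in for the L7 LB's external IP:

```python
import ipaddress

# Trusted proxy ranges from the corrected config above; the /32 entry is a
# placeholder for the real L7 load balancer IP.
trusted = [ipaddress.ip_network(c) for c in
           ("130.211.0.0/22", "35.191.0.0/16", "203.0.113.10/32")]

def real_ip(remote_addr, x_forwarded_for):
    """Walk the hop chain right-to-left, skipping trusted proxies, and
    return the first untrusted address (the apparent client)."""
    chain = [a.strip() for a in x_forwarded_for.split(",")] + [remote_addr]
    for addr in reversed(chain):
        if not any(ipaddress.ip_address(addr) in net for net in trusted):
            return addr
    return chain[0]  # every hop was a trusted proxy

# A GFE connects directly; the chain names the client and then the LB.
print(real_ip("35.191.3.4", "198.51.100.7, 203.0.113.10"))  # -> 198.51.100.7
```

With the typo'd config (no `/16` after `35.191.0.0`), the GFE hop would not be trusted and nginx would log the GFE address instead of the client's.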

@villesau

Can someone elaborate 130.211.0.0/22,35.191.0.0/[L7-LB_EXTERNAL_IP]/32 and 130.211.0.0/22,35.191.0.0/16,[L7-LB_EXTERNAL_IP]/32 a bit?

@jsoref
Contributor

jsoref commented May 13, 2022

What do you mean by elaborate?

Note that the fragment there has a typo (see my follow-up):
#3431 (comment)

@villesau

@jsoref I'd like to understand how you came up with the string 130.211.0.0/22,35.191.0.0/16,[L7-LB_EXTERNAL_IP]/32 :)

@jsoref
Contributor

jsoref commented May 13, 2022

https://cloud.google.com/load-balancing/docs/https

The IP address of the Google Front End (GFE) that connected to the backend. These IP addresses are in the 130.211.0.0/22 and 35.191.0.0/16 ranges.
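Membership in those two GFE ranges is easy to check mechanically; a quick sketch with Python's ipaddress module (the sample address is made up):

```python
import ipaddress

# Google Front End (GFE) source ranges from the GCP docs quoted above.
gfe_ranges = [ipaddress.ip_network("130.211.0.0/22"),
              ipaddress.ip_network("35.191.0.0/16")]

addr = ipaddress.ip_address("35.191.3.4")  # made-up sample address
print(any(addr in net for net in gfe_ranges))  # -> True
```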

@jsoref
Contributor

jsoref commented May 13, 2022

The L7-LB_EXTERNAL_IP is the IP you've been assigned by Google -- the thing someone uses to connect to the ingress from the internet. So we have a couple of those, but they're not things I feel like sharing here (although they aren't really secrets, since all of our customers connect to them, and they're obviously discoverable via DNS).

@villesau

Ah, ok. Thanks for clarifying @jsoref!
