
Kibana with Ingress - Endpoint has no IP #216

Closed
Sikwan opened this issue Jul 9, 2019 · 5 comments

@Sikwan

Sikwan commented Jul 9, 2019

Chart version: 7.2.0

Kubernetes version: 1.13.6-gke.13

Kubernetes provider: GKE

Helm Version: 2.14.1

helm get release output

REVISION: 1
RELEASED: Tue Jul  9 14:54:53 2019
CHART: kibana-7.2.0
USER-SUPPLIED VALUES:
elasticsearchHosts: http://elasticsearch-master:9200
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Monitoring Authentication Required - kibanaadmin
    nginx.ingress.kubernetes.io/auth-secret: monitoring-ingress-auth
    nginx.ingress.kubernetes.io/auth-type: basic
  enabled: true
  hosts:
  - beta-monitoring.xeecloud.io
  path: /hizen/kibana
  tls:
  - hosts:
    - beta-monitoring.xeecloud.io
    secretName: beta-monitoring-xeecloud-io-tls
kibanaConfig:
  kibana.yml: |
    server.basePath: /hizen/kibana
    server.rewriteBasePath: true
service:
  nodePort: 32601
  type: NodePort

COMPUTED VALUES:
affinity: {}
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
elasticsearchHosts: http://elasticsearch-master:9200
elasticsearchURL: ""
extraEnvs: []
fullnameOverride: ""
healthCheckPath: /app/kibana
httpPort: 5601
image: docker.elastic.co/kibana/kibana
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.2.0
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Monitoring Authentication Required - kibanaadmin
    nginx.ingress.kubernetes.io/auth-secret: monitoring-ingress-auth
    nginx.ingress.kubernetes.io/auth-type: basic
  enabled: true
  hosts:
  - beta-monitoring.xeecloud.io
  path: /hizen/kibana
  tls:
  - hosts:
    - beta-monitoring.xeecloud.io
    secretName: beta-monitoring-xeecloud-io-tls
kibanaConfig:
  kibana.yml: |
    server.basePath: /hizen/kibana
    server.rewriteBasePath: true
maxUnavailable: 1
nameOverride: ""
nodeSelector: {}
podSecurityContext:
  fsGroup: 1000
priorityClassName: ""
protocol: http
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 1
resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 500m
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
serverHost: 0.0.0.0
service:
  annotations: {}
  nodePort: 32601
  port: 5601
  type: NodePort
serviceAccount: ""
tolerations: []
updateStrategy:
  type: Recreate

HOOKS:
MANIFEST:

---
# Source: kibana/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-kibana-config
  labels:
    app: kibana
    release: "kibana"
data:
  kibana.yml: |
    server.basePath: /hizen/kibana
    server.rewriteBasePath: true
---
# Source: kibana/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-kibana
  labels:
    app: kibana
    release: "kibana"
    heritage: Tiller
spec:
  type: NodePort
  ports:
    - port: 5601
      nodePort: 32601
      protocol: TCP
      name: http
      targetPort: 5601
  selector:
    app: kibana
    release: "kibana"
---
# Source: kibana/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-kibana
  labels:
    app: kibana
    release: "kibana"
spec:
  replicas: 1
  strategy:
    type: Recreate

  selector:
    matchLabels:
      app: kibana
      release: "kibana"
  template:
    metadata:
      labels:
        app: kibana
        release: "kibana"
      annotations:

        configchecksum: dac182bd9bd08905e383049bb219ef778ba0adaddda1c4b57fdb9fc6b94d59d
    spec:
      securityContext:
        fsGroup: 1000

      volumes:
        - name: kibanaconfig
          configMap:
            name: kibana-kibana-config
      containers:
      - name: kibana
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000

        image: "docker.elastic.co/kibana/kibana:7.2.0"
        env:
          - name: ELASTICSEARCH_HOSTS
            value: "http://elasticsearch-master:9200"
          - name: SERVER_HOST
            value: "0.0.0.0"
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5

          exec:
            command:
              - sh
              - -c
              - |
                #!/usr/bin/env bash -e
                http () {
                    local path="${1}"
                    set -- -XGET -s --fail

                    if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                      set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                    fi

                    curl -k "$@" "http://localhost:5601${path}"
                }

                http "/app/kibana"
        ports:
        - containerPort: 5601
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 500m

        volumeMounts:
          - name: kibanaconfig
            mountPath: /usr/share/kibana/config/kibana.yml
            subPath: kibana.yml
---
# Source: kibana/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-kibana
  labels:
    app: kibana
    release: kibana
    heritage: Tiller
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Monitoring Authentication Required - kibanaadmin
    nginx.ingress.kubernetes.io/auth-secret: monitoring-ingress-auth
    nginx.ingress.kubernetes.io/auth-type: basic

spec:
  tls:
    - hosts:
      - beta-monitoring.xeecloud.io
      secretName: beta-monitoring-xeecloud-io-tls

  rules:
    - host: beta-monitoring.xeecloud.io
      http:
        paths:
          - path: /hizen/kibana
            backend:
              serviceName: kibana-kibana
              servicePort: 5601

Describe the bug:
Trying to add an Ingress to my Kibana, I always end up with a 503 from the Ingress because the Kibana endpoint has no IPs.

Steps to reproduce:

  1. Deploy the helm chart with an Ingress configured
  2. Try to access Kibana through the Ingress
  3. Describe the kibana endpoint
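
For steps 2 and 3, something like this (a sketch; the host, service name, and namespace come from the manifests above, and curl will prompt for the basic-auth password):

curl -k -u kibanaadmin https://beta-monitoring.xeecloud.io/hizen/kibana
kubectl -n monitoring describe endpoints kibana-kibana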

Expected behavior:

The endpoint should have an IP, and Kibana should be reachable through the Ingress.

Provide logs and/or server output (if relevant):

NGINX Ingress Controller log when calling the endpoint (note the - - - - where the upstream IP should be):

10.0.3.1 - [10.0.3.1] - kibanaadmin [09/Jul/2019:13:01:03 +0000] "GET /hizen/kibana HTTP/2.0" 503 600 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 146 0.000 [monitoring-kibana-kibana-5601] - - - - b38299d11a01c44678218eb7525521b9

Any additional context:
I tried with both ClusterIP and NodePort; same behaviour.

values file

elasticsearchHosts: "http://elasticsearch-master:9200"

kibanaConfig:
  kibana.yml: |
    server.basePath: /hizen/kibana
    server.rewriteBasePath: true

service:
  type: NodePort
  nodePort: 32601

ingress:
  enabled: true
  path: /hizen/kibana
  hosts:
    - beta-monitoring.xeecloud.io
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: monitoring-ingress-auth
    nginx.ingress.kubernetes.io/auth-realm: "Monitoring Authentication Required - kibanaadmin"
  tls:
    - secretName: beta-monitoring-xeecloud-io-tls
      hosts:
        - beta-monitoring.xeecloud.io

kubectl describe output for the endpoint

Name:         kibana-kibana
Namespace:    monitoring
Labels:       app=kibana
              heritage=Tiller
              release=kibana
Annotations:  <none>
Subsets:
  Addresses:          <none>
  NotReadyAddresses:  10.0.1.83
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  5601  TCP

Events:  <none>
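
An address listed under NotReadyAddresses means the pod matches the service selector but is failing its readiness probe, so it never gets promoted into Addresses. One way to confirm (a sketch; the label comes from the deployment manifest above):

kubectl -n monitoring get pods -l app=kibana
kubectl -n monitoring describe pod -l app=kibana    # probe failures appear under Events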

Note:
I also tried setting server.host to 0.0.0.0 as mentioned in #156, but it does not help either and does not seem to be linked to the endpoint issue.

Crazybus added a commit that referenced this issue Jul 9, 2019
Make it clear that this setting needs to be updated if you are using a custom basePath like in #216
@Crazybus
Contributor

Crazybus commented Jul 9, 2019

I think the issue here is that the health check is failing because it hasn't been configured to look at the basePath. If the health check is failing then the pod isn't added into the service.

Can you try setting:

healthCheckPath: "/hizen/kibana/app/kibana"
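
In the context of the values file above, that would look like this (a sketch; healthCheckPath is the chart value that defaults to /app/kibana, as visible in the computed values):

healthCheckPath: "/hizen/kibana/app/kibana"

kibanaConfig:
  kibana.yml: |
    server.basePath: /hizen/kibana
    server.rewriteBasePath: true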

I'm also working on a PR to make sure this is mentioned in the readme. More details are in the original issue #103.

@Sikwan
Author

Sikwan commented Jul 10, 2019

Hi @Crazybus, thanks a lot for your reply! I tried that out, but I should have mentioned in my bug report that doing a port-forward on my service allows me to reach Kibana without any issues.

The pod is in the Ready state and all probes seem to be working well. Only the ingress is broken, which led me to investigate the endpoint.

(I have to admit I am not sure why a service without endpoints works with port-forward but not with the ingress; I am not that well versed in K8s internals...)
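
(For what it's worth: kubectl port-forward tunnels through the API server directly to a single pod instead of going through the Service's endpoints, which would explain why it works while the ingress does not. Illustrative commands, using the names from above:)

kubectl -n monitoring port-forward svc/kibana-kibana 5601:5601
curl -I http://localhost:5601/hizen/kibana/app/kibana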

I trashed my cluster and updated my Helm repo, then redeployed (with version 7.2.0). Now my master nodes are not starting either, and their endpoint is empty as well.

apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: "2019-07-09T21:21:10Z"
  labels:
    app: elasticsearch-master
    chart: elasticsearch-7.2.0
    heritage: Tiller
    release: elasticsearch-master
  name: elasticsearch-master
  namespace: monitoring
  resourceVersion: "18151443"
  selfLink: /api/v1/namespaces/monitoring/endpoints/elasticsearch-master
  uid: 78b6e9b9-a28f-11e9-a12f-42010a840048
subsets:
- notReadyAddresses:
  - ip: 10.0.1.109
    nodeName: gke-production-tooling-default-357f587a-gr94
    targetRef:
      kind: Pod
      name: elasticsearch-master-0
      namespace: monitoring
      resourceVersion: "18151440"
      uid: 78bc7724-a28f-11e9-a12f-42010a840048
  - ip: 10.0.2.13
    nodeName: gke-production-tooling-default-f470d1ba-pv9f
    targetRef:
      kind: Pod
      name: elasticsearch-master-1
      namespace: monitoring
      resourceVersion: "18151439"
      uid: 78bed52e-a28f-11e9-a12f-42010a840048
  - ip: 10.0.4.131
    nodeName: gke-production-tooling-default-f789ea72-rx7g
    targetRef:
      kind: Pod
      name: elasticsearch-master-2
      namespace: monitoring
      resourceVersion: "18151434"
      uid: 78c34f08-a28f-11e9-a12f-42010a840048
  ports:
  - name: transport
    port: 9300
    protocol: TCP
  - name: http
    port: 9200
    protocol: TCP

It has been stuck in that state for the last 9 hours without much change. My issue might come from my cluster rather than from this chart; I am in the process of deploying other charts to see if IPs cannot be assigned at all or if it is linked to the charts from this repo.

@Sikwan
Author

Sikwan commented Jul 10, 2019

Quick update: I just installed a random chart (MediaWiki from stable), and it does get IPs assigned without any issues.

It seems like something is not working with the Elasticsearch and Kibana charts, but I am not sure what yet.

@Sikwan
Author

Sikwan commented Jul 10, 2019

Disregard those last two comments: it turns out my persistent disks were not removed when I deleted the Helm release (they used to be removed with the old chart I was using!), so I kept corrupted state within the cluster.

Your remark solved it, thanks a lot!
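
For anyone hitting the same thing: Helm does not delete the PersistentVolumeClaims created by the Elasticsearch StatefulSet's volumeClaimTemplates, so old data (and any corruption in it) survives a helm delete. A cleanup sketch; the label selector is an assumption based on the endpoint labels above, so check the get output before deleting anything:

kubectl -n monitoring get pvc
kubectl -n monitoring delete pvc -l app=elasticsearch-master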

Sikwan closed this as completed Jul 10, 2019
@Crazybus
Contributor

I'm glad you got it working and thanks for following up!
