Kibana with Ingress - Endpoint has no IP #216
Make it clear that this setting needs to be updated if you are using a custom basePath like in #216
I think the issue here is that the health check is failing because it hasn't been configured to look at the basePath. If the health check is failing then the pod isn't added into the service. Can you try setting:
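The values snippet referenced by "Can you try setting:" did not survive extraction. A plausible reconstruction, assuming the chart's `healthCheckPath` value and a hypothetical `server.basePath` of `/kibana` (both the path and the setting shown here are assumptions for illustration, not a quote of the original suggestion):

```yaml
# Hypothetical values.yaml override: point the readiness probe at the
# custom basePath so the health check passes and the pod is added to
# the Service's endpoints. "/kibana" is an assumed basePath.
healthCheckPath: "/kibana/app/kibana"
```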
I'm also working on a PR to make sure this is mentioned in the readme. More details are in the original issue #103 |
Hi @Crazybus, thanks a lot for your reply! I tried that out, but I should have mentioned in my bug report that doing a port-forward on my service allows me to reach Kibana without any issues. The Pod is in the Ready state and all probes seem to be working well. Only the ingress is broken, which led me to investigate the endpoint. (I have to admit I am not sure why a service without an endpoint works with port-forward but not with ingress, but I am not that well versed in K8s internals ...) I trashed my cluster, updated my helm repo, and redeployed it (with version 7.2.0). Now my master nodes are not starting either, and the endpoint is empty as well.
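A likely explanation for the port-forward observation: `kubectl port-forward` on a Service resolves to a backing pod and tunnels to it directly, while an Ingress routes through the Service's endpoints list, so the two can disagree. A quick way to compare the two paths (the release name `kibana-kibana` and port `5601` are assumptions):

```shell
# The endpoints list is what the Ingress ultimately uses;
# an empty ADDRESSES column explains the 503s.
kubectl get endpoints kibana-kibana

# port-forward tunnels straight to a pod, so it can still work
# even when the endpoints list is empty.
kubectl port-forward svc/kibana-kibana 5601:5601
```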
It has been stuck in that state for the last 9 hours without much change. My issue might come from my cluster rather than from this chart; I am in the process of deploying other charts just to see whether I cannot assign IPs at all or whether it is linked to the charts from this repo. |
Quick update: I just installed a random chart (MediaWiki from stable) and it assigns IPs without any issues. Something seems to be wrong with the Elastic and Kibana charts, but I am not sure what yet. |
Disregard those last two comments: it turns out my persistent disks were not removed when deleting the helm release (they used to be on the old chart I was using!), so the corruption stayed in the cluster. Your remark solved it, thanks a lot! |
I'm glad you got it working and thanks for following up! |
Chart version: 7.2.0
Kubernetes version: 1.13.6-gke.13
Kubernetes provider: GKE
Helm Version: 2.14.1
helm get release output
Describe the bug:
Trying to add an ingress to my Kibana, I always end up with a 503 from my Ingress because the Kibana endpoint has no IPs.
Steps to reproduce:
Expected behavior:
Endpoint should have an IP and be reachable through the ingress.
Provide logs and/or server output (if relevant):
NGINX Ingress Controller log when calling the endpoint (note that we have - - - - instead of an IP)
Any additional context:
I tried with ClusterIP and NodePort, same behaviour.
values file
describe of the endpoint
Note:
I tried setting server.host to 0.0.0.0 as mentioned in #156, but it does not work either and does not seem to be related to the endpoint.