
962 fixes - fix status of some targets being DOWN in Prometheus at the kind cluster creation step #963

Closed
wants to merge 3 commits
Conversation
Conversation

engchina

Fix the status of some targets being DOWN in Prometheus at the kind cluster creation step.

After investigation, the reason is that kube-controller-manager, kube-scheduler, and etcd are listening on localhost.
Following the troubleshooting-prometheus document, we also know that the metricsBindAddress of kube-proxy should be bound to 0.0.0.0, and it is better to do this at the kind cluster creation step.

So I've modified the kind cluster creation script to solve this issue.
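A minimal sketch of the kind of change being described, assuming the script creates the cluster from an inline kind config (the patch values follow the kube-prometheus troubleshooting guidance; the actual script changed in this PR may differ):

```sh
# Sketch: create a kind cluster whose control-plane components and kube-proxy
# expose their metrics endpoints on 0.0.0.0 instead of localhost only.
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    metadata:
      name: config
    controllerManager:
      extraArgs:
        bind-address: "0.0.0.0"
    scheduler:
      extraArgs:
        bind-address: "0.0.0.0"
    etcd:
      local:
        extraArgs:
          listen-metrics-urls: "http://0.0.0.0:2381"
  - |
    kind: KubeProxyConfiguration
    metadata:
      name: config
    metricsBindAddress: "0.0.0.0"
EOF
```

With these patches the kube-controller-manager, kube-scheduler, etcd, and kube-proxy metrics endpoints become reachable from the Prometheus pods, which is why the corresponding targets report UP.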

Signed-off-by: engchina [email protected]

engchina and others added 3 commits March 26, 2023 08:41
Tweaked comments to not appear when copied via the copy command.
desagar (Contributor) commented Apr 20, 2023

Hello, thank you for researching this issue and for your pull request. Your change does solve the problem of these monitors being down in the Prometheus operator.

However, Kind is not meant for production use cases, and we don't want to recommend listening on 0.0.0.0 by default: those four pods use host networking in a Kind cluster, so binding to 0.0.0.0 exposes their endpoints on the host, which is a potential security risk. For example, Kind recommends against using 0.0.0.0 for the API server address in this document: https://kind.sigs.k8s.io/docs/user/configuration/#networking

Individual users can explicitly enable these settings based on the troubleshooting information if they want metrics for these services and determine that it is safe to do in their environment.
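For reference, a user who only wants the kube-proxy target could opt in on an existing cluster roughly like this (a sketch based on the troubleshooting document; it assumes the kubeadm-generated kube-proxy ConfigMap still contains the default empty metricsBindAddress):

```sh
# Rewrite kube-proxy's metricsBindAddress so its metrics endpoint is reachable
# from Prometheus, then restart the daemonset so the change takes effect.
kubectl -n kube-system get configmap kube-proxy -o yaml \
  | sed 's/metricsBindAddress: ""/metricsBindAddress: "0.0.0.0"/' \
  | kubectl apply -f -
kubectl -n kube-system rollout restart daemonset kube-proxy
```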

This GitHub issue has some discussion of this topic: prometheus-community/helm-charts#204

desagar closed this Apr 20, 2023