Metrics doesn't work (loadKubeMetrics failed) 404 page not found #6510

Open
OHaimanov opened this issue Nov 3, 2022 · 48 comments
Labels
area/metrics (All the things related to metrics), bug (Something isn't working)

Comments

@OHaimanov

Describe the bug
The Lens app doesn't display metrics; the developer tools console shows the message (loadKubeMetrics failed error: "404 page not found\n").
Tested with both the embedded Lens Metrics and kube-prometheus-stack (chart version: 41.7.3).

To Reproduce
Steps to reproduce the behavior:

  1. Go to cluster
  2. Click on Settings -> Extensions -> Lens metrics
  3. Enable all metrics
  4. Click on nodes, cluster, pods

Expected behavior
Metrics for Cluster, Nodes, Pods are shown

Screenshots
(screenshot attached)

Environment (please complete the following information):

  • Lens Version: 2022.10.311317-latest
  • OS: MacOS
  • EKS: v1.21.14
OHaimanov added the bug label on Nov 3, 2022
@OHaimanov
Author

Nokel81 added the area/metrics label on Nov 7, 2022
@Nokel81
Collaborator

Nokel81 commented Nov 7, 2022

  1. Please check that node-exporter-stack is enabled.
  2. Those metrics are not retrieved from Prometheus but instead are retrieved from kube metrics. We query /apis/metrics.k8s.io/v1beta1/pods/<namespace> for those metrics, so if that resource path doesn't exist that is why you are not getting metrics.
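A quick way to check whether that resource path is actually served on your cluster (a sketch, assuming kubectl points at the same cluster):

# Verify that the resource metrics API group is served (requires metrics-server)
kubectl get --raw /apis/metrics.k8s.io/v1beta1
# Or, equivalently:
kubectl top nodes
kubectl top pods -A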

@OHaimanov
Author

OHaimanov commented Nov 7, 2022

Please check that node-exporter-stack is enabled.
Those metrics are not retrieved from Prometheus but instead are retrieved from kube metrics. We query /apis/metrics.k8s.io/v1beta1/pods/ for those metrics, so if that resource path doesn't exist that is why you are not getting metrics.

Yes, node-exporter-stack is installed, but the Metrics Server is not installed by default in AWS EKS (https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html)
So, as I understand it, we additionally need to install the Metrics Server in our cluster to make this work, am I right?

@Nokel81
Collaborator

Nokel81 commented Nov 7, 2022

Yes, if you want metrics to show up there you will need the metrics server set up. Though, from the stack trace, that should only have shown up when you open a node details panel.

If you are asking about why the metrics bars are not showing up on the node list view that is a different issue...
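For reference, the metrics server install that the AWS guide linked above describes amounts to applying the upstream manifest:

# Install metrics-server from the upstream release manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Confirm the resource metrics API becomes available
kubectl get apiservice v1beta1.metrics.k8s.io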

@OHaimanov
Author

OHaimanov commented Nov 7, 2022

If you are asking about why the metrics bars are not showing up on the node list view that is a different issue...

And what is the problem?

@Nokel81
Collaborator

Nokel81 commented Nov 7, 2022

Probably that there are no metrics to retrieve or the provider is misconfigured.

@OHaimanov
Author

Probably that there are no metrics to retrieve or the provider is misconfigured.

In Grafana all data is displayed without issues. What configuration can I paste to help you?

@Nokel81
Collaborator

Nokel81 commented Nov 7, 2022

Under cluster settings what is the "Metrics Provider" configured to?

@OHaimanov
Author

OHaimanov commented Nov 7, 2022

Under cluster settings what is the "Metrics Provider" configured to?

I tried different ones, but right now it is set to Prometheus Operator with an empty "PROMETHEUS SERVICE ADDRESS" field.
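In case an explicit address is needed, a sketch of how to find the service and the address format Lens expects (namespace/service:port; the names below come from a default kube-prometheus-stack install and may differ):

# List the Prometheus-related services created by kube-prometheus-stack
kubectl get svc -n monitoring
# The Lens setting then takes the form namespace/service:port, e.g.
#   monitoring/kube-prometheus-stack-prometheus:9090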

@Nokel81
Collaborator

Nokel81 commented Nov 7, 2022

Which version of prometheus are you currently using?

@OHaimanov
Author

Which version of prometheus are you currently using?

Chart: kube-prometheus-stack
Version: 41.7.3

@Nokel81
Collaborator

Nokel81 commented Nov 7, 2022

Can you try using PrometheusHelm?

@OHaimanov
Author

PrometheusHelm

You mean this one? https://artifacthub.io/packages/helm/prometheus-community/prometheus

@Nokel81
Collaborator

Nokel81 commented Nov 7, 2022

No, I mean changing this setting:

(screenshot attached)

@OHaimanov
Author

OHaimanov commented Nov 7, 2022

Ah, got it. Do I need to put the path to the service there, or leave it blank?

@OHaimanov
Author

(screenshot attached)

No result with this config

@trallnag

trallnag commented Nov 9, 2022

I think something changed with the last update. Yesterday it was still working for me. Now after the update with the same configuration for Prometheus Operator no metrics are showing up.

@Nokel81
Collaborator

Nokel81 commented Nov 9, 2022

Nothing has changed from an open-lens perspective with regards to metrics recently. The last real change was https://github.com/lensapp/lens/releases/tag/v6.1.0-alpha.1, about 2 months ago.

@trallnag

trallnag commented Nov 9, 2022

Okay, thanks for the quick response

@trallnag

trallnag commented Nov 9, 2022

@OHaimanov, since you are using the kube-prometheus-stack chart, the "Prometheus Operator" option should be the right one. The following config worked for me in the past. For some reason it stopped working today... Although I must say that I have been tinkering around with kube-prometheus-stack.

(screenshot attached)

Note that the /prometheus suffix is only necessary if Prometheus is served under that subpath.

@OHaimanov
Author

@OHaimanov, since you are using the kube-prometheus-stack the PrometheusOperator option should be the right one.

Yeah, I also think so, and I tried this option as well without any result.
Also, I installed the metrics server, and in Rancher it started showing CPU and memory. In Lens the error from the console disappeared, but there is still no data in the UI.

@OHaimanov
Author

Does Lens write any logs to the system?

@Nokel81
Collaborator

Nokel81 commented Nov 9, 2022

Yes it does. It should be under ~/Library/Logs/Lens
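A quick way to follow them while reproducing the issue (the exact file names are an assumption):

# Tail all Lens log files on macOS
tail -f ~/Library/Logs/Lens/*.log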

@OHaimanov
Author

Yes it does. It should be under ~/Library/Logs/Lens

warn: [METRICS-ROUTE]: failed to get metrics for clusterId=: Metrics not available {"stack":"Error: Metrics not available\n at /Applications/Lens.app/Contents/Resources/app.asar/static/build/main.js:1:416744\n at process.processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Promise.all (index 0)\n at async /Applications/Lens.app/Contents/Resources/app.asar/static/build/main.js:1:417492\n at async Object.route (/Applications/Lens.app/Contents/Resources/app.asar/static/build/main.js:1:407233)\n at async a.route (/Applications/Lens.app/Contents/Resources/app.asar/static/build/main.js:1:409650)"}

@Nokel81
Collaborator

Nokel81 commented Nov 9, 2022

Sigh... I guess the best I can do is get some better logging into the next version we release. Sorry about this.

@OHaimanov
Author

OHaimanov commented Nov 9, 2022

Sigh... I guess the best I can do is get some better logging into the next version we release. Sorry about this.

Ok, np, let's wait for the new release :)
JFYI: Kubecost, for example, fetches data without issues with http://kube-prometheus-stack-prometheus.monitoring.svc:9090

@trallnag

trallnag commented Nov 9, 2022

@Nokel81, where are the logs located on Windows?

@Nokel81
Collaborator

Nokel81 commented Nov 9, 2022

Should be %AppData%\Lens\Logs iirc

@trallnag

trallnag commented Nov 9, 2022

Just tried it again and now the metrics reappeared. No idea what has changed

@OHaimanov
Author

@Nokel81 do the metrics requests use kube-proxy in the flow?

@Nokel81
Collaborator

Nokel81 commented Nov 9, 2022

Yes, all our requests go through a kube proxy
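One way to exercise the API-server service proxy leg of that path manually (the namespace and service name are assumptions for a Prometheus Operator install):

# Query Prometheus through the Kubernetes API server's service proxy,
# the same kind of path Lens's requests take
kubectl get --raw "/api/v1/namespaces/monitoring/services/prometheus-operated:9090/proxy/-/ready"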

@OHaimanov
Author

Yes, all our requests go through a kube proxy

Hmmm, ok let me check one thing

@OHaimanov
Author

OHaimanov commented Nov 9, 2022

Yes, all our requests go through a kube proxy

Nah, my idea related to aws/containers-roadmap#657 didn't work out. I've updated the cluster to 1.22 and kube-proxy is green in Prometheus, but in Lens there is still no result.

@imageschool

imageschool commented Nov 10, 2022

@trallnag
O.o if metrics appeared, what are your current settings (i.e. different from default) on both [Server Setting - Metrics] & Helm?

(+) If you did, how did you apply https://github.com/lensapp/lens/blob/master/troubleshooting/custom-prometheus.md ?

@OHaimanov
Author

OHaimanov commented Nov 10, 2022

@trallnag O.o if metrics appeared, what are your current settings (i.e. different from default) on both [Server Setting - Metrics] & Helm?

(+) If you did, how did you apply https://github.com/lensapp/lens/blob/master/troubleshooting/custom-prometheus.md ?

Just add this to the values of kube-prometheus-stack:

prometheus-node-exporter:
  prometheus:
    monitor:
      metricRelabelings:
      - action: replace
        regex: (.*)
        replacement: $1
        sourceLabels:
        - __meta_kubernetes_pod_node_name
        targetLabel: kubernetes_node

kubelet:
  serviceMonitor:
    resourcePath: "/metrics/resource"
    metricRelabelings:
    - action: replace
      sourceLabels:
      - node
      targetLabel: instance

@imageschool

@trallnag O.o if metrics appeared, what are your current settings (i.e different to default) on both [Server Setting - Metrics] & Helm?
(+) If you did, how did you apply https://github.com/lensapp/lens/blob/master/troubleshooting/custom-prometheus.md ?

Just add this to the values of kube-prometheus-stack (config quoted above)

Yeah, I applied the same values for the kube-prometheus-stack Helm chart, as below:

prometheus-node-exporter:
  prometheus:
    monitor:
      enabled: true
      metricRelabelings:
      - action: replace
        regex: (.*)
        replacement: $1
        sourceLabels:
        - __meta_kubernetes_pod_node_name
        targetLabel: kubernetes_node

and

kubelet:
  enabled: true
  serviceMonitor:
    metricRelabelings:
    - action: replace
      sourceLabels:
      - node
      targetLabel: instance

(screenshot attached)

Mine still does not work :/ Locally, similar to all of you above, it shows the logs below.

warn: [METRICS-ROUTE]: failed to get metrics for clusterId=xxx: Metrics not available {"stack":"Error: Metrics not available\n    at /Applications/Lens.app/Contents/Resources/app.asar/static/build/main.js:1:416757\n    a
t process.processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async Promise.all (index 1)\n    at async /Applications/Lens.app/Contents/Resources/app.asar/static/build/main.js:1:417505\n    at async Object.route (/Applications
/Lens.app/Contents/Resources/app.asar/static/build/main.js:1:407246)\n    at async a.route (/Applications/Lens.app/Contents/Resources/app.asar/static/build/main.js:1:409663)"}

@Nokel81
Collaborator

Nokel81 commented Nov 14, 2022

The "Prometheus Operator" provider looks for a kube service with the label selector of operated-prometheus=true.
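A quick check for whether such a service exists:

# List services carrying the label the "Prometheus Operator" provider selects on
kubectl get svc -A -l operated-prometheus=true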

@OHaimanov
Author

The "Prometheus Operator" provider looks for a kube service with the label selector of operated-prometheus=true.

(screenshot attached)

@imageschool

operated-prometheus=true

@Nokel81
It is already enabled by default for me as well.

(screenshot attached)

@imageschool

imageschool commented Nov 16, 2022

Is there any (alternative) solution to fix this? ;(
I miss my metrics on Lens :P

@Nokel81
Collaborator

Nokel81 commented Nov 16, 2022

Until #6576 is merged it is hard to diagnose why this is happening.

@imageschool

@Nokel81 I have the following error stack.

[0] warn:    ┏ [METRICS-ROUTE]: failed to get metrics for clusterId=bf5a8d06fa892a342094edf8835c4b42: Metrics not available +6ms
[0] warn:    ┃ [ 1] Error: Metrics not available
[0] warn:    ┃ [ 2]     at loadMetricHelper (/Users/XXXX/Downloads/lens-6.2.0/static/build/main.js:45121:31)
[0] warn:    ┃ [ 3]     at process.processTicksAndRejections (node:internal/process/task_queues:96:5)
[0] warn:    ┃ [ 4]     at async Promise.all (index 0)
[0] warn:    ┃ [ 5]     at async /Users/XXXX/Downloads/lens-6.2.0/static/build/main.js:45162:32
[0] warn:    ┃ [ 6]     at async Object.route (/Users/XXXX/Downloads/lens-6.2.0/static/build/main.js:43974:32)
[0] warn:    ┃ [ 7]     at async Router.route (/Users/XXXX/Downloads/lens-6.2.0/static/build/main.js:44247:9)
[0] warn:    ┃ [ 8]     at async LensProxy.handleRequest (/Users/XXXX/Downloads/lens-6.2.0/static/build/main.js:41912:9)
[0] warn:    ┃ [ 9] Cause: RequestError: Error: socket hang up
[0] warn:    ┃ [10]     RequestError: Error: socket hang up
[0] warn:    ┃ [11]         at new RequestError (/Users/XXXX/Downloads/lens-6.2.0/node_modules/request-promise-core/lib/errors.js:14:15)
[0] warn:    ┃ [12]         at plumbing.callback (/Users/XXXX/Downloads/lens-6.2.0/node_modules/request-promise-core/lib/plumbing.js:87:29)
[0] warn:    ┃ [13]         at Request.RP$callback [as _callback] (/Users/XXXX/Downloads/lens-6.2.0/node_modules/request-promise-core/lib/plumbing.js:46:31)
[0] warn:    ┃ [14]         at self.callback (/Users/XXXX/Downloads/lens-6.2.0/node_modules/request/request.js:185:22)
[0] warn:    ┃ [15]         at Request.emit (node:events:526:28)
[0] warn:    ┃ [16]         at Request.emit (node:domain:475:12)
[0] warn:    ┃ [17]         at Request.onRequestError (/Users/XXXX/Downloads/lens-6.2.0/node_modules/request/request.js:877:8)
[0] warn:    ┃ [18]         at ClientRequest.emit (node:events:526:28)
[0] warn:    ┃ [19]         at ClientRequest.emit (node:domain:475:12)
[0] warn:    ┃ [20]         at Socket.socketOnEnd (node:_http_client:466:9) {
[0] warn:    ┃ [21]       cause: Error: socket hang up
[0] warn:    ┃ [22]           at connResetException (node:internal/errors:691:14)
[0] warn:    ┃ [23]           at Socket.socketOnEnd (node:_http_client:466:23)
[0] warn:    ┃ [24]           at Socket.emit (node:events:538:35)
[0] warn:    ┃ [25]           at Socket.emit (node:domain:475:12)
[0] warn:    ┃ [26]           at endReadableNT (node:internal/streams/readable:1345:12)
[0] warn:    ┃ [27]           at process.processTicksAndRejections (node:internal/process/task_queues:83:21) {
[0] warn:    ┃ [28]         code: 'ECONNRESET'
[0] warn:    ┃ [29]       },
[0] warn:    ┃ [30]       error: Error: socket hang up
[0] warn:    ┃ [31]           at connResetException (node:internal/errors:691:14)
[0] warn:    ┃ [32]           at Socket.socketOnEnd (node:_http_client:466:23)
[0] warn:    ┃ [33]           at Socket.emit (node:events:538:35)
[0] warn:    ┃ [34]           at Socket.emit (node:domain:475:12)
[0] warn:    ┃ [35]           at endReadableNT (node:internal/streams/readable:1345:12)
[0] warn:    ┃ [36]           at process.processTicksAndRejections (node:internal/process/task_queues:83:21) {
[0] warn:    ┃ [37]         code: 'ECONNRESET'
[0] warn:    ┃ [38]       },
[0] warn:    ┃ [39]       options: {
[0] warn:    ┃ [40]         timeout: 0,
[0] warn:    ┃ [41]         resolveWithFullResponse: false,
[0] warn:    ┃ [42]         json: true,
[0] warn:    ┃ [43]         method: 'POST',
[0] warn:    ┃ [44]         form: {
[0] warn:    ┃ [45]           query: 'sum(node_memory_MemTotal_bytes * on (pod,namespace) group_left(node) kube_pod_info{node=~"lambda-server|lambda-server-2|lambda-server-cpu-1"} - (node_memory_MemFree_bytes * on (pod,namespace) group_left(node) kube_pod_info{node=~"lambda-server|lambda-server-2|lambda-server-cpu-1"} + node_memory_Buffers_bytes * on (pod,namespace) group_left(node) kube_pod_info{node=~"lambda-server|lambda-server-2|lambda-server-cpu-1"} + node_memory_Cached_bytes * on (pod,namespace) group_left(node) kube_pod_info{node=~"lambda-server|lambda-server-2|lambda-server-cpu-1"}))',
[0] warn:    ┃ [46]           start: '1668735048',
[0] warn:    ┃ [47]           end: '1668738648',
[0] warn:    ┃ [48]           step: '60',
[0] warn:    ┃ [49]           kubernetes_namespace: ''
[0] warn:    ┃ [50]         },
[0] warn:    ┃ [51]         headers: { Host: 'bf5a8d06fa892a342094edf8835c4b42.localhost:53298' },
[0] warn:    ┃ [52]         uri: 'http://localhost:53298/api-kube/api/v1/namespaces/prometheus/services/prometheus-operated:9090/proxy/api/v1/query_range',
[0] warn:    ┃ [53]         callback: [Function: RP$callback],
[0] warn:    ┃ [54]         transform: undefined,
[0] warn:    ┃ [55]         simple: true,
[0] warn:    ┃ [56]         transform2xxOnly: false
[0] warn:    ┃ [57]       },
warn:    ┃ [58]       response: undefined
[0] warn:    ┃ [59]     }
[0] warn:    ┃ [60] Cause: Error: socket hang up
[0] warn:    ┃ [61]     Error: socket hang up
[0] warn:    ┃ [62]         at connResetException (node:internal/errors:691:14)
[0] warn:    ┃ [63]         at Socket.socketOnEnd (node:_http_client:466:23)
[0] warn:    ┃ [64]         at Socket.emit (node:events:538:35)
[0] warn:    ┃ [65]         at Socket.emit (node:domain:475:12)
[0] warn:    ┃ [66]         at endReadableNT (node:internal/streams/readable:1345:12)
[0] warn:    ┃ [67]         at process.processTicksAndRejections (node:internal/process/task_queues:83:21) {
[0] warn:    ┃ [68]       code: 'ECONNRESET'
[0] warn:    ┗ [69]     }

@admssa

admssa commented Jun 27, 2023

Have you had any luck with this? I can see that Lens was able to connect to the remote Prometheus service with kubectl port-forward; I can even see the metrics in my web browser, but not in Lens itself.

warn: [METRICS-ROUTE]: failed to get metrics for clusterId=ccaaeb9a371eb328ede51fdf8da7e1ed: Metrics not available {"stack":"Error: Metrics not available\n    at loadMetricHelper (/Applications/Lens.app/Contents/Resources/app.asar/node_modules/@k8slens/core/static/build/library/main.js:49771:31)\n    at process.processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async Promise.all (index 15)\n    at async /Applications/Lens.app/Contents/Resources/app.asar/node_modules/@k8slens/core/static/build/library/main.js:49816:36\n    at async Object.route (/Applications/Lens.app/Contents/Resources/app.asar/node_modules/@k8slens/core/static/build/library/main.js:48738:32)\n    at async Router.route (/Applications/Lens.app/Contents/Resources/app.asar/node_modules/@k8slens/core/static/build/library/main.js:48969:9)\n    at async LensProxy.handleRequest (/Applications/Lens.app/Contents/Resources/app.asar/node_modules/@k8slens/core/static/build/library/main.js:46327:9)"}
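For comparison, a manual check outside of Lens (service name, namespace and port are assumptions for a typical Prometheus Operator install):

# Port-forward straight to the Prometheus service...
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090 &
# ...and confirm it answers API queries
curl -s 'http://localhost:9090/api/v1/query?query=up'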

@admssa

admssa commented Aug 31, 2023

Funny thing: it started working when we drained the nodes under Prometheus and Prometheus moved to another node. For 2 clusters it moved to a different kind of node; for another 2 clusters it moved to the same kind of node, but a new one. It's working for all our clusters now.

@JWebDev

JWebDev commented Sep 19, 2023

Hello.
I have exactly the same problem.
Is there a solution? Can someone suggest something?
@admssa
@OHaimanov
@imageschool
@trallnag

This solution doesn't work either: https://github.com/lensapp/lens/blob/master/troubleshooting/custom-prometheus.md#manual
I looked at several of the metrics from here and I have them all. Probably the labels don't match somewhere: https://github.com/lensapp/lens/tree/master/packages/technical-features/prometheus/src

There is a connection to the service; Lens automatically detects it.
(screenshot attached)

If anyone needs it, here is an example of the service. Don't forget to add appropriate labels to the deployment.

apiVersion: v1
kind: Service
metadata:
  name: lens-metrics-service
  namespace: prometheus
  labels:
    app: prometheus
    component: server
    heritage: Helm
    release: prometheus
spec:
  selector:
    app: prometheus
    component: server
    heritage: Helm
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9090
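To confirm the selector actually matches the Prometheus server pods, it can also help to check that the service gets endpoints:

# An empty ENDPOINTS column means the selector matches no running pods
kubectl -n prometheus get endpoints lens-metrics-service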

Now let's move on to the errors. I won't show anything new here.

Lens logs.

info: [CONTEXT-HANDLER]: using helm14 as prometheus provider for clusterId=1c2fcf927d9e8fc6637202d3fdbcc498
warn: [METRICS-ROUTE]: failed to get metrics for clusterId=1c2fcf927d9e8fc6637202d3fdbcc498: Metrics not available {"stack":"Error: Metrics not available\n    at loadMetricHelper (C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:35829:31)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Promise.all (index 3)\n    at async C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:35874:36\n    at async Object.route (C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:34859:32)\n    at async Router.route (C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:35090:9)\n    at async LensProxy.handleRequest (C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:33367:13)"}
info: [CLUSTER]: refresh {"accessible":true,"disconnected":false,"id":"1c2fcf927d9e8fc6637202d3fdbcc333","name":"infra","online":true,"ready":true}
info: [CLUSTER]: refresh {"accessible":true,"disconnected":false,"id":"8843900644ce20e50683bc6d40a31bfb","name":"app","online":true,"ready":true}
info: [CONTEXT-HANDLER]: using helm14 as prometheus provider for clusterId=1c2fcf927d9e8fc6637202d3fdbcc333
warn: [METRICS-ROUTE]: failed to get metrics for clusterId=1c2fcf927d9e8fc6637202d3fdbcc333: Metrics not available {"stack":"Error: Metrics not available\n    at loadMetricHelper (C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:35829:31)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Promise.all (index 6)\n    at async C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:35874:36\n    at async Object.route (C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:34859:32)\n    at async Router.route (C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:35090:9)\n    at async LensProxy.handleRequest (C:\\Users\\JDev\\AppData\\Local\\Programs\\Lens\\resources\\app.asar\\node_modules\\@lensapp\\core\\static\\build\\library\\main.js:33367:13)"}
info: [CLUSTER]: refresh {"accessible":true,"disconnected":false,"id":"1c2fcf927d9e8fc6637202d3fdbcc333","name":"infra","online":true,"ready":true}

Lens developer console.
(screenshot attached)

@Nokel81 could you please tell me how I can see which request (metric) has an error and what exactly is missing? If I know which metric isn't working, I can check what's wrong (probably different labels).

Prometheus installation: https://artifacthub.io/packages/helm/prometheus-community/prometheus

At the moment, the latest versions of Lens and Prometheus are installed.
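Until better logging lands, one way to narrow down which metric is missing is to run the same PromQL that Lens sends (a simplified form of the node memory query visible in a stack trace earlier in this thread) directly against Prometheus and see which series come back empty; the service name and port below match the example Service above:

# Forward the service defined above (service port 80 -> Prometheus 9090)
kubectl -n prometheus port-forward svc/lens-metrics-service 9090:80 &
# Run one of the queries Lens issues and check whether any data comes back
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(node_memory_MemTotal_bytes * on (pod,namespace) group_left(node) kube_pod_info)'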

@olafrv

olafrv commented Feb 29, 2024

Same for me in AWS EKS K8S v1.25.11-eks.

warn: [METRICS-ROUTE]: failed to get metrics for clusterId=41b09429a38178ebaa826491a8a6ac2b: Metrics not available {"stack":"Error: Metrics not available\n at loadMetricHelper (/Applications/Lens.app/Contents/Resources/app.asar/node_modules/@lensapp/core-main/dist/index.js:11865:19)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async Promise.all (index 0)\n at async /Applications/Lens.app/Contents/Resources/app.asar/node_modules/@lensapp/core-main/dist/index.js:11908:28\n at async Object.route (/Applications/Lens.app/Contents/Resources/app.asar/node_modules/@lensapp/core-main/dist/index.js:3583:31)\n at async Router.route (/Applications/Lens.app/Contents/Resources/app.asar/node_modules/@lensapp/core-main/dist/index.js:3417:5)\n at async LensProxy.handleRequest (/Applications/Lens.app/Contents/Resources/app.asar/node_modules/@lensapp/core-main/dist/index.js:3384:7)"}

Seems related to #7888.

@germanattanasio

germanattanasio commented May 8, 2024

If you are referring to an EKS cluster, the issue might relate to the node-to-node security group. In the AWS Terraform module for EKS, you can specify additional rules for the security group that manages node-to-node communication as follows:

node_security_group_additional_rules = {
  # Allow all traffic between nodes
  ingress_self_all = {
    description = "Allow all traffic between nodes on all ports/protocols"
    protocol    = "all"
    from_port   = 0 
    to_port     = 0
    type        = "ingress"
    self        = true
  }

  # Allow HTTP traffic from the cluster API to the nodes
  ingress_api_to_nodes = {
    description                   = "Cluster API access to Kubernetes nodes"
    protocol                      = "tcp"
    from_port                     = 80 
    to_port                       = 65535
    type                          = "ingress"
    source_cluster_security_group = true
  }
}

This change will allow Lens to talk to Prometheus, which could be on a different node.

@ddodoo

ddodoo commented Jul 19, 2024

@germanattanasio

Many thanks, the node_security_group_additional_rules change resolved my metrics display in Lens for the EKS cluster.

Kubernetes version: 1.30
Lens version: 6.10.38
