No metrics with 5.3.0-latest.2021125.2 #4435
Comments
I have noticed a delay; you may have to wait a few minutes for the node CPU/memory metrics to come up.
Of course. I noticed that with the previous versions. I switched back to 5.2.7 (sorry, I mistyped in the explanation above and put 4.2.7) on Windows 10, because I have work to do. But I left my Linux version up for hours and still no metrics.
OK, thanks. Can you check whether this is true for other clusters, to see if it is specific to your GKE cluster? The metrics work for me on Linux and Windows for various clusters.
All clusters (3) we have are GKE. It does not work on any of them. I installed 5.3.0 on Win 10 again and left it running for 30 minutes (so far), and still no metrics.
Thanks for the further details. For logs: https://docs.k8slens.dev/main/faq/#where-can-i-find-application-logs. If you can try one more thing: go to Cluster Settings -> Metrics and set the Prometheus option to Helm (assuming it is currently Auto Detect). It shouldn't take more than 2 minutes for the metrics to appear (otherwise the problem is confirmed).
If I set the Prometheus option to Helm, it does not recognize it at all, as if Prometheus was never installed. I tried both with an empty Prometheus service address and with a hard-coded one (monitoring/pcore-kube-prometheus-stac-prometheus:9090).
Sorry, I didn't realize kube-prometheus-stack = prometheus operator. So neither Auto Detect nor Prometheus Operator gives you full metrics. You've demonstrated it's clearly a 5.3 issue; we'll try to reproduce. Someone else has noticed an issue around this too: #4436
The last piece of information that comes to my mind: our clusters were at kube-prometheus-stack-18.0.5 (that is what `helm -n monitoring list` showed on one of our prod servers). Yesterday, I updated the Prometheus stack thinking that the older version could be causing the problem; as of today, our dev server is at kube-prometheus-stack-20.0.1, but the issue is still there. I hope this helps, thanks.
Exact same issue using AKS (Azure).
Where can we download previous versions? That way we can find the version where it broke... |
Also seeing this issue with Azure AKS and Prometheus installed on our clusters.
Confirmed with the microk8s distribution of k8s at 1.22.3, with prom installed through the built-in addon.
Since Lens 5.3.0, same here with AKS, k8s 1.20.7. I'm only able to see memory metrics.
Having the same issue with Lens 5.3.1-latest.20211130.1; we are using kube-prometheus-stack-21.0.0 with app version 0.52.0.
I'm seeing the same behavior on the new v5.3.2-latest.20211201.1.
I actually downgraded because of this particular bug. Any reason why this is being postponed?
I'm also affected by this issue, looking forward to it being fixed!
It looks like, for one reason or another, this issue is not going to be resolved soon. Is there a way to tell Lens on Windows to stop checking for updates? Every morning when I start it, it tells me that there is a newer version, even though I click the No button every time. It is annoying. Obviously, I am not going to upgrade from a working version to a broken one.
@georok I just renamed the updater config file, whose contents look like this:

```yaml
provider: s3
bucket: lens-binaries
path: /ide
channel: latest
updaterCacheDirName: lens-updater
```
Just realised you specifically asked about the Windows version; sorry, I don't have that at hand.
Thank you @Paragon1970. I found it on Windows and will give it a try, thanks.
Same issue with Lens 5.3.3-latest.20211223.1 on EKS.
I went looking around in https://github.com/lensapp/lens/blob/master/src/main/prometheus/operator.ts to see which metrics Lens queries. Its node queries group by a node-name label, but with recent charts there is no longer a "node" label carrying the hostname on the node-exporter metrics, and this is why metrics aren't showing up in Lens. A similar issue exists for the kube-state-metrics metrics. As a workaround, you can add the "node" label by adding a relabeling to your Helm values (see the sketch below),
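A minimal sketch of that values change, assuming the kube-prometheus-stack chart layout; the exact key paths vary by chart version, so verify them against your chart's values.yaml. Later comments in this thread disagree on whether Lens expects `node` or `kubernetes_node`, so set targetLabel to whichever your Lens version queries:

```yaml
# Hypothetical values.yaml fragment for kube-prometheus-stack.
prometheus-node-exporter:
  prometheus:
    monitor:
      relabelings:
        # Copy the pod's node name into a label that Lens can group on.
        - action: replace
          regex: (.*)
          replacement: $1
          sourceLabels:
            - __meta_kubernetes_pod_node_name
          targetLabel: kubernetes_node
```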
or patch the existing ServiceMonitor objects for both to add something similar:
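A sketch of the equivalent edit on an existing ServiceMonitor; the name, namespace, selector, and port below are illustrative assumptions rather than values from this thread:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-node-exporter   # illustrative name
  namespace: monitoring            # illustrative namespace
spec:
  selector:
    matchLabels:
      app: prometheus-node-exporter
  endpoints:
    - port: metrics                # illustrative port name
      relabelings:
        # Same relabeling as in the values-based variant above.
        - action: replace
          regex: (.*)
          replacement: $1
          sourceLabels:
            - __meta_kubernetes_pod_node_name
          targetLabel: kubernetes_node
```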
When installing kube-prometheus-stack using Helm, I had to do this before prometheus-node-exporter would run properly (and before the metrics would appear).
Still no metrics with 5.3.4 and the latest node-exporter helm chart :(
node-exporter just allows Prometheus to scrape node metrics. How did you install Prometheus?
Can confirm that on v5.3.4 node metrics are being displayed on macOS. Looks like the fix landed in this commit: 35c8b76
TL;DR: check whether the metrics are in Prometheus in the first place. I was having the same issue, but I dug a bit deeper and found out that, because I was using k3d to spin up my k3s nodes with memory limits, I had no values in Prometheus for memory usage. I'm using the Helm chart, and I was spinning up my local cluster with k3d's memory-limit options.
That implementation uses fake meminfo files, so first make sure your nodes actually report real values: on a node shell, check the reported memory (for example in /proc/meminfo).
Next, you actually want that value in Prometheus: look for the node memory metrics (for example node_memory_MemAvailable_bytes). After dropping the memory limits, it works out of the box again without any additional configuration, so I need to find another way to limit the memory of my k3s nodes.
I can see metrics again now in 5.4.3 (Windows).
I installed each chart independently, but everything was working until I updated both the Lens and Prometheus versions, so I'm not quite sure which one is responsible. I also checked in Prometheus, and the node-exporter metrics themselves (like node_memory_MemAvailable_bytes) are available. EDIT: I tried to downgrade to 5.2.5 but it's still not working, so I suppose this problem is related to the Prometheus release and not Lens itself.
Hello, 5.3.4 works well, many thanks! Again, this was a two-sided issue: one part had to be addressed by Lens, and the 'prom stack' devs had to fix the other part as well (as @nevalla mentioned earlier): prometheus-community/helm-charts#1631. (Or you may work around it on your own as discussed in this thread.) Please ensure your values are correct (see the relabeling sketched earlier in the thread).
AFAIK the issue appeared starting with the 25.x.y helm chart.
I get an access denied error; do you still have this file somewhere?
IMO, the problem is that I don't have a "kubernetes_node" label by default on the node-exporter metrics: I only have "node", referencing the node name, so the group-by is not working.
Yep, confirmed: I relabeled "node" to "kubernetes_node" and everything is back. This was changed on the Prometheus side 2 months ago, so I think it should be adapted somehow on the Lens side: prometheus-community/helm-charts@df8add6#diff-a7d2f872af425efb23e365d063a23bdfcc33de8728c2b7bb4d536b7b8e80981aL1564
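For reference, one way to express that relabel in kube-prometheus-stack Helm values; this is a sketch that assumes your chart already attaches a `node` target label, as described above, and that the key layout matches your chart version:

```yaml
# Hypothetical values fragment: copy the existing "node" label into
# the "kubernetes_node" label that Lens groups by.
prometheus-node-exporter:
  prometheus:
    monitor:
      metricRelabelings:
        - action: replace
          sourceLabels: [node]
          targetLabel: kubernetes_node
```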
In which release is this fix included? I'm still getting the same issue on macOS Monterey 12.2.1 with Lens 5.3.4-latest.20220120.1, and I could not find a way to install 5.2.7. As mentioned in a previous message, the dmg is not downloadable (access denied) :-(. Kindly help.
@balakumarpg the fix addressed a particular issue that was introduced after 5.2.7. If you have not used a version prior to 5.2.7 and you are on the latest version and now getting this error, it is most likely unrelated to this thread and a problem with your Prometheus configuration.
It hasn't actually been fixed. After the latest "fix" was introduced in 5.3.4, the metrics appeared again when running against older versions of kube-prometheus-stack. Over the weekend, I upgraded to the latest version (kube-prometheus-stack-32.2.0) and the metrics disappeared again. So, no, it has not been truly fixed; it does not always work out of the box, and the prometheus-node-exporter relabeling values discussed above still have to be applied.
Still an issue in 5.4.1 with Prometheus 15.5.3; the relabeling above did not work for me. With the latest Prometheus 14.x (14.12.0), it works out of the box.
Can confirm, this works with Prometheus 14.12.0, installing the Helm chart pinned to that version.
Using the following setup, metrics were not showing out of the box. I had to apply the instructions mentioned earlier in this thread for it to finally work.
I do not really know why or how this is required, but it has been a painful experience to fix, as it was working out of the box before. I hope this helps someone.
@ypicard The reason that this is required is that in version 35 of kube-prometheus-stack they changed the default labeling for node-exporter metrics, so the node-name label Lens expects is no longer attached by default.
Did someone make it work with the latest Lens? :(
This saved my a$$, thanks a lot!! I used the latest kube-prometheus-stack chart 56.8.2 (v0.71.2 app version).
Describe the bug
After installing the 5.3.0-latest.2021125.2 version (both on Windows 10 and Rocky Linux 8.5), the metrics for GKE-based clusters are not available.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The metrics should be available.
Screenshots
If applicable, add screenshots to help explain your problem.
Environment (please complete the following information):
Kubeconfig:
5.2.7 works with the exact same kubeconfig, so the kubeconfig is not the issue.
Additional context
Add any other context about the problem here.