High memory usage on 0.16.0 #813
How long has the exporter been running on the affected hosts? When initially started, how much memory does the exporter use? Graphing this over time would help. I'd also recommend navigating to http://hostname:9182/debug/pprof/heap and uploading the output here.
Thanks for following up with me. The box has been up for 49 days, so at least that long. Is there any metric that records that information?
I have a similar issue, but we are on 0.15. The go_memstats.*bytes metrics (go_memstats_alloc_bytes and frees_total) are very constant during the week. What goes through the roof is windows_process_private_bytes. And for what it's worth, pprof/heap and pprof/allocs are attached.
I have both versions deployed, and 0.15 also displays the same behavior. Whatever I can do to help, please do let me know, @breed808.
Thanks all, this is really helpful info. I'll go through the heap/alloc dumps and will try to reproduce the issue.
It doesn't happen on every machine, but when it does, it seems to be the service collector that is leaking memory. In general windows_exporter leaks memory anyway; it's just that with the service collector it may leak a huge amount.
@breed808, I don't know what @datamuc's experience is, but it seems like it's mostly servers with SQL Server installed that get this issue. I have the exporter on application servers and database servers, and I have yet to notice anything on the application servers. Another point to add is that I have seen it on both VMs and physical servers.
Unfortunately the heap dumps aren't too helpful: they're showing results similar to the go_memstats graphs above. Are we able to confirm whether the excessive memory consumption still occurs when disabling all collectors that use WMI as a metric source? These are currently cpu_info, fsrmquota, msmq, mssql, remote_fx, service, system, terminal_services, and thermalzone.
OK, I've accessed the URLs with a browser, which gave me text files; using Invoke-WebRequest instead got me the binary output. I did some more tests; in each case there was an infinite PowerShell loop running that hit the /metrics endpoint. In the zip there are two directories: host2 contains the debug/pprof info of a host that doesn't leak very much, host1 is a host with a pretty high leak. I took two snapshots in every test. pprof1 and pprof2 were done with the following config:

collectors:
  enabled: "cpu,cs,logical_disk,net,os,system,textfile,memory,logon,tcp,terminal_services,thermalzone,iis,process"

pprof3 and pprof4 were done with the mentioned collectors disabled, but it was still leaking. The attached graphs show windows_process_private_bytes while the endless loop was running, and then without the loop, with just normal Prometheus scraping.
Hmm, those heap dumps are similar to those submitted earlier, with only a few MB allocated 😞 @datamuc, from the previous comments it appears the service collector is a significant contributor to the leak? I'll see if I can identify the commit or exporter version where the leaking was introduced.
Yes, we have deactivated the service collector globally because it contributed a lot to the leaks. It is better now, but still not good, so my guess is that it has something to do with the number of metrics returned? The service collector has a lot of metrics. I can do some more testing tomorrow, I guess.
I've been able to reproduce this with the script. The memory leak appears to be present in the exporter itself; alternatively, it may be a leak on the Windows side when certain WMI and/or Perflib queries are submitted. The Go memory stats aren't showing any leaks, which isn't too helpful. I'll continue testing to see if I can identify the commit that introduced the leak.
I did some more testing. If I only enable the textfile collector, then there is no leak. I tried to investigate a bit. I've seen that windows_exporter uses a library from perflib_exporter to access most of the performance values. Maybe both implementations, the perflib one and the old WMI one, are leaking? What follows may be wrong; I've only tried to reason about the code. I looked into what telegraf is doing, because it does a similar thing: it opens pdh.dll and calls functions from that library, and before opening a new query it closes the old one. The Close call leads to code where they explicitly mention freeing some memory. Then I looked into perflib_exporter; it comes with a perflib package which provides access to the performance counters. Unlike telegraf, it is not using pdh.dll because that is too high level. Instead it uses syscalls against HKEY_PERFORMANCE_DATA to get the counters. So I googled a bit and found this page:
But I cannot find the word "close" or "Close" anywhere in perflib_exporter's codebase. So I thought, hooray, found the culprit. But then I started a perflib_exporter and it doesn't seem to leak at all. Long story short, I still have no clue what is going on. I think it is still possible that perflib_exporter uses the perflib library a bit differently? Like reusing some data structures while windows_exporter always asks for a new one, or something like that.
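To illustrate the pattern being described here, below is a minimal Go sketch of reading raw performance data from HKEY_PERFORMANCE_DATA and closing the handle afterwards, as the MSDN page quoted later requires. This is not the perflib_exporter code; the buffer sizes and the "Global" value name are just assumptions for the sketch.

//go:build windows

// Minimal sketch: read raw performance data from HKEY_PERFORMANCE_DATA and
// always close the handle afterwards. A missing RegCloseKey here is exactly
// the kind of thing that would leak on the Windows side of the process.
package main

import (
	"fmt"
	"syscall"
)

func queryPerfData(object string) ([]byte, error) {
	name, err := syscall.UTF16PtrFromString(object) // e.g. "Global" or a list of object indexes
	if err != nil {
		return nil, err
	}
	// Per MSDN: close HKEY_PERFORMANCE_DATA when finished obtaining the data,
	// otherwise the performance library keeps resources alive between queries.
	defer syscall.RegCloseKey(syscall.HKEY_PERFORMANCE_DATA)

	buf := make([]byte, 1<<16)
	for {
		n := uint32(len(buf))
		err := syscall.RegQueryValueEx(syscall.HKEY_PERFORMANCE_DATA, name, nil, nil, &buf[0], &n)
		if err == nil {
			return buf[:n], nil
		}
		if err == syscall.ERROR_MORE_DATA {
			// Performance data does not report the required size; grow and retry.
			buf = make([]byte, len(buf)*2)
			continue
		}
		return nil, err
	}
}

func main() {
	data, err := queryPerfData("Global")
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Printf("read %d bytes of performance data\n", len(data))
}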
Does anybody have an idea? We are considering moving to telegraf and scraping that for the Windows metrics.
My priorities have shifted for the time being, but I have to come back to this issue pretty soon. I've resorted to scheduling a daily exporter bounce in the meantime. I can say that this issue has been around since the wmi_exporter days: I recently logged into a server that we had somehow missed in our exporter upgrades, and the memory usage was really high.
@datamuc That looks like a very good find, and even if the perflib exporter appears not to leak, I'd argue it is still incorrect not to close the key. It might require a bit of locking to do this right, though; otherwise I think overlapping scrapes might lead to issues.
@carlpett I've added the mentioned line and compiled a binary. The change didn't help with the leak at all. 😞
What do you think? Should we try to bring this issue up in the Prometheus & The Ecosystem Community Meeting? Maybe somebody there can help, or knows somebody who can.
This is not true; other collectors also use WMI. I've started the exporter with only collectors that don't touch WMI enabled, and this doesn't leak at all. So I'm pretty sure now that it leaks somewhere in github.com/StackExchange/wmi or github.com/go-ole/go-ole. The first one is definitely unmaintained; I'm not so sure about the second one.
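A rough way to exercise that suspicion, assuming github.com/StackExchange/wmi's Query function and a hypothetical Win32_Service projection: run WMI queries in a tight loop, the way overlapping scrapes would, and compare the flat Go heap numbers against the process's private bytes in Task Manager. This is only a reproduction sketch, not code from the exporter.

//go:build windows

// Reproduction sketch: hammer the StackExchange/wmi code path in a loop and
// watch the process's private bytes externally. The Go heap (go_memstats)
// tends to stay flat even when private bytes grow, matching the observations
// in this thread.
package main

import (
	"fmt"
	"runtime"
	"time"

	"github.com/StackExchange/wmi"
)

// Win32_Service is a minimal projection of the WMI class used by the service collector.
type Win32_Service struct {
	Name  string
	State string
}

func main() {
	for i := 0; ; i++ {
		var svcs []Win32_Service
		// Each call goes through go-ole COM initialization under the hood.
		if err := wmi.Query("SELECT Name, State FROM Win32_Service", &svcs); err != nil {
			fmt.Println("query failed:", err)
		}

		if i%100 == 0 {
			var m runtime.MemStats
			runtime.ReadMemStats(&m)
			// HeapAlloc typically stays flat; any leak shows up in private bytes instead.
			fmt.Printf("iteration %d: %d services, Go HeapAlloc=%d KiB\n", i, len(svcs), m.HeapAlloc/1024)
		}
		time.Sleep(100 * time.Millisecond)
	}
}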
That's a good find! I'll see if I can identify which of those libraries is the cause of the leak.
this should solve a leak in windows_exporter: https://docs.microsoft.com/en-us/windows/win32/perfctrs/using-the-registry-functions-to-consume-counter-data
> Be sure to use the RegCloseKey function to close the handle to the key when you are finished obtaining the performance data. This is important for both the local and remote cases.
prometheus-community/windows_exporter#813 (comment)
Hi, same problem here. The patched version (the curve on the right) seems to do better than v0.16.0 (the one on the left) in terms of memory usage, but a leak is still there. Do you have any other ideas to fix this? Edit: the test was performed on the master branch, so only the process collector was using StackExchange/wmi, which explains the lower memory usage. After removing the process collector, memory usage is constant at around 20-30 MB.
We removed the process and service collectors from our configuration (and added the tcp collector, so it is: [defaults] - service + tcp), and now the memory usage is stable. It seems that every collector that makes use of github.com/StackExchange/wmi leaks.
The patch above was merged into the perflib_exporter library. Can somebody take care of updating the dependency in windows_exporter, please?
@datamuc Thanks for all the work that you are putting in on this.
I found that telegraf was having a similar issue on Windows Server 2016: influxdata/telegraf#6807 (comment). I don't know enough to be able to diagnose this myself, but looking at github.com/StackExchange/wmi, they are indeed using CoInitializeEx. What are your thoughts on this?
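For context, here is a minimal sketch of roughly what that CoInitializeEx/CoUninitialize pairing looks like with github.com/go-ole/go-ole, the library underneath github.com/StackExchange/wmi. It is only an illustration of where an unbalanced pair or an unreleased COM object could accumulate memory, not the actual wmi package code.

//go:build windows

// Sketch of the per-query COM setup/teardown pattern. The matching
// CoUninitialize and the Release of every COM object are the parts that
// must never be skipped, otherwise per-query state accumulates.
package main

import (
	"runtime"

	ole "github.com/go-ole/go-ole"
	"github.com/go-ole/go-ole/oleutil"
)

func withCOM(f func() error) error {
	// COM initialization is per OS thread, so pin the goroutine first.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	if err := ole.CoInitializeEx(0, ole.COINIT_MULTITHREADED); err != nil {
		// go-ole reports S_FALSE ("already initialized") as an error;
		// real code checks for that case, this sketch just propagates it.
		return err
	}
	defer ole.CoUninitialize()

	return f()
}

func main() {
	_ = withCOM(func() error {
		unknown, err := oleutil.CreateObject("WbemScripting.SWbemLocator")
		if err != nil {
			return err
		}
		defer unknown.Release() // every COM object reference must be released, too
		return nil
	})
}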
Looking deeper into that sounds promising, but to be honest I have no idea. I was just lucky finding the leak in perflib_exporter; I have no experience with Windows-related programming at all and only know a little bit of Go...
It's been a while since I last looked at this, but I think disabling the WMI queries in the process collector by default may improve the situation. Would anyone be able to test the branch in #998 to see if the leak is still present?
I'm experiencing a similar issue with leaks on the latest release. I have the following enabled on both groups of servers:

enabled: cpu,cs,logical_disk,net,os,service,system,memory,tcp,vmware,process,iis

The only difference between them is the process whitelist.

Non-affected whitelist: "xService.?|windows_exporter.?"
Affected whitelist: "xService.?|windows_exporter.?|app.+|antivirus.+|w3wp|Scheduler|xConnector|Ccm.+|xClient|inetinfo|.+agent|.+Agent"

I will try to remove a few processes and see if it improves. (Image of the leak attached.)
It's been some time since I last looked at this, but I believe my intention with #998 was to remove a potential leak in the process collector's WMI queries. I've reopened #998 as #1062 if anyone would like to test the branch.
@breed808 I made a few tests, and the most stable configuration I ended up with is the one below:
# working config
collectors:
  enabled: cpu,cs,logical_disk,net,os,service,system,memory,tcp,vmware,process,iis,netframework_clrexceptions
collector:
  process:
    whitelist: "xService.?|windows_exporter.?|app.+|antivirus.+|w3wp|Scheduler|xConnector|Ccm.+|xClient|inetinfo|.+agent|.+Agent"
  service:
    services-where: Name LIKE 'appname%'
Coming back to the test, @breed808. After 2 weeks with 6 servers running the above config, I can confirm we are stable at around 40-50 MB of memory usage with no leaks. As soon as we enable it again, the leak comes back.
@breed808 I can't replicate it reliably, but I am able to replicate it. I am seeing a trend where one of the instances is using more and more memory as time passes. This started exactly when I added a Raspberry Pi as a Prometheus server and targeted the windows_exporter with it to collect data every second. I am also hitting the same windows_exporter every second with a local instance of Prometheus and a netdata collector on another Raspberry Pi. Having a single instance did not cause the memory leak, but having both made it start happening. My guess would be that something is leaving the connection open, or it is forcefully closed without letting the server clean up. So for me to reproduce it, I had to hit the windows_exporter more than 3 times per second at the very least.
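A small load-generation sketch matching that description, for anyone who wants to reproduce it: several concurrent loops hitting the /metrics endpoint so that scrapes overlap. The address and rates here are assumptions, not values from the issue.

// Load-generation sketch: overlapping scrapes against the exporter's
// /metrics endpoint, roughly matching the ">3 requests per second" condition
// described above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func scrapeLoop(id int, url string, interval time.Duration) {
	for {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Printf("scraper %d: %v\n", id, err)
		} else {
			io.Copy(io.Discard, resp.Body) // drain so connections can be reused
			resp.Body.Close()
		}
		time.Sleep(interval)
	}
}

func main() {
	const target = "http://localhost:9182/metrics" // assumed exporter address
	for i := 0; i < 4; i++ {
		go scrapeLoop(i, target, 250*time.Millisecond) // ~4 scrapes per second each
	}
	select {}
}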
On Windows 2016, I also find that using WMI queries results in a memory leak.
This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.
This is not stale; the process leaks memory, especially (and faster) if it is being polled more than 3 times per second, for whatever reason.
Just an idea: an option that might relieve the situation would be for the exporter to have something like --max-memory-consumption and monitor itself; if it reaches that limit, it restarts itself, or something along those lines.
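Such a watchdog could look roughly like the sketch below. The --max-memory-consumption flag does not exist in windows_exporter; this is purely hypothetical: check the process's private bytes periodically via GetProcessMemoryInfo and exit when a limit is exceeded, relying on the service manager's restart policy to bring the exporter back up.

//go:build windows

// Hypothetical self-monitoring sketch, not part of windows_exporter.
package main

import (
	"log"
	"os"
	"time"
	"unsafe"

	"golang.org/x/sys/windows"
)

func watchMemory(limitBytes uint64, interval time.Duration) {
	for range time.Tick(interval) {
		var pmc windows.PROCESS_MEMORY_COUNTERS
		err := windows.GetProcessMemoryInfo(windows.CurrentProcess(), &pmc, uint32(unsafe.Sizeof(pmc)))
		if err != nil {
			log.Printf("GetProcessMemoryInfo failed: %v", err)
			continue
		}
		// PagefileUsage corresponds roughly to private bytes / commit charge.
		if uint64(pmc.PagefileUsage) > limitBytes {
			log.Printf("private bytes %d exceeded limit %d, exiting so the service manager restarts us", pmc.PagefileUsage, limitBytes)
			os.Exit(1)
		}
	}
}

func main() {
	go watchMemory(512<<20, time.Minute) // assumption: 512 MiB limit, checked every minute
	select {}                            // stand-in for the exporter's normal work
}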
Can someone provide some details? The version of windows_exporter, and the pprof output as a file attachment. If possible, generate a trace as well.
We already had this here: #813 (comment). But it doesn't help, because the memory is lost somewhere when interacting with the Windows API. We found the leak in perflib, but there is at least one more in WMI. I can tell because the leakage stops if I disable all the collectors that make use of WMI.
@datamuc I saw there are already pprof dumps attached. However, they come from 0.16, and I would like to start from at least the latest release. In the meantime, we have included the perflib libraries in windows_exporter and no longer depend on external contributors (https://github.com/prometheus-community/windows_exporter/tree/master/pkg/perflib).
Interesting, I will try to come up with something next week.
Sorry, I don't have access to many Windows machines anymore. The two that are left don't leak. So if somebody still has leaks, they'll have to do #813 (comment) themselves.
This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.
Hello!
I have a couple of servers reporting high memory usage by the windows_exporter. I am using the default configuration/collectors. Is there any way to limit the resource usage of the exporter?
Exporter version:
windows_exporter, version 0.16.0 (branch: master, revision: f316d81d50738eb0410b0748c5dcdc6874afe95a) build user: appveyor-vm\appveyor@appveyor-vm build date: 20210225-10:52:19 go version: go1.15.6 platform: windows/amd64
OS Name: Microsoft Windows Server 2016 Standard
OS Version: 10.0.14393 N/A Build 14393
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
1260 142 12748860 12496424 110,102.69 11388 0 windows_exporter