We got a low memory alert on our servers last night. I logged in to find the rsyslog process had ballooned to consume 3GB of memory (it normally peaks at around 50MB). I also found that /var/log/api-umbrella/rsyslog/elasticsearch_error.log had ballooned to around 3GB (normally it's completely empty). The log file was reporting various Elasticsearch indexing errors, but all of our data was actually appearing in Elasticsearch just fine. I killed the rsyslog process and let it automatically restart. That alleviated the immediate memory issue, but rsyslog's memory use has continued to climb back up throughout the day.
After poking around, I found that the issue stems from upgrading rsyslog from v8.27.0 to v8.28.0 a couple of days ago (rsyslog got upgraded when we upgraded the API Umbrella package to address some security updates: #393). Under rsyslog v8.28.0, the way we've configured rsyslog leads to a pretty severe memory leak: each request that gets logged increases rsyslog's memory use, so memory use rises in conjunction with traffic.

I've put together a more detailed bug report for rsyslog (rsyslog/rsyslog#1668, along with test cases: https://github.com/GUI/rsyslog-omelasticsearch-leak), but in the meantime, I think we need to roll API Umbrella's package back to use rsyslog v8.27.0.
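A check along these lines can confirm that the growth tracks log traffic, by watching rsyslogd's resident memory while requests flow (a rough sketch, not part of API Umbrella; the psutil dependency, the "rsyslogd" process name, and the sampling interval are assumptions):

```python
# Rough sketch: sample rsyslogd's resident memory every few seconds to show
# that it grows while requests are being logged. The "rsyslogd" process name
# and the 5-second interval are assumptions for illustration.
import time

import psutil


def rsyslog_rss_bytes():
    """Total resident set size (RSS) of all rsyslogd processes, in bytes."""
    return sum(
        p.info["memory_info"].rss
        for p in psutil.process_iter(["name", "memory_info"])
        if p.info["name"] == "rsyslogd" and p.info["memory_info"]
    )


if __name__ == "__main__":
    baseline = rsyslog_rss_bytes()
    print(f"baseline RSS: {baseline / 1024 / 1024:.1f} MB")
    while True:
        time.sleep(5)
        rss = rsyslog_rss_bytes()
        delta = (rss - baseline) / 1024 / 1024
        print(f"current RSS: {rss / 1024 / 1024:.1f} MB (+{delta:.1f} MB)")
```

If memory climbs steadily while requests are being logged and holds flat once traffic stops, the leak is being driven by log traffic rather than by something internal to rsyslog's startup.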
This issue should be addressed on the servers now.
The servers have been upgraded to the newly released API Umbrella v0.14.4, which rolls rsyslog back to v8.27.0 to address the underlying memory growth.
Since reverting to v8.27.0 on our servers, rsyslog's memory use has been holding steady at around 8MB for about an hour now. Previously, memory use started climbing almost immediately (since any log traffic would cause it to balloon), so I think this is resolved.
I've added some explicit tests to check rsyslog's memory use in our automated test suite: NREL/api-umbrella@98af6f7. While the test is pretty specific to this issue, hopefully it will prevent this kind of thing from cropping up in the future when upgrading rsyslog.
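For reference, the check boils down to something along these lines (a minimal pytest-style sketch, not the actual test from the commit above; the psutil dependency, the local proxy URL, the request count, and the memory threshold are all assumptions):

```python
# Rough sketch of the kind of regression check described above (this is NOT
# the actual test from API Umbrella's test suite). It drives a burst of
# requests through the proxy so they get logged, then asserts that rsyslogd's
# resident memory stays under a generous ceiling.
import urllib.request

import psutil


def rsyslog_rss_mb():
    """Total resident memory of all rsyslogd processes, in megabytes."""
    return sum(
        p.info["memory_info"].rss
        for p in psutil.process_iter(["name", "memory_info"])
        if p.info["name"] == "rsyslogd" and p.info["memory_info"]
    ) / (1024 * 1024)


def test_rsyslog_memory_stays_bounded():
    # Enough logged requests that a per-message leak would be clearly visible.
    for _ in range(10000):
        urllib.request.urlopen("http://localhost:9080/api/hello").read()

    assert rsyslog_rss_mb() < 100, "rsyslogd memory grew beyond the expected bound"
```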