
Let's talk about v8's memory settings #5595

Closed
spalger opened this issue Dec 8, 2015 · 4 comments
Labels: Team:Docs, Team:Operations

Comments

spalger (Contributor) commented Dec 8, 2015

As #5170 has pointed out, the way that v8 manages memory isn't compatible with everyone's deployment or needs. To change this behavior, v8 exposes a set of command-line flags that we will allow users to set once #5451 is merged.

We should provide some documentation about how to use these settings in Kibana, and I think we could use this article from heroku as inspiration.

A snippet:

Node (V8) uses a lazy and greedy garbage collector. With its default limit of about 1.5 GB, it sometimes waits until it absolutely has to before reclaiming unused memory. If your memory usage is increasing, it might not be a leak - but rather node's usual lazy behavior.
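
For context, here is a quick way to see the heap limit that snippet describes and how the flag changes it. This is just a sketch: it assumes a node binary on your PATH, and the exact numbers vary by node/V8 version.

    # Print V8's heap size limit in MB with the default settings:
    node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024)"

    # The same check with the old-space limit lowered to 256MB:
    node --max-old-space-size=256 -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024)"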

palecur self-assigned this Dec 8, 2015
palecur added the v4.4.0 label Dec 8, 2015

spalger (Contributor, Author) commented Dec 9, 2015

Additional information on the issue can be found here: https://blog.risingstack.com/finding-a-memory-leak-in-node-js/

The TL;DR version

In our particular case the service was running on a small instance, with only 512MB of memory. As it turned out, the application didn't leak any memory, simply the GC didn't start collecting unreferenced objects.
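
A sketch of the workaround both write-ups point at: on a small instance, cap V8's old space well below the machine's total memory so the GC starts collecting earlier. How Kibana will expose the flag depends on #5451, so the entry point below is only an assumption for illustration.

    # Hypothetical: launch Kibana's Node process with a 256MB old-space cap
    # so a 512MB instance isn't exhausted before the GC bothers to run.
    node --max-old-space-size=256 src/cli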

rashidkpc self-assigned this and unassigned palecur Dec 28, 2015
rashidkpc removed their assignment Jan 28, 2016

tylersmalley (Contributor) commented

I have created a PR which sets --max-old-space-size to 256MB, except when you are running in dev mode with --dev.
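
If you want to check which limit your installed copy ends up with, something along these lines should show where the flag is wired in; the install path is a guess for the deb/rpm layout.

    # Hypothetical install path; adjust to wherever your Kibana lives.
    grep -rn "max-old-space-size" /opt/kibana/bin /opt/kibana/src 2>/dev/null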

One discussion we have had is that Node should be treated as an implementation detail and not a documented interface for the application.

jrwren commented Jun 10, 2016

I'm using the 4.5.1 deb from packages.elastic.co and the memory usage surprised me.

I start the service and never visit the HTTP listening socket, yet memory usage still grows.

It starts at ~90MB. After 30 minutes it is using ~125MB. After an hour it is using ~140MB. After two hours it is using ~200MB. After three hours it is using ~265MB. I stopped at ~3 hours 45 minutes and usage was ~316MB.

https://gist.github.com/anonymous/a6cf1550d738f9b3b669e6c5a74f6d8f
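
For anyone who wants to reproduce this kind of measurement, a rough sampling loop like the one below works; the pgrep pattern is an assumption about how the service's Node process shows up in the process list.

    # Log Kibana's resident set size (in KB) every 10 minutes.
    while true; do
      printf '%s ' "$(date)"
      ps -o rss= -p "$(pgrep -f 'kibana' | head -n1)"
      sleep 600
    done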

I investigated because I had left Kibana running, entirely unused, for a couple of weeks and its memory usage was well over 1GB. This surprised me.

I've worked around it by adding LimitRSS=150M to /lib/systemd/system/kibana.service. In my case I don't want it using too much memory, and I'm happy to let systemd restart it for me every hour or so.
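
If you go this route, a drop-in override keeps the change out of the packaged unit file so upgrades don't clobber it. A minimal sketch of that variant, using the same value:

    # Create a drop-in instead of editing /lib/systemd/system/kibana.service directly.
    sudo mkdir -p /etc/systemd/system/kibana.service.d
    printf '[Service]\nLimitRSS=150M\n' | sudo tee /etc/systemd/system/kibana.service.d/memory.conf
    sudo systemctl daemon-reload
    sudo systemctl restart kibana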

schersh (Contributor) commented Apr 19, 2019

Closing out this issue because it doesn't have any recent activity. If you still feel this issue needs to be addressed, feel free to open it back up.
