Node v7.6.0+ memory leak #12019

Closed
clocked0ne opened this issue Mar 24, 2017 · 12 comments
Labels: memory (Issues and PRs related to the memory management or memory footprint)

Comments

@clocked0ne commented Mar 24, 2017

  • Version: v7.6.0
  • Platform: Windows / Docker AWS POSIX 64-bit
  • Subsystem: V8 5.5 / zlib 1.2.11

We noticed a steady and consistent memory leak in our application when running v7.6.0 in production. At first we attributed it to a section of code using async/await, but after reverting that change the problem persisted. As soon as we reverted the Node version in our Docker container back to v7.5.0, the issue went away completely.

This mirrors the experience reported in https://twitter.com/yaypie/status/838539575605653504

I can provide additional graphs from Cloudwatch showing memory utilisation growing in our application if necessary.

I am surprised no one else seems to have found the same issue or raised it as an Issue or PR so far!

We are stuck on v7.5.0 until this is rectified, which is a shame as we were looking forward to trialling async/await.

@bnoordhuis (Member)

Do you have a way for us to reproduce? The simpler the test case, the better.
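Something along the lines of the sketch below is usually enough: a standalone script that hammers the suspected code path and logs memory over time. This is only a sketch; the doWork() function is a placeholder, not anything from the actual application.

```js
'use strict';

// Placeholder for the code path suspected of leaking; replace with whatever
// the application actually does (zlib, crypto, async/await, ...).
function doWork() {
  return Buffer.alloc(1024).toString('base64');
}

let iterations = 0;

setInterval(() => {
  for (let i = 0; i < 10000; i++) doWork();
  iterations += 10000;

  // rss climbing while heapUsed stays flat points at native (C++) memory;
  // both climbing together points at the JS heap.
  const { rss, heapUsed, external } = process.memoryUsage();
  console.log(
    `${iterations} iterations: rss=${(rss / 1048576).toFixed(1)}MB ` +
    `heapUsed=${(heapUsed / 1048576).toFixed(1)}MB ` +
    `external=${(external / 1048576).toFixed(1)}MB`
  );
}, 1000);
```

If the numbers keep climbing on v7.6.0 but plateau on v7.5.0 with the same script, that is exactly the kind of test case that would help.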

bnoordhuis added the memory (Issues and PRs related to the memory management or memory footprint) label on Mar 24, 2017
@clocked0ne (Author)

Can't say I do, unfortunately. Aside from isolating the commits that changed in our application between versions (which included the update from Node v7.5.0 to v7.6.0), we could not pin down a specific section of code responsible. The memory footprint on our production server grows gradually over the space of a few hours, which is not something we have found a way to replicate quickly:

[leak.png: CloudWatch graph of memory utilisation growing over time]

The two most likely culprits among the changes in 7.6.0 appear to be the upgrades to V8 5.5 and zlib 1.2.11.
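As a first check that doesn't require rebuilding Node, something like the rough sketch below could be run on v7.5.0 and v7.6.0 to see whether zlib alone makes RSS climb. This is only an illustrative snippet, not code from our application.

```js
'use strict';

const zlib = require('zlib');
const crypto = require('crypto');

// Compress and decompress a random payload in a tight loop; if zlib 1.2.11
// itself were leaking, rss should climb steadily while heapUsed stays flat.
const payload = crypto.randomBytes(64 * 1024);

setInterval(() => {
  for (let i = 0; i < 200; i++) {
    const compressed = zlib.gzipSync(payload);
    zlib.gunzipSync(compressed);
  }
  const { rss, heapUsed } = process.memoryUsage();
  console.log(
    `rss=${(rss / 1048576).toFixed(1)}MB heapUsed=${(heapUsed / 1048576).toFixed(1)}MB`
  );
}, 1000);
```

If this stays flat on both versions, that would point the finger at the V8 upgrade instead.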

@bnoordhuis (Member)

The V8 upgrade seems like the most likely culprit, yes. You could build v7.6.0 from source and revert the zlib upgrade to verify it's not that. The V8 upgrade is less trivial; it's not a single commit.

That said, does "memory leak" mean that the process eventually dies with an out-of-memory error or does it just have a bigger memory footprint than before?

@clocked0ne (Author)

The memory footprint kept growing until the app was forcibly restarted when CPU/memory hit their limits (presumably with GC running constantly); then it would begin to grow again straight away, as indicated by the orange line on the graph above.

@bnoordhuis (Member)

What are those limits? Does perf top or perf record work in a docker container? If yes, I'd be curious to see what the profile looks like when it's busy-looping.

@clocked0ne (Author)

I would like to jump on this and test your suggestions, but as I am working on an Agile project I will have to schedule some time to try them and/or upgrade to the latest Node version, v7.7.4, to see if that makes any difference.

@bnoordhuis (Member)

Okay, since there isn't anything actionable at the moment I'll go ahead and close this out for now. Let me know when you (generic you, not just you you) have more to report or have a way to reproduce and we can revisit. Cheers.

@clocked0ne (Author)

Is there any material gain in closing the ticket? Surely keeping it visible makes it more likely that other people experiencing the same problem will be able to offer their own reproduction steps from their testing?

@bnoordhuis (Member)

It declutters the issue list. People will still be able to find it but collaborators won't have to sift through non-actionable issues.

@gibfahn (Member) commented Mar 24, 2017

cc @rgrove @coox from https://twitter.com/yaypie/status/838539575605653504. A reproducible test case would be ideal (the fewer dependencies the better).

@coox commented Mar 31, 2017

It appears that the culprit could have been this issue in crypto, which was just fixed in 7.8.0: #12089
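A rough way to sanity-check that on 7.8.0 is a loop like the one below; this is only a sketch, and I can't say it exercises the exact code path fixed in #12089.

```js
'use strict';

const crypto = require('crypto');

// Exercise a few common crypto entry points repeatedly and watch rss;
// on an affected build it should climb, on a fixed build it should level off.
setInterval(() => {
  for (let i = 0; i < 5000; i++) {
    const data = crypto.randomBytes(256);
    crypto.createHash('sha256').update(data).digest('hex');
    crypto.createHmac('sha256', 'key').update(data).digest('hex');
  }
  const { rss, heapUsed } = process.memoryUsage();
  console.log(
    `rss=${(rss / 1048576).toFixed(1)}MB heapUsed=${(heapUsed / 1048576).toFixed(1)}MB`
  );
}, 1000);
```

Comparing the output on 7.6.x and 7.8.0 would at least show whether the crypto fix changes the growth pattern.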

@gibfahn (Member) commented Mar 31, 2017

@coox yeah that seems likely. In that case this is a duplicate of #12033 (well technically the other way round, but that one has more info).
