Strange (?) memory footprint #1146
The `heapUsed` > `RSS` is normal; it means part of the process' memory has been swapped out to disk. I don't think I've ever seen `heapUsed` > `heapTotal`. That's coming directly from V8 though.
That said, I did spot a minor bug, but unless you've been tweaking `--max_old_space_size`, it's unlikely that you've been affected by that. See #1148.
@bnoordhuis I'm using `--max_old_space_size`.
Right, if you set it to something > 4 GB, `heapUsed` and `heapTotal` can wrap around. #1148 fixes that.
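Purely as an illustration (this assumes the bug in question is a 32-bit truncation of the reported byte counts; the actual code path lives in the node/V8 bindings and is described in #1148), here is what a heap larger than 4 GB would look like after such a wrap-around:

```js
// Hypothetical illustration only: a byte count above 4 GB squeezed
// through an unsigned 32-bit field wraps around modulo 2^32.
var actualHeapUsed = 5 * 1024 * 1024 * 1024;          // 5 GB really in use
var reported = actualHeapUsed % Math.pow(2, 32);      // what a 32-bit field can hold
console.log((reported / 1024 / 1024).toFixed(2) + ' MB'); // "1024.00 MB"
```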
I did a quick check of how V8 calculates the used and total size but I didn't spot anything obvious. It's possible the size of the live objects is overestimated, but that's just guesswork. Sorry, I don't think there's anything actionable for us here.
From my side this is not just curiosity. Sometimes my workers run into trouble and begin to consume 100% of CPU time. The process is still able to process incoming requests, write logs (those memory ones) and so on, but things get very, very slow. Investigating this, I noticed that strange (from my point of view) memory behavior. And if I let things go and do not restart, my … So, I understand. @bnoordhuis, thank you.
@Olegas Happy to help. I'll close the issue now, but let me know if you find anything.
@bnoordhuis I've got another sample of such a strange process. One of our worker servers became unresponsive (TTFB ~30 sec), and the worker with that response time has the "strange memory footprint". Some details:

RSS: 215.47 MB, Heap Used: 1.91 GB, Heap Total: 126.16 MB

System memory status shows no swapping issues. An lsof listing and a .heapsnapshot (~60 MB, taken with node-heapdump) were also collected. This is node 0.10.33 on CentOS; io.js 1.5.1 shows the same memory footprint behavior (same app, same flags, same OS).
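For reference, a snapshot like the one mentioned above can be captured with node-heapdump roughly like this (a minimal sketch; the output path is just an example):

```js
var heapdump = require('heapdump');

// Write a V8 heap snapshot; the resulting .heapsnapshot file can be
// loaded into the Chrome DevTools "Memory" tab for inspection.
heapdump.writeSnapshot('/tmp/worker-' + process.pid + '.heapsnapshot', function (err, filename) {
  if (err) console.error('heapdump failed:', err);
  else console.log('heap snapshot written to', filename);
});
```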
Some internal log inspection shows the process is able to communicate with "upstream" servers with normal response times, but sometimes internal (to this machine) tasks (reading files, working with strings using regexps, etc.) take an enormous amount of time to complete.
My app uses requirejs, which is using the `vm` module. Could that be related?
@Olegas It's possible but this is not the right bug tracker for issues with node.js v0.10. The vm subsystem in io.js is a from-the-ground-up rewrite of the one in v0.10. |
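For context, a loader like requirejs typically evaluates module source through the vm module along these lines (a minimal sketch, not requirejs' actual implementation):

```js
var vm = require('vm');

// Compile and run source in the current V8 context; the second argument
// is used as the filename that shows up in stack traces.
var source = 'var answer = 6 * 7; answer;';
var result = vm.runInThisContext(source, 'example-module.js');
console.log(result); // 42
```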
I have an app which monitors its memory usage via `process.memoryUsage()`. Sometimes the data reported by this method looks strange to me: I can see the process's `heapUsed` > `heapTotal` and `heapUsed` > `RSS`. Is this normal behavior? How can it be explained?
I can see this on 0.10.33, on io.js 1.5.1, and on some versions back.
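For reference, that kind of monitoring can be done with something like the following (a minimal sketch; the interval and the warning are illustrative, not the app's actual code):

```js
// Periodically log process.memoryUsage() and flag the anomaly
// discussed in this issue (heapUsed exceeding heapTotal or RSS).
function mb(bytes) {
  return (bytes / 1024 / 1024).toFixed(2) + ' MB';
}

setInterval(function () {
  var mem = process.memoryUsage();
  console.log('RSS: ' + mb(mem.rss) +
              ', Heap Used: ' + mb(mem.heapUsed) +
              ', Heap Total: ' + mb(mem.heapTotal));
  if (mem.heapUsed > mem.heapTotal || mem.heapUsed > mem.rss) {
    console.warn('Strange memory footprint: heapUsed exceeds heapTotal and/or RSS');
  }
}, 60 * 1000);
```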