
pm2 memory leak #4134

Closed
songky08 opened this issue Jan 23, 2019 · 19 comments

@songky08

What's going wrong?

The server's memory slowly increases until the process runs out of memory.
I've been debugging the server with Chrome DevTools: roughly every 0.5–1 seconds, the process retains objects that appear to be metrics histograms. According to the documentation, this data should only be kept for the last 5 minutes by default, but the objects are held indefinitely and memory grows steadily. I've tried to turn this metric off, but it seems to be a built-in pm2 feature. pm2 needs an option to disable metrics or to control the retention period.
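One way to quantify the growth described above without attaching DevTools is to compare `process.memoryUsage()` readings around retained allocations. A minimal sketch using only Node built-ins; the object shape and count are illustrative stand-ins, not pm2's actual metrics objects:

```javascript
// Compare RSS before and after retaining a batch of objects, to illustrate
// how held references show up in process.memoryUsage() readings.
function rssMB() {
  return process.memoryUsage().rss / 1024 / 1024;
}

const before = rssMB();

// Stand-in for the retained histogram/metrics objects described above.
const retained = [];
for (let i = 0; i < 100000; i++) {
  retained.push({ timestamp: Date.now(), value: i });
}

const after = rssMB();
console.log(`rss before=${before.toFixed(1)}MB after=${after.toFixed(1)}MB retained=${retained.length}`);
```

Logging a line like this periodically from inside the app makes a slow leak visible in `pm2 logs` without a debugger attached.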

How could we reproduce this issue?

Just run a server with pm2 and debug it with Chrome DevTools.

Supporting information

[screenshot: pm2_memory_leak_issue]

--- Daemon -------------------------------------------------
pm2d version         : 3.2.9
node version         : 10.15.0
node path            : not found
argv                 : /usr/local/bin/node,/usr/local/bin/pm2-runtime,start,echosystem.config.js,--env,production
argv0                : node
user                 : user
uid                  : 1001
gid                  : 1001
uptime               : 958min
===============================================================================
--- CLI ----------------------------------------------------
local pm2            : 3.2.9
node version         : 10.15.0
node path            : /usr/local/bin/pm2
argv                 : /usr/local/bin/node,/usr/local/bin/pm2,report
argv0                : node
user                 : user
uid                  : 1001
gid                  : 1001
===============================================================================
--- System info --------------------------------------------
arch                 : x64
platform             : linux
type                 : Linux
cpus                 : Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz
cpus nb              : 2
freemem              : 68087808
totalmem             : 1004294144
home                 : /home/user
===============================================================================
@shimiml4

shimiml4 commented Jun 2, 2019

Hi,
We reproduced it too: all our microservices in production have been leaking memory since 3.5.1.
Moreover, there is no package-lock.json file in this repo, so we cannot pin back to 3.5.0 as a known-working version.
We compared 3.5.1 and 3.5.0 and can see that the memory leak is not coming from the pm2 repo code itself but from third-party code.
This is the diff in dependencies:

[screenshot: dependency diff]

@shimiml4

shimiml4 commented Jun 2, 2019

Looks like there was a problem with pm2/[email protected], which was fixed in 4.2.2.
See keymetrics/pm2-io-apm#252.
We verified it in production and the problem looks solved!

@Unitech
Owner

Unitech commented Jun 3, 2019

Great! Closing now

@Unitech Unitech closed this as completed Jun 3, 2019
@shimiml4

shimiml4 commented Jun 3, 2019

> Great! Closing now

I think you should keep a package-lock.json file in the repo to prevent this kind of issue in the future.

Thanks

@eminoda

eminoda commented Sep 6, 2019

@shimiml4 I also hit a memory leak using pm2 3.5.1. I've since downgraded to 3.4.1, which has been fine these past few days.

@eusonlito

eusonlito commented Sep 11, 2019

Same here using version 3.5.1.

With plain node, memory is released after an image-resize process finishes; with pm2, memory usage stays at its peak after the request completes.

I'm using node 10.16.3, and the image-resize process uses https://github.com/lovell/sharp

'use strict';

const sharp = require('sharp');

module.exports = class extends require('./interface') {
    static image(source, target, config) {
        return sharp(source)
            .resize(config.width, config.height, config.options)
            .jpeg(config.quality)
            .toFile(target);
    }
};

source is a file path string. Memory increases with every file processed and is never released.

After 2,000 images processed, memory is at 3.8 GB.

$ pm2 list
┌─────────────────────────────┬────┬─────────┬──────┬────────┬────────┬─────────┬────────┬──────┬────────────┬──────────┬──────────┐
│ App name                    │ id │ version │ mode │ pid    │ status │ restart │ uptime │ cpu  │ mem        │ user     │ watching │
├─────────────────────────────┼────┼─────────┼──────┼────────┼────────┼─────────┼────────┼──────┼────────────┼──────────┼──────────┤
│ XXXXXXXXXXXXXXXXXXXXXXXXXX  │ 13 │ 0.0.1   │ fork │ 68707  │ online │ 2       │ 31m    │ 0.3% │ 3.8 GB     │ www-data │ disabled │
└─────────────────────────────┴────┴─────────┴──────┴────────┴────────┴─────────┴────────┴──────┴────────────┴──────────┴──────────┘

I need to restart pm2 on every image resize batch process.

@eusonlito

As @eminoda says, downgrading to 3.4.1 solves the problem.

@himadrinath

No, it does not solve the problem.

@prinze77

Any updates?

@hakimelek

Is this still an issue for anyone on v3.5.1?

@Shogobg

Shogobg commented Jan 10, 2021

Still having this issue with 4.5.1

@Albo1125

Still having this issue with 4.5.2.

@TheAndroidGuy

I have this issue with 5.1.0

@qiulang

qiulang commented May 13, 2022

I have this issue with 5.2, please refer to #5145

@TheAndroidGuy

Update: I fixed the memory problem by using jemalloc as the memory allocator. Here are instructions on how to change it.

@qiulang

qiulang commented May 15, 2022

@TheAndroidGuy I have 2 servers running: the one with pm2 uses a huge amount of memory, the other runs normally. I really don't think I can make that judgement. And according to nodejs/node#21973, Node.js declined to adopt jemalloc.

@OmgImAlexis

@qiulang you can still use jemalloc, and as the issue you linked states, it's useful for some workloads. It all depends on your use case.

@titanism

Indeed, we determined that pmx (i.e. via @pm2/io) is the culprit for our memory leaks. Our app typically runs at 500 MB, but without explicitly disabling pmx it grew to over 3–4 GB of memory.

To fix this, simply disable pmx in your ecosystem.json file:

{
  "apps": [
    {
      "name": "your-app-name",
      "script": "app.js",
      "exec_mode": "cluster_mode",
      "wait_ready": true,
      "instances": "max",
+     "pmx": false,
      "env_production": {
        "NODE_ENV": "production"
      }
    }
  ]
}

Then delete and restart all via pm2 delete all and pm2 start ecosystem.json --env production.

@ianharrigan

How would we set pmx: false via the JS API?
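One plausible route, assuming pm2's programmatic API accepts the same option keys as ecosystem files (that assumption, and the app name/script, are illustrative and worth verifying against the pm2 API docs):

```javascript
// Sketch: pass pmx: false in the options object given to pm2.start().
// Assumption: the programmatic API honors 'pmx' the way ecosystem files do.
const appConfig = {
  name: 'your-app-name',
  script: 'app.js',
  exec_mode: 'cluster',
  instances: 'max',
  pmx: false, // disable @pm2/io instrumentation, mirroring the ecosystem fix above
};

// Usage (requires the pm2 package installed; left commented so this sketch
// stays self-contained):
// const pm2 = require('pm2');
// pm2.connect((err) => {
//   if (err) throw err;
//   pm2.start(appConfig, (err2) => {
//     pm2.disconnect();
//     if (err2) throw err2;
//   });
// });

console.log(JSON.stringify(appConfig));
```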
