Load exposed incorrectly in LXC (Likely due to libuv) #33791

Closed
lucasRolff opened this issue Jun 8, 2020 · 1 comment · Fixed by libuv/libuv#2876
Labels
libuv: Issues and PRs related to the libuv dependency or the uv binding.
linux: Issues and PRs related to the Linux platform.

Comments

@lucasRolff

  • Version: v12.16.3
  • Platform: Ubuntu 18.04 - 5.0.21-5-pve
  • Subsystem: os

What steps will reproduce the bug?

Spawn an LXC container. The load reported inside the container (via top or htop) is correct, while Node.js reports the hypervisor's load:

# uptime; node load.js
 09:18:04 up 40 days, 22:06,  3 users,  load average: 0.15, 0.32, 0.43
[ 20.83984375, 14.32861328125, 12.10791015625 ]

Code example:

const os = require('os');

const os_loadavg = os.loadavg();

console.log(os_loadavg);

How often does it reproduce? Is there a required condition?

Constantly

What is the expected behavior?

I'd expect the load average to be reported as seen inside the container, matching what uptime/top show.

What do you see instead?

The hypervisor's load is exposed instead.

Additional information

The environment is LXC; I'd assume this is because libuv gets the information from the hypervisor rather than from the container.

Maybe an approach similar to #27170 should be implemented, allowing a fallback to /proc/loadavg where available.

It's quite common to run applications within LXC, so it would be nice if the os module returned the information as seen by the container itself.

In my specific case, the application monitors the system load and halts until the load drops below a defined limit. Since the load returned is wrong (higher than the limit), the application halts more frequently than it should.
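
For illustration, the mismatch can be reproduced outside Node with a small C program that queries both sources: sysinfo(2), which libuv uses and which LXC does not virtualize, and /proc/loadavg, which tools like top and uptime read and which lxcfs can virtualize per container. This is a sketch to demonstrate the discrepancy, not part of Node or libuv.

#include <stdio.h>
#include <sys/sysinfo.h>

#ifndef SI_LOAD_SHIFT
#define SI_LOAD_SHIFT 16
#endif

int main(void) {
    struct sysinfo si;
    double a, b, c;
    FILE *fp;

    /* What libuv reports today: sysinfo(2), i.e. the host's load averages. */
    if (sysinfo(&si) == 0)
        printf("sysinfo(2):    %.2f %.2f %.2f\n",
               si.loads[0] / (double) (1 << SI_LOAD_SHIFT),
               si.loads[1] / (double) (1 << SI_LOAD_SHIFT),
               si.loads[2] / (double) (1 << SI_LOAD_SHIFT));

    /* What top/uptime report: /proc/loadavg, which lxcfs can virtualize. */
    fp = fopen("/proc/loadavg", "r");
    if (fp != NULL) {
        if (fscanf(fp, "%lf %lf %lf", &a, &b, &c) == 3)
            printf("/proc/loadavg: %.2f %.2f %.2f\n", a, b, c);
        fclose(fp);
    }
    return 0;
}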

bnoordhuis added a commit to bnoordhuis/libuv that referenced this issue Jun 8, 2020
It was reported that uv_loadavg() reports the wrong values inside an
lxc container.

Libuv calls sysinfo(2), but that isn't intercepted by lxc. /proc/loadavg,
however, is intercepted, because /proc is a FUSE fs inside the container.

This commit makes libuv try /proc/loadavg first and fall back to
sysinfo(2) in case /proc isn't mounted.

This commit is very similar to commit 3a1be72 ("linux: read free/total
memory from /proc/meminfo") from April 2019.

Fixes: nodejs/node#33791
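
For reference, a minimal sketch of the fallback described in the commit message: prefer /proc/loadavg and fall back to sysinfo(2) when /proc isn't mounted. The function name is illustrative only; see libuv/libuv#2876 for the actual implementation.

#include <stdio.h>
#include <sys/sysinfo.h>

#ifndef SI_LOAD_SHIFT
#define SI_LOAD_SHIFT 16
#endif

/* Sketch of the /proc/loadavg-first approach; not libuv's actual code. */
static void loadavg_sketch(double avg[3]) {
    struct sysinfo si;
    FILE *fp = fopen("/proc/loadavg", "r");

    if (fp != NULL) {
        int n = fscanf(fp, "%lf %lf %lf", &avg[0], &avg[1], &avg[2]);
        fclose(fp);
        if (n == 3)
            return;  /* container-aware values when lxcfs mounts /proc/loadavg */
    }

    /* /proc not mounted or unreadable: fall back to sysinfo(2). */
    avg[0] = avg[1] = avg[2] = 0.0;
    if (sysinfo(&si) == 0) {
        avg[0] = si.loads[0] / (double) (1 << SI_LOAD_SHIFT);
        avg[1] = si.loads[1] / (double) (1 << SI_LOAD_SHIFT);
        avg[2] = si.loads[2] / (double) (1 << SI_LOAD_SHIFT);
    }
}
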
@bnoordhuis
Member

It's indeed the same issue as #27170: libuv uses sysinfo(2) to obtain the load average, which LXC doesn't intercept/emulate.

I'm not going to bend libuv much further in order to make it work with LXC but I've opened libuv/libuv#2876 because it's a fairly straightforward fix.

@bnoordhuis added the libuv and linux labels Jun 8, 2020
JeffroMF pushed a commit to JeffroMF/libuv that referenced this issue May 16, 2022
PR-URL: libuv#2876
Reviewed-By: Colin Ihrig <[email protected]>
Reviewed-By: Richard Lau <[email protected]>