We run node_exporter in Docker/Kubernetes on AWS EKS. Each of our worker nodes has a 20 GiB root device volume (/dev/sda1). That device name is a symlink to nvme0n1, whose partition /dev/nvme0n1p1 backs the root filesystem:
$ ls -l /dev/sda1
lrwxrwxrwx 1 root root 7 Dec 4 01:36 /dev/sda1 -> nvme0n1
As expected, df confirms that the filesystem is indeed 20 GiB:
$ df -hT /
Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme0n1p1 xfs 20G 9.6G 11G 48% /
Given this info, I'd expect node_filesystem_size_bytes and node_filesystem_avail_bytes for both {device="/dev/nvme0n1p1",fstype="xfs",mountpoint="/"} and {device="rootfs",fstype="rootfs",mountpoint="/"} to be 20 GiB and 11 GiB, respectively.
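For reference, those expected values work out as follows in the exporter's raw byte units (a quick sketch of the unit arithmetic only):

```go
package main

import "fmt"

func main() {
	const gib = uint64(1) << 30 // 1 GiB in bytes
	// Expected gauge values if the metrics described the 20 GiB root volume.
	fmt.Println("size:", 20*gib)  // 21474836480, i.e. ~2.1474836480e+10
	fmt.Println("avail:", 11*gib) // 11811160064, i.e. ~1.1811160064e+10
}
```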
What did you see instead?
The values of the gauges are instead 5.3658783744e+10 and 3.6411342848e+10:
# HELP node_filesystem_avail_bytes Filesystem space available to non-root users in bytes.
# TYPE node_filesystem_avail_bytes gauge
node_filesystem_avail_bytes{device="/dev/nvme0n1p1",fstype="xfs",mountpoint="/"} 3.6411342848e+10
node_filesystem_avail_bytes{device="rootfs",fstype="rootfs",mountpoint="/"} 3.6411342848e+10
# HELP node_filesystem_device_error Whether an error occurred while getting statistics for the given device.
# TYPE node_filesystem_device_error gauge
node_filesystem_device_error{device="/dev/nvme0n1p1",fstype="xfs",mountpoint="/"} 0
node_filesystem_device_error{device="rootfs",fstype="rootfs",mountpoint="/"} 0
# HELP node_filesystem_size_bytes Filesystem size in bytes.
# TYPE node_filesystem_size_bytes gauge
node_filesystem_size_bytes{device="/dev/nvme0n1p1",fstype="xfs",mountpoint="/"} 5.3658783744e+10
node_filesystem_size_bytes{device="rootfs",fstype="rootfs",mountpoint="/"} 5.3658783744e+10
Miscellaneous
I should also note that our worker nodes have several other block device mappings, one of which is a 50 GiB volume. We do a bit of funky setup on the underlying nodes, but I believe the reported values match those of the /dev/mapper/crypt1 filesystem mounted on /vol/crypt1.
I dug through node_exporter's source code a bit and found that it uses golang.org/x/sys/unix's Statfs function (I think?) to collect the values for both metrics. I haven't yet dug through the function's source code, but could that possibly be the culprit?
Also, I see that there's an XFS-specific collector that reports XFS runtime stats. I'm not quite sure what each of those metrics represents, but is it possible to use them to determine the size of, and available space on, the /dev/nvme0n1p1 XFS filesystem?
That seems to have done the trick! I can't believe I missed that. I thought I had accounted for mounting the host filesystem by mounting /proc and /sys to /host/proc and /host/sys, respectively, but apparently not. Thanks for the fast feedback!
Note: this seems similar to #1675, #1505, and #1339, but each of those issues is closed.
Host operating system: output of uname -a
node_exporter version: output of node_exporter --version
node_exporter command line flags
Are you running node_exporter in Docker?
Yes. We're running node_exporter in Docker / Kubernetes using DaemonSets:
What did you do that produced an error?
N/A
What did you expect to see?