Metric was collected before with the same name and label values #2805
Comments
Can you provide your /etc/fstab and /proc/mounts? |
/proc/mounts
|
@discordianfish Can you please help with this? |
@gnanasalten Can you provide your textfile in /var/lib/node_exporter/? |
node_fstab_mount_status{filesystem="/"} 1 |
Different versions of the fstab collection plugin may be in use. @gnanasalten Can you provide your textfile script? If you use the fstab-check.sh script, the mountpoint will appear as a label.
|
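For reference, a minimal sketch of what such a textfile script could look like. The fstab-check.sh script itself is not shown in this thread, so only the metric name and the filesystem label are taken from the sample above; the output path, the mounted/not-mounted semantics, and the use of mountpoint(1) are assumptions. The important property is that each metric name/label combination is written exactly once, which is what the exporter is complaining about in this issue.

#!/usr/bin/env bash
# Hypothetical fstab mount-status script for the textfile collector.
# Emits one node_fstab_mount_status sample per unique fstab mount point,
# so the same name/label combination is never produced twice.
set -euo pipefail

out=/var/lib/node_exporter/fstab_mount_status.prom

{
  # Skip comment lines, take the mount-point column, and de-duplicate it.
  awk '$1 !~ /^#/ && $2 ~ /^\// {print $2}' /etc/fstab | sort -u |
  while read -r mp; do
    if mountpoint -q "$mp"; then
      echo "node_fstab_mount_status{filesystem=\"$mp\"} 1"
    else
      echo "node_fstab_mount_status{filesystem=\"$mp\"} 0"
    fi
  done
} > "$out.tmp" && mv "$out.tmp" "$out"

Writing to a temporary file and renaming it also keeps the textfile collector from reading a half-written file.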
node_exporter shouldn't fail that loudly when two mount points have the same path. That is a perfectly valid thing to do on Linux. |
@SuperSandro2000 It should not, but I don't know if this is what is going on here. |
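If the duplicate-mount-point theory is worth checking, one way to do it on the affected host is to look for mount points that appear more than once in /proc/mounts (this only confirms the condition; it is not tied to any particular collector):

# Print any mount point that occurs more than once in /proc/mounts.
awk '{print $2}' /proc/mounts | sort | uniq -d

An empty output means every mount point is unique and the duplicates must be coming from somewhere else, such as a textfile script.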
A similar problem happened to me. node_exporter starts and issues the following error:
|
And no such directory exists on my system
|
Hrm |
I am also facing the same issue, running node_exporter as a Docker container with the following command:
docker run -d --net="host" --pid="host" -v "/:/host:ro,rslave" -v "/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:ro" quay.io/prometheus/node-exporter:latest --path.rootfs=/host --collector.systemd --collector.tcpstat --collector.meminfo_numa
Error:
|
This could be because of a time difference.
On Wed, 7 Aug 2024, 23:15, Chandra M. wrote:
I am also facing the same issue, running node_exporter as a Docker container using the command:
docker run -d --net="host" --pid="host" -v "/:/host:ro,rslave" -v "/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:ro" quay.io/prometheus/node-exporter:latest --path.rootfs=/host --collector.systemd --collector.tcpstat --collector.meminfo_numa
Error:
ts=2024-08-07T17:29:22.159Z caller=stdlib.go:105 level=error msg="error gathering metrics: 7 error(s) occurred:
* [from Gatherer #2] collected metric \"node_filesystem_device_error\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:0}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_readonly\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:0}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_size_bytes\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:9.68421376e+08}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_free_bytes\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:9.68421376e+08}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_avail_bytes\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:9.68421376e+08}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_files\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:236431}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_files_free\" { label:{name:\"device\" value:\"tmpfs\"} label:{name:\"device_error\" value:\"\"} label:{name:\"fstype\" value:\"tmpfs\"} label:{name:\"mountpoint\" value:\"/tmp\"} gauge:{value:236430}} was collected before with the same name and label values"
|
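One possible workaround for the duplicated /tmp series in a setup like the one quoted above (not a fix for whatever is mounting /tmp twice) is to exclude that mount point from the filesystem collector with --collector.filesystem.mount-points-exclude, which is available in recent node_exporter versions. The exact pattern below is only an example, and note that overriding this flag replaces its default exclude list:

# Same container as above, with /tmp excluded from the filesystem collector.
docker run -d --net="host" --pid="host" \
  -v "/:/host:ro,rslave" \
  -v "/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:ro" \
  quay.io/prometheus/node-exporter:latest \
  --path.rootfs=/host \
  --collector.systemd --collector.tcpstat --collector.meminfo_numa \
  --collector.filesystem.mount-points-exclude="^/(dev|proc|run|sys|tmp)($|/)"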
@gnanasalten The time is the same on both the host machine and the Docker container. Or are you pointing to some other time difference? |
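For anyone who wants to rule out clock skew between the host and the container, a quick check is the following (the container name node-exporter is just a placeholder):

# Compare host and container clocks; both lines should print (almost) the same UTC time.
date -u && docker exec node-exporter date -u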
Host operating system: output of uname -a
Linux dc2cpoenrvmd534 3.10.0-1160.66.1.el7.x86_64 #1 SMP Wed May 18 16:02:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
node_exporter version: output of node_exporter --version
node_exporter, version 1.5.0 (branch: HEAD, revision: 1b48970)
build user: root@6e7732a7b81b
build date: 20221129-18:59:09
go version: go1.19.3
platform: linux/amd64
node_exporter command line flags
/usr/local/bin/node_exporter --collector.systemd --collector.sockstat --collector.filefd --collector.textfile.directory=/var/lib/node_exporter/
node_exporter log output
Sep 15 02:57:37 xxxxxxxxx node_exporter: ts=2023-09-15T02:57:37.684Z caller=stdlib.go:105 level=error msg="error gathering metrics: 17 error(s) occurred:
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/boot" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/boot" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/log" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/log/audit" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/boot" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/home" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/opt" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/tmp" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/tmp" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/dev/shm" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/home" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/log" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/tmp" > untyped:<value:1 > } was collected before with the same name and label values
* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/log/audit" > untyped:<value:1 > } was collected before with the same name and label values"
Are you running node_exporter in Docker?
No
What did you do that produced an error?
Scrape from prometheus
What did you expect to see?
No error
What did you see instead?
The duplicate-metric errors shown in the log output above.
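Since the failing metric in this report is node_fstab_mount_status and the flags point the textfile collector at /var/lib/node_exporter/, a quick way to confirm that the duplicates originate there rather than in node_exporter itself is to look for repeated sample lines across the .prom files (this assumes the files sit directly in that directory):

# Any line printed here is a series that appears more than once across the textfile directory,
# which is exactly what triggers "was collected before with the same name and label values".
grep -h '^node_fstab_mount_status' /var/lib/node_exporter/*.prom | sort | uniq -d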