t5000-valgrind test fails on Jetson Nano #3808

Closed
javawolfpack opened this issue Aug 1, 2021 · 5 comments · Fixed by #3809

Comments

@javawolfpack

So I have flux-security & flux-core building and installing fine on the Jetson Nano 4GB model B01. I've tried an official Jetson Ubuntu image (18.04) and a custom image now running Ubuntu 20.04.2 LTS.

Using the manual verbose run method mentioned in #3093, I get the following output. This was the only error I got the first time I ran make check; however, on subsequent runs the check hangs on the test python/t0007-watchers, and I'm unsure what's causing that or how to run that one manually.

$ flux ./t5000-valgrind.t -d -v
sharness: loading extensions from /home/user/flux-core/t/sharness.d/01-setup.sh
sharness: loading extensions from /home/user/flux-core/t/sharness.d/flux-sharness.sh
expecting success:
	run_timeout 300 \
	flux start -s ${VALGRIND_NBROKERS} \
		--test-exit-timeout=120 \
		--wrap=libtool,e,${VALGRIND} \
		--wrap=--tool=memcheck \
		--wrap=--leak-check=full \
		--wrap=--gen-suppressions=all \
		--wrap=--trace-children=no \
		--wrap=--child-silent-after-fork=yes \
		--wrap=--num-callers=30 \
		--wrap=--leak-resolution=med \
		--wrap=--error-exitcode=1 \
		--wrap=--suppressions=$VALGRIND_SUPPRESSIONS \
		 ${VALGRIND_WORKLOAD}

==1705646== Memcheck, a memory error detector
==1705646== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==1705646== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==1705646== Command: /home/user/flux-core/src/broker/.libs/flux-broker --setattr=rundir=/tmp/flux-kU6Va1
==1705646==
==1705645== Memcheck, a memory error detector
==1705645== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==1705645== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==1705645== Command: /home/user/flux-core/src/broker/.libs/flux-broker --setattr=rundir=/tmp/flux-kU6Va1 /home/user/flux-core/t/valgrind/valgrind-workload.sh
==1705645==
==1705645== Syscall param epoll_ctl(event) points to uninitialised byte(s)
==1705645==    at 0x4BDFE38: epoll_ctl (syscall-template.S:78)
==1705645==    by 0x48B37EF: epoll_modify (ev_epoll.c:96)
==1705645==    by 0x48B4F57: fd_reify (ev.c:2166)
==1705645==    by 0x48B4F57: ev_run (ev.c:3677)
==1705645==    by 0x48B4F57: ev_run (ev.c:3623)
==1705645==    by 0x48824FF: flux_reactor_run (reactor.c:126)
==1705645==    by 0x1113BF: main (broker.c:449)
==1705645==  Address 0x1ffefff22c is on thread 1's stack
==1705645==  in frame #1, created by epoll_modify (ev_epoll.c:72)
==1705645==
{
   <insert_a_suppression_name_here>
   Memcheck:Param
   epoll_ctl(event)
   fun:epoll_ctl
   fun:epoll_modify
   fun:fd_reify
   fun:ev_run
   fun:ev_run
   fun:flux_reactor_run
   fun:main
}
==1705646== Syscall param epoll_ctl(event) points to uninitialised byte(s)
==1705646==    at 0x4BDFE38: epoll_ctl (syscall-template.S:78)
==1705646==    by 0x48B37EF: epoll_modify (ev_epoll.c:96)
==1705646==    by 0x48B4F57: fd_reify (ev.c:2166)
==1705646==    by 0x48B4F57: ev_run (ev.c:3677)
==1705646==    by 0x48B4F57: ev_run (ev.c:3623)
==1705646==    by 0x48824FF: flux_reactor_run (reactor.c:126)
==1705646==    by 0x1113BF: main (broker.c:449)
==1705646==  Address 0x1ffefff26c is on thread 1's stack
==1705646==  in frame #1, created by epoll_modify (ev_epoll.c:72)
==1705646==
{
   <insert_a_suppression_name_here>
   Memcheck:Param
   epoll_ctl(event)
   fun:epoll_ctl
   fun:epoll_modify
   fun:fd_reify
   fun:ev_run
   fun:ev_run
   fun:flux_reactor_run
   fun:main
}
FLUX_URI=local:///tmp/flux-kU6Va1/local-0
Running job
f9fiDUJj submitted
f9yunexF submitted
fAMdx8VM submitted
fAsiuZBD submitted
fBNXjbmq submitted
fBrxKq9H submitted
fCdgnAJj submitted
fDPGX3nf submitted
fED7qsLB submitted
fEwrfAWj submitted
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
f9fiDUJj complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
f9yunexF complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
fAMdx8VM complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
fAsiuZBD complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
fBNXjbmq complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
fBrxKq9H complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
fCdgnAJj complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
fDPGX3nf complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
fED7qsLB complete
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmn
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmno
fEwrfAWj complete
Running job-cancel
++ flux jobspec srun sleep 60
++ flux job submit
+ id=fKQ6p4cX
+ flux job wait-event fKQ6p4cX start
1627835184.583431 start
+ flux job cancel fKQ6p4cX
+ flux job wait-event fKQ6p4cX clean
1627835185.672612 clean
Running job-info
++ flux job submit
++ flux jobspec srun -t 1 -n 1 /bin/true
+ id=fLcvwkW3
+ flux job attach fLcvwkW3
+ flux job info fLcvwkW3 eventlog jobspec R
+ flux job list -A
{"id": 746888101888, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835186.3286908, "state": 64, "name": "true", "ntasks": 1, "nnodes": 1, "ranks": "0", "nodelist": "nano1", "expiration": 1627835246.6190383, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835186.3286908, "t_run": 1627835186.9638536, "t_cleanup": 1627835188.3977354, "t_inactive": 1627835188.9547589, "annotations": {"sched": {"resource_summary": "rank0/core0"}}}
{"id": 700398436352, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835183.5584121, "state": 64, "name": "sleep", "ntasks": 1, "nnodes": 1, "ranks": "0", "nodelist": "nano1", "expiration": 0.0, "success": false, "exception_occurred": true, "exception_severity": 0, "exception_type": "cancel", "exception_note": "", "result": 4, "waitstatus": 15, "t_depend": 1627835183.5584121, "t_run": 1627835184.1314242, "t_cleanup": 1627835184.9496472, "t_inactive": 1627835185.6726122, "annotations": {"sched": {"resource_summary": "rank0/core0"}}}
{"id": 530898223104, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835173.4553568, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "1", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835173.4553568, "t_run": 1627835175.2928183, "t_cleanup": 1627835179.3111057, "t_inactive": 1627835180.1624877, "annotations": {"sched": {"resource_summary": "rank1/core1"}}}
{"id": 502846717952, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835171.7833271, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "0", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835171.7833271, "t_run": 1627835173.5692499, "t_cleanup": 1627835179.2315712, "t_inactive": 1627835180.0472727, "annotations": {"sched": {"resource_summary": "rank0/core0"}}}
{"id": 471439769600, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835169.9108646, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "1", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835169.9108646, "t_run": 1627835171.6699777, "t_cleanup": 1627835177.0811205, "t_inactive": 1627835178.2140799, "annotations": {"sched": {"resource_summary": "rank1/core0"}}}
{"id": 442834616320, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835168.2066441, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "0", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835168.2066441, "t_run": 1627835170.143157, "t_cleanup": 1627835176.908953, "t_inactive": 1627835177.9105332, "annotations": {"sched": {"resource_summary": "rank0/core3"}}}
{"id": 413474488320, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835166.456645, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "1", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835166.456645, "t_run": 1627835168.5699501, "t_cleanup": 1627835174.508873, "t_inactive": 1627835176.4670329, "annotations": {"sched": {"resource_summary": "rank1/core2"}}}
{"id": 394818224128, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835165.3436477, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "0", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835165.3436477, "t_run": 1627835166.6062496, "t_cleanup": 1627835173.9754188, "t_inactive": 1627835175.5564713, "annotations": {"sched": {"resource_summary": "rank0/core2"}}}
{"id": 375910301696, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835164.2168148, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "1", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835164.2168148, "t_run": 1627835165.6057014, "t_cleanup": 1627835171.3448763, "t_inactive": 1627835173.2703073, "annotations": {"sched": {"resource_summary": "rank1/core1"}}}
{"id": 356163518464, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835163.0402932, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "0", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835163.0402932, "t_run": 1627835164.3542206, "t_cleanup": 1627835170.6454177, "t_inactive": 1627835172.3150239, "annotations": {"sched": {"resource_summary": "rank0/core1"}}}
{"id": 341902884864, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835162.1900585, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "1", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835162.1900585, "t_run": 1627835163.2179701, "t_cleanup": 1627835168.3095942, "t_inactive": 1627835170.393688, "annotations": {"sched": {"resource_summary": "rank1/core0"}}}
{"id": 329957507072, "userid": 1000, "urgency": 16, "priority": 16, "t_submit": 1627835161.4872923, "state": 64, "name": "lptest", "ntasks": 1, "nnodes": 1, "ranks": "0", "nodelist": "nano1", "expiration": 0.0, "success": true, "exception_occurred": false, "result": 1, "waitstatus": 0, "t_depend": 1627835161.4872923, "t_run": 1627835162.4698594, "t_cleanup": 1627835168.0936608, "t_inactive": 1627835169.7731926, "annotations": {"sched": {"resource_summary": "rank0/core0"}}}
Running job-wait
++ flux mini submit --flags waitable /bin/true
+ id=fNBfynvB
+ flux job wait fNBfynvB
++ flux mini submit --flags waitable /bin/true
+ id=fPHjEm8K
+ flux job wait-event fPHjEm8K clean
1627835194.529114 clean
==1705646==
==1705646== HEAP SUMMARY:
==1705646==     in use at exit: 238,405 bytes in 3,291 blocks
==1705646==   total heap usage: 96,363 allocs, 93,072 frees, 2,901,280,901 bytes allocated
==1705646==
==1705646== LEAK SUMMARY:
==1705646==    definitely lost: 0 bytes in 0 blocks
==1705646==    indirectly lost: 0 bytes in 0 blocks
==1705646==      possibly lost: 0 bytes in 0 blocks
==1705646==    still reachable: 238,229 bytes in 3,289 blocks
==1705646==         suppressed: 176 bytes in 2 blocks
==1705646== Reachable blocks (those to which a pointer was found) are not shown.
==1705646== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==1705646==
==1705646== Use --track-origins=yes to see where uninitialised values come from
==1705646== For lists of detected and suppressed errors, rerun with: -s
==1705646== ERROR SUMMARY: 438 errors from 1 contexts (suppressed: 1 from 1)
flux-start: 1 (pid 1705646) exited with rc=1
==1705645==
==1705645== HEAP SUMMARY:
==1705645==     in use at exit: 251,200 bytes in 3,325 blocks
==1705645==   total heap usage: 892,601 allocs, 889,276 frees, 5,678,169,955 bytes allocated
==1705645==
==1705645== LEAK SUMMARY:
==1705645==    definitely lost: 0 bytes in 0 blocks
==1705645==    indirectly lost: 0 bytes in 0 blocks
==1705645==      possibly lost: 0 bytes in 0 blocks
==1705645==    still reachable: 251,024 bytes in 3,323 blocks
==1705645==         suppressed: 176 bytes in 2 blocks
==1705645== Reachable blocks (those to which a pointer was found) are not shown.
==1705645== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==1705645==
==1705645== Use --track-origins=yes to see where uninitialised values come from
==1705645== For lists of detected and suppressed errors, rerun with: -s
==1705645== ERROR SUMMARY: 3146 errors from 1 contexts (suppressed: 1 from 1)
flux-start: 0 (pid 1705645) exited with rc=1
not ok 1 - valgrind reports no new errors on 2 broker run
#
#		run_timeout 300 \
#		flux start -s ${VALGRIND_NBROKERS} \
#			--test-exit-timeout=120 \
#			--wrap=libtool,e,${VALGRIND} \
#			--wrap=--tool=memcheck \
#			--wrap=--leak-check=full \
#			--wrap=--gen-suppressions=all \
#			--wrap=--trace-children=no \
#			--wrap=--child-silent-after-fork=yes \
#			--wrap=--num-callers=30 \
#			--wrap=--leak-resolution=med \
#			--wrap=--error-exitcode=1 \
#			--wrap=--suppressions=$VALGRIND_SUPPRESSIONS \
#			 ${VALGRIND_WORKLOAD}
#

# failed 1 among 1 test(s)
1..1

A few system details in case they're useful:

Linux version 4.9.201-tegra (buildbrain@mobile-u64-5294-d8000) (gcc version 7.3.1 20180425 [linaro-7.3-2018.05 revision d29120a424ecfbc167ef90065c0eeb7f91977701] (Linaro GCC 7.3-2018.05) ) #1 SMP PREEMPT Fri Feb 19 08:40:32 PST 2021

Python 3.8.10
Valgrind 3.15.0
gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
GNU Make 4.2.1 Built for aarch64-unknown-linux-gnu
Flux-Security v0.4.0
Flux-Core v0.28.0

@garlick (Member) commented Aug 1, 2021

Ah, thanks for reporting this, Bryan. Since the failure is down in libev (which is vendored in flux-core), we probably just need to add the recommended suppression. It's not the first one for libev. Could you verify that this shuts up the failure?

diff --git a/t/valgrind/valgrind.supp b/t/valgrind/valgrind.supp
index 00909e353..d963ab408 100644
--- a/t/valgrind/valgrind.supp
+++ b/t/valgrind/valgrind.supp
@@ -123,3 +123,13 @@
    fun:hwloc_topology_load
    ...
 }
+{
+   <issue_3808>
+   Memcheck:Param
+   epoll_ctl(event)
+   fun:epoll_ctl
+   fun:epoll_modify
+   fun:fd_reify
+   fun:ev_run
+   ...
+}
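
An equivalent way to try the suppression, without applying a diff, is to append the same stanza to the suppressions file by hand. This is only a rough sketch, run from the top of the flux-core source tree; the heredoc is just one way to do it, and the stanza is identical to the one added by the patch above:

$ cat >> t/valgrind/valgrind.supp <<'EOF'
{
   <issue_3808>
   Memcheck:Param
   epoll_ctl(event)
   fun:epoll_ctl
   fun:epoll_modify
   fun:fd_reify
   fun:ev_run
   ...
}
EOF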

On the other failure, if you have time, please open another bug. The way to run the python tests standalone is e.g.

$ cd t
$ ../src/cmd/flux python python/t0007-watchers.py
TAP version 13
ok 1 __main__.TestFdWatcher.test_fd_watcher
ok 2 __main__.TestFdWatcher.test_fd_watcher_exception
ok 3 __main__.TestSignal.test_s0_signal_watcher_add
ok 4 __main__.TestSignal.test_s1_signal_watcher_remove
ok 5 __main__.TestSignal.test_signal_watcher
ok 6 __main__.TestSignal.test_signal_watcher_exception
ok 7 __main__.TestSignal.test_signal_watcher_invalid
ok 8 __main__.TestTimer.test_msg_watcher_bytes
ok 9 __main__.TestTimer.test_msg_watcher_unicode
ok 10 __main__.TestTimer.test_s1_0_timer_add
ok 11 __main__.TestTimer.test_s1_1_timer_remove
ok 12 __main__.TestTimer.test_timer_add_negative
ok 13 __main__.TestTimer.test_timer_callback_exception
ok 14 __main__.TestTimer.test_timer_with_reactor
1..14

@javawolfpack (Author)

> Ah, thanks for reporting this, Bryan. Since the failure is down in libev (which is vendored in flux-core), we probably just need to add the recommended suppression. It's not the first one for libev. Could you verify that this shuts up the failure?
>
> diff --git a/t/valgrind/valgrind.supp b/t/valgrind/valgrind.supp

I get this when running that command:

$ diff --git a/t/valgrind/valgrind.supp b/t/valgrind/valgrind.supp
diff: unrecognized option '--git'
diff: Try 'diff --help' for more information

Everything seems to indicate that I should run the following after committing the state of my code, and I did that:

$ git diff t/valgrind/valgrind.supp
$

But it yields no output. Is the suppression only in versions beyond v0.28.0 of flux-core?

@javawolfpack (Author)

I'm going to re-download and build a fresh copy of v0.28.0 and see if the tests hang again before submitting that ticket. Running the python test manually worked fine, and I got that same output.

@garlick (Member) commented Aug 1, 2021

Oops, sorry, that's a patch: you can cut and paste it into a file, say supp.diff, then run

$ patch -p1 <supp.diff

from the top level of the flux-core source tree.
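
End to end, the sequence would look roughly like this (supp.diff is just whatever name you save the patch under, and the test is re-run the same way as in the original report):

$ patch -p1 <supp.diff            # apply the suppression patch from the top of the source tree
$ cd t
$ flux ./t5000-valgrind.t -d -v   # re-run the failing test with debug/verbose output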

@javawolfpack (Author)

Patch applied; the output now shows this:

...
make[2]: Nothing to be done for 't5000-valgrind.t'.
...
PASS: t5000-valgrind.t 1 - valgrind reports no new errors on 2 broker run
...

So I guess that's a success in squashing the failure for that test. Thanks; I'll let you know if it still hangs again and will submit a new ticket if it does.
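
As an aside, since the suite appears to be driven by Automake's test harness (the PASS: lines above come from its TAP driver), it looks like a single test can be re-run on its own rather than via a full make check; this is just the generic Automake mechanism, not something verified against flux-core's Makefiles:

$ cd t
$ make check TESTS='t5000-valgrind.t'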

garlick added a commit to garlick/flux-core that referenced this issue Aug 1, 2021
Problem: a new valgrind test failure was encountered on aarch64,
Ubuntu 20.04.2 LTS and also the official Jetson Ubuntu 18.04:

==1705645== Syscall param epoll_ctl(event) points to uninitialised byte(s)
==1705645==    at 0x4BDFE38: epoll_ctl (syscall-template.S:78)
==1705645==    by 0x48B37EF: epoll_modify (ev_epoll.c:96)
==1705645==    by 0x48B4F57: fd_reify (ev.c:2166)
==1705645==    by 0x48B4F57: ev_run (ev.c:3677)
==1705645==    by 0x48B4F57: ev_run (ev.c:3623)
==1705645==    by 0x48824FF: flux_reactor_run (reactor.c:126)
==1705645==    by 0x1113BF: main (broker.c:449)
==1705645==  Address 0x1ffefff22c is on thread 1's stack
==1705645==  in frame #1, created by epoll_modify (ev_epoll.c:72)

Since this is apparently internal to libev, add a suppression.

Fixes flux-framework#3808
chu11 pushed a commit to chu11/flux-core that referenced this issue Sep 28, 2021