splitting listpees: closing tests error 1 #1

Closed

vincenzopalazzo opened this issue Aug 25, 2022 · 1 comment
Labels: bug (Something isn't working)

@vincenzopalazzo (Owner)

Running the following command:

```
make -j$(nproc) check DEVELOPER=1 EXPERIMENTAL_DUAL_FUND=0 VALGRIND=0 PYTEST_OPTS="-k test_segwit_shutdown_script --log-cli-level=INFO"
```

Receiving the following error:

```
lightningd-36 2022-08-25T18:12:48.045Z INFO    0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#1: Peer transient failure in CHANNELD_AWAITING_LOCKIN: channeld: Owning subdaemon channeld died (62208)
lightningd-33 2022-08-25T18:12:48.111Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-27 2022-08-25T18:12:48.155Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-34 2022-08-25T18:12:48.159Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-26 2022-08-25T18:12:48.175Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-21 2022-08-25T18:12:48.351Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-2 2022-08-25T18:12:48.404Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-3 2022-08-25T18:12:48.408Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-22 2022-08-25T18:12:48.411Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-7 2022-08-25T18:12:48.859Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-9 2022-08-25T18:12:48.919Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-8 2022-08-25T18:12:48.959Z DEBUG   gossipd: seeker: no peers, waiting
lightningd-1 2022-08-25T18:12:49.045Z DEBUG   02cea6e033525fe7704dd66c132ef2148b3ce6fcf853a4996a36096b3657749f62-channeld-chan#3: Status closed, but not exited. Killing
lightningd-1 2022-08-25T18:12:49.045Z INFO    02cea6e033525fe7704dd66c132ef2148b3ce6fcf853a4996a36096b3657749f62-chan#3: Peer transient failure in CHANNELD_AWAITING_LOCKIN: channeld: Owning subdaemon channeld died (9)
lightningd-1 2022-08-25T18:12:49.048Z DEBUG   plugin-spenderp: mfc 34: signpsbt done.
lightningd-1 2022-08-25T18:12:49.048Z DEBUG   plugin-spenderp: mfc 34: sendpsbt.
lightningd-1 2022-08-25T18:12:49.054Z DEBUG   plugin-autoclean: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.054Z DEBUG   plugin-chanbackup: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.054Z DEBUG   plugin-bcli: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.054Z DEBUG   plugin-commando: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.055Z DEBUG   plugin-funder: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.055Z DEBUG   plugin-topology: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.055Z DEBUG   plugin-keysend: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.055Z DEBUG   plugin-spenderp: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.055Z DEBUG   lightningd: Command returned result after jcon close
lightningd-1 2022-08-25T18:12:49.055Z DEBUG   plugin-offers: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.055Z DEBUG   plugin-txprepare: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.056Z DEBUG   plugin-pay: Killing plugin: exited during normal operation
lightningd-1 2022-08-25T18:12:49.056Z DEBUG   lightningd: io_break: destroy_plugin
----------------------------------------------------------- Captured stderr call -----------------------------------------------------------
bookkeeper: FATAL SIGNAL 11 (version v0.12.0-17-g149cf25)
0x5590f0881a2a send_backtrace
	common/daemon.c:33
0x5590f0881ad4 crashdump
	common/daemon.c:46
0x7f4a8400908f ???
	/build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0
0x5590f085a1cc new_missed_channel_account
	plugins/bkpr/bookkeeper.c:637
0x5590f085adec listpeers_multi_done
	plugins/bkpr/bookkeeper.c:880
0x5590f0868845 handle_rpc_reply
	plugins/libplugin.c:695
0x5590f0869325 rpc_read_response_one
	plugins/libplugin.c:868
0x5590f086945b rpc_conn_read_response
	plugins/libplugin.c:888
0x5590f090e479 next_plan
	ccan/ccan/io/io.c:59
0x5590f090f081 do_plan
	ccan/ccan/io/io.c:407
0x5590f090f0c3 io_ready
	ccan/ccan/io/io.c:417
0x5590f09113b6 io_loop
	ccan/ccan/io/poll.c:453
0x5590f086c107 plugin_main
	plugins/libplugin.c:1710
0x5590f085d9cf main
	plugins/bkpr/bookkeeper.c:1869
0x7f4a83fea082 __libc_start_main
	../csu/libc-start.c:308
0x5590f085634d ???
	???:0
0xffffffffffffffff ???
	???:0
bookkeeper: FATAL SIGNAL 11 (version v0.12.0-17-g149cf25)
0x558ed3434a2a send_backtrace
	common/daemon.c:33
0x558ed3434ad4 crashdump
	common/daemon.c:46
0x7fe3e6fdc08f ???
	/build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0
0x558ed340d1cc new_missed_channel_account
	plugins/bkpr/bookkeeper.c:637
0x558ed340ddec listpeers_multi_done
	plugins/bkpr/bookkeeper.c:880
0x558ed341b845 handle_rpc_reply
	plugins/libplugin.c:695
0x558ed341c325 rpc_read_response_one
	plugins/libplugin.c:868
0x558ed341c45b rpc_conn_read_response
	plugins/libplugin.c:888
0x558ed34c1479 next_plan
	ccan/ccan/io/io.c:59
0x558ed34c2081 do_plan
	ccan/ccan/io/io.c:407
0x558ed34c20c3 io_ready
	ccan/ccan/io/io.c:417
0x558ed34c43b6 io_loop
	ccan/ccan/io/poll.c:453
0x558ed341f107 plugin_main
	plugins/libplugin.c:1710
0x558ed34109cf main
	plugins/bkpr/bookkeeper.c:1869
0x7fe3e6fbd082 __libc_start_main
	../csu/libc-start.c:308
0x558ed340934d ???
	???:0
0xffffffffffffffff ???
	???:0
```

vincenzopalazzo changed the title from "splitting listpees: closing tests error" to "splitting listpees: closing tests error 1" on Aug 25, 2022
vincenzopalazzo added the bug label on Aug 25, 2022
@vincenzopalazzo (Owner, Author)

Fixed in 9cd6f10.
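
For context, the trace above is a NULL dereference inside `new_missed_channel_account` while walking a `listpeers` result. A minimal sketch of the defensive pattern such a fix applies (names and signatures here are illustrative, not the actual diff in 9cd6f10):

```
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for the bookkeeper's account lookup, which can
 * come back NULL for a channel it never recorded (the case that
 * crashed above). */
struct account { const char *name; };

static struct account *lookup_account(const char *chan_id)
{
	(void)chan_id;
	return NULL;	/* simulate the missing entry */
}

static void handle_missed_channel(const char *chan_id)
{
	struct account *acct = lookup_account(chan_id);
	if (!acct) {	/* bail out instead of dereferencing NULL */
		fprintf(stderr, "bkpr: no account for %s, skipping\n", chan_id);
		return;
	}
	printf("account: %s\n", acct->name);
}

int main(void)
{
	handle_missed_channel("103x2x0");
	return 0;
}
```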

vincenzopalazzo added a commit that referenced this issue Feb 13, 2023
This will fix a crash that I triggered on armv7. Looking inside the
coredump with gdb (after adding an assert that `n` must not be NULL),
I got the following stack trace:

```
(gdb) bt
#0  0x00000000 in ?? ()
#1  0x0043a038 in send_backtrace (why=0xbe9e3600 "FATAL SIGNAL 11") at common/daemon.c:36
#2  0x0043a0ec in crashdump (sig=11) at common/daemon.c:46
#3  <signal handler called>
#4  0x00406d04 in node_announcement (map=0x938ecc, nann_off=495146) at common/gossmap.c:586
#5  0x00406fec in map_catchup (map=0x938ecc, num_rejected=0xbe9e3a40) at common/gossmap.c:643
#6  0x004073a4 in load_gossip_store (map=0x938ecc, num_rejected=0xbe9e3a40) at common/gossmap.c:697
#7  0x00408244 in gossmap_load (ctx=0x0, filename=0x4e16b8 "gossip_store", num_channel_updates_rejected=0xbe9e3a40) at common/gossmap.c:976
#8  0x0041a548 in init (p=0x93831c, buf=0x9399d4 "\n\n{\"jsonrpc\":\"2.0\",\"id\":\"cln:init#25\",\"method\":\"init\",\"params\":{\"options\":{},\"configuration\":{\"lightning-dir\":\"/home/vincent/.lightning/testnet\",\"rpc-file\":\"lightning-rpc\",\"startup\":true,\"network\":\"te"..., config=0x939cdc) at plugins/topology.c:622
#9  0x0041e5d0 in handle_init (cmd=0x938934, buf=0x9399d4 "\n\n{\"jsonrpc\":\"2.0\",\"id\":\"cln:init#25\",\"method\":\"init\",\"params\":{\"options\":{},\"configuration\":{\"lightning-dir\":\"/home/vincent/.lightning/testnet\",\"rpc-file\":\"lightning-rpc\",\"startup\":true,\"network\":\"te"..., params=0x939c8c)
    at plugins/libplugin.c:1208
#10 0x0041fc04 in ld_command_handle (plugin=0x93831c, toks=0x939bec) at plugins/libplugin.c:1572
#11 0x00420050 in ld_read_json_one (plugin=0x93831c) at plugins/libplugin.c:1667
#12 0x004201bc in ld_read_json (conn=0x9391c4, plugin=0x93831c) at plugins/libplugin.c:1687
#13 0x004cb82c in next_plan (conn=0x9391c4, plan=0x9391d8) at ccan/ccan/io/io.c:59
#14 0x004cc67c in do_plan (conn=0x9391c4, plan=0x9391d8, idle_on_epipe=false) at ccan/ccan/io/io.c:407
#15 0x004cc6dc in io_ready (conn=0x9391c4, pollflags=1) at ccan/ccan/io/io.c:417
#16 0x004cf8cc in io_loop (timers=0x9383c4, expired=0xbe9e3ce4) at ccan/ccan/io/poll.c:453
#17 0x00420af4 in plugin_main (argv=0xbe9e3eb4, init=0x41a46c <init>, restartability=PLUGIN_STATIC, init_rpc=true, features=0x0, commands=0x6167e8 <commands>, num_commands=4, notif_subs=0x0, num_notif_subs=0, hook_subs=0x0, num_hook_subs=0, notif_topics=0x0, num_notif_topics=0) at plugins/libplugin.c:1891
#18 0x0041a6f8 in main (argc=1, argv=0xbe9e3eb4) at plugins/topology.c:679
```

I do not know if this is the right solution, because I do not know
when it is valid to parse a node announcement for a node that is no
longer in the gossip map.

So, I hope this is at least useful to @rustyrussell

Changelog-Fixed: fixes `FATAL SIGNAL 11` on gossmap node announcement parsing.

Signed-off-by: Vincenzo Palazzo <[email protected]>
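
The guard this commit describes would look roughly as follows (an illustrative sketch, not the exact diff: `apply_node_announcement` is a made-up name, while `gossmap_find_node` is the real lookup from common/gossmap.h):

```
#include <stdbool.h>
#include <common/gossmap.h>
#include <common/node_id.h>

/* Sketch: a node_announcement in the store can reference a node that
 * has since been removed from the map; skip it rather than
 * dereference a NULL node pointer. */
static bool apply_node_announcement(struct gossmap *map,
				    const struct node_id *id)
{
	struct gossmap_node *n = gossmap_find_node(map, id);
	if (!n)
		return false;	/* stale record: node no longer in the map */
	/* ... parse the announcement and attach it to n ... */
	return true;
}
```
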
vincenzopalazzo pushed a commit that referenced this issue Mar 23, 2023
The issue is that common_setup() wasn't called by the fuzz target,
leaving secp256k1_ctx as NULL.

UBSan error:

$ UBSAN_OPTIONS="print_stacktrace=1:halt_on_error=1" \
    ./fuzz-channel_id crash-1575b41ef09e62e4c09c165e6dc037a110b113f2

INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 1153355603
INFO: Loaded 1 modules   (25915 inline 8-bit counters): 25915 [0x563bae7ac3a8, 0x563bae7b28e3),
INFO: Loaded 1 PC tables (25915 PCs): 25915 [0x563bae7b28e8,0x563bae817c98),
./fuzz-channel_id: Running 1 inputs 1 time(s) each.
Running: crash-1575b41ef09e62e4c09c165e6dc037a110b113f2
bitcoin/pubkey.c:22:33: runtime error: null pointer passed as argument 1, which is declared to never be null
external/libwally-core/src/secp256k1/include/secp256k1.h:373:3: note: nonnull attribute specified here
    #0 0x563bae41e3db in pubkey_from_der bitcoin/pubkey.c:19:7
    #1 0x563bae4205e0 in fromwire_pubkey bitcoin/pubkey.c:111:7
    #2 0x563bae46437c in run tests/fuzz/fuzz-channel_id.c:42:3
    #3 0x563bae2f6016 in LLVMFuzzerTestOneInput tests/fuzz/libfuzz.c:23:2
    #4 0x563bae20a450 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long)
    #5 0x563bae1f4c3f in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long)
    #6 0x563bae1fa6e6 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long))
    #7 0x563bae223052 in main (tests/fuzz/fuzz-channel_id+0x181052) (BuildId: f7f56e14ffc06df54ab732d79ea922e773de1f25)
    #8 0x7fa7fa113082 in __libc_start_main
    #9 0x563bae1efbdd in _start

SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior bitcoin/pubkey.c:22:33 in
```
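
The fix pattern is to run the common setup before any input executes. A sketch, assuming the `init()` entry point of the tests/fuzz/libfuzz.h harness and the real `common_setup()` from common/setup.h:

```
#include <common/setup.h>
#include <tests/fuzz/libfuzz.h>

/* Runs once before fuzzing starts: allocates global state, including
 * the secp256k1_ctx that pubkey_from_der() dereferences. */
void init(int *argc, char ***argv)
{
	common_setup("fuzzer");
}
```
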
vincenzopalazzo pushed a commit that referenced this issue Jun 7, 2023
Detected by UBSan:

```
$ UBSAN_OPTIONS=print_stacktrace=1 ./wallet/test/run-psbt_fixup

bitcoin/psbt.c:733:2: runtime error: applying zero offset to null pointer
    #0 0x53c829 in psbt_from_bytes lightning/bitcoin/psbt.c:733:2
    #1 0x5adcb0 in main lightning/wallet/test/run-psbt_fixup.c:174:10

SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior bitcoin/psbt.c:733:2
```
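
The undefined behaviour here is pointer arithmetic on NULL, which is UB even with a zero offset. A self-contained illustration of the guard pattern (not the actual psbt.c diff):

```
#include <stddef.h>
#include <stdio.h>

/* Even `p + 0` is undefined when p is NULL: guard before doing any
 * pointer arithmetic on a possibly-empty buffer. */
static const unsigned char *advance(const unsigned char *p, size_t off)
{
	if (!p)
		return NULL;
	return p + off;
}

int main(void)
{
	printf("%p\n", (const void *)advance(NULL, 0)); /* no UB */
	return 0;
}
```
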
vincenzopalazzo pushed a commit that referenced this issue Jun 7, 2023
The function is tiny and was only used in one location, and that one
location was leaking memory.

Detected by ASan:

```
==2637667==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 7 byte(s) in 1 object(s) allocated from:
    #0 0x4cd758 in __interceptor_strdup
    #1 0x64c70c in json_stream_log_suppress_for_cmd lightning/lightningd/jsonrpc.c:597:31
    #2 0x68a630 in json_getlog lightning/lightningd/log.c:974:2
    ...

SUMMARY: AddressSanitizer: 7 byte(s) leaked in 1 allocation(s).
```
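
Since lightningd allocates through tal, the idiomatic cure for this kind of leak is to tie the copy to an owner with a bounded lifetime rather than calling bare strdup(). A rough sketch (names illustrative, not the exact diff):

```
#include <ccan/tal/str/str.h>

/* The copy is freed automatically when `owner` is freed, so there is
 * no manual free() path to forget. */
static const char *record_suppressed_id(const tal_t *owner, const char *id)
{
	return tal_strdup(owner, id);
}
```
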
vincenzopalazzo pushed a commit that referenced this issue Jun 7, 2023
It is possible for db_column_bytes() to return 0 and for
db_column_blob() to return NULL even when db_column_is_null() returns
false. We need to short circuit in this case.

Detected by UBSan:

```
  db/bindings.c:479:12: runtime error: null pointer passed as argument 2, which is declared to never be null
  /usr/include/string.h:44:28: note: nonnull attribute specified here

  #0 0x95f117 in db_col_arr_ db/bindings.c:479:2
  #1 0x95ef85 in db_col_channel_type db/bindings.c:459:32
  #2 0x852c03 in wallet_stmt2channel wallet/wallet.c:1483:9
  #3 0x81f396 in wallet_channels_load_active wallet/wallet.c:1749:23
  #4 0x81f03d in wallet_init_channels wallet/wallet.c:1765:9
  #5 0x72f1f9 in load_channels_from_wallet lightningd/peer_control.c:2257:7
  #6 0x672856 in main lightningd/lightningd.c:1121:25
```
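
A minimal sketch of that short circuit (illustrative; the real change lives in db/bindings.c):

```
#include <string.h>

/* A column that is not NULL at the SQL level can still hand back
 * (NULL, 0), and memcpy's source argument is declared nonnull, so
 * copy only when there is actually something to copy. */
static void copy_col_bytes(void *dest, const void *blob, size_t len)
{
	if (len == 0 || blob == NULL)
		return;
	memcpy(dest, blob, len);
}
```
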
vincenzopalazzo pushed a commit that referenced this issue Jun 7, 2023
Fixes nullability errors detected by UBSan:

```
wire/fromwire.c:173:46: runtime error: null pointer passed as argument 1, which is declared to never be null
external/libwally-core/src/secp256k1/include/secp256k1.h:432:3: note: nonnull attribute specified here
    #0 0x65214a in fromwire_secp256k1_ecdsa_signature wire/fromwire.c:173:6
    #1 0x659500 in printwire_secp256k1_ecdsa_signature devtools/print_wire.c:331:1
    #2 0x646ba2 in printwire_channel_update wire/peer_printgen.c:1900:7
    #3 0x637182 in printpeer_wire_message wire/peer_printgen.c:128:11
    #4 0x65a097 in main devtools/decodemsg.c:85:10
```
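
The pattern fix is to check the cursor before handing bytes to libsecp256k1, whose parse functions are declared nonnull. A sketch under that assumption (`fromwire()` and `secp256k1_ecdsa_signature_parse_compact()` are real APIs; the wrapper name is invented):

```
#include <string.h>
#include <secp256k1.h>
#include <ccan/short_types/short_types.h>
#include <common/utils.h>	/* secp256k1_ctx */
#include <wire/wire.h>

/* fromwire() returns NULL once the buffer is exhausted; never feed
 * that straight into libsecp256k1. */
static void fromwire_sig_checked(const u8 **cursor, size_t *max,
				 secp256k1_ecdsa_signature *sig)
{
	const u8 *raw = fromwire(cursor, max, NULL, 64);
	if (!raw) {
		memset(sig, 0, sizeof(*sig));	/* leave a zeroed signature */
		return;
	}
	secp256k1_ecdsa_signature_parse_compact(secp256k1_ctx, sig, raw);
}
```
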
vincenzopalazzo pushed a commit that referenced this issue Jun 7, 2023
Memory leak detected by ASan:

```
==880002==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 32816 byte(s) in 1 object(s) allocated from:
    #0 0x5039e7 in malloc (lightningd/lightningd+0x5039e7)
    #1 0x7f2e8c203884 in __alloc_dir (/lib64/libc.so.6+0xd2884)
```
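
The trace points at opendir()'s internal buffer (glibc's __alloc_dir), which usually means a missing closedir() on some path. A minimal illustration:

```
#include <dirent.h>
#include <stdio.h>

int main(void)
{
	DIR *d = opendir(".");
	if (!d)
		return 1;

	struct dirent *e;
	while ((e = readdir(d)) != NULL)
		printf("%s\n", e->d_name);

	closedir(d);	/* releases the buffer opendir allocated */
	return 0;
}
```
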
vincenzopalazzo pushed a commit that referenced this issue Jun 13, 2023
Detected by ASan in test_hsmtool_generatehsm:

```
==58698==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 120 byte(s) in 1 object(s) allocated from:
    #0 0x4e6247 in malloc
    #1 0x7f078452d672 in getdelim

SUMMARY: AddressSanitizer: 120 byte(s) leaked in 1 allocation(s).
```
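
getdelim()/getline() allocate the line buffer with malloc, so the caller owns it and must free it on every path, including failures. A minimal illustration of the pattern:

```
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	char *line = NULL;
	size_t cap = 0;

	if (getline(&line, &cap, stdin) != -1)
		printf("read: %s", line);

	free(line);	/* needed even when getline() fails */
	return 0;
}
```
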
vincenzopalazzo pushed a commit that referenced this issue Aug 13, 2023
These show that we should clean up our notes.  Here's the result from test_hardmpp:

```
# we have computed a set of 1 flows with probability 0.328, fees 0msat and delay 23
# No MPP, so added 0msat shadow fee
# Shadow route on flow 0/1 added 0 block delay. now 5
# sendpay flow groupid=1, partid=1, delivering=1800000000msat, probability=0.328
# Update chan knowledge scid=103x2x0, dir=0: [0msat,1799999999msat]
# onion error WIRE_TEMPORARY_CHANNEL_FAILURE from node #1 103x2x0: failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
# we have computed a set of 2 flows with probability 0.115, fees 0msat and delay 23
# Shadow route on flow 0/2 added 0 block delay. now 5
# Shadow route on flow 1/2 added 0 block delay. now 5
# sendpay flow groupid=1, partid=3, delivering=500000000msat, probability=0.475
# sendpay flow groupid=1, partid=2, delivering=1300000000msat, probability=0.242
# Update chan knowledge scid=103x2x0, dir=0: [0msat,1299999999msat]
# onion error WIRE_TEMPORARY_CHANNEL_FAILURE from node #1 103x2x0: failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
# we have computed a set of 2 flows with probability 0.084, fees 0msat and delay 23
# Shadow route on flow 0/2 added 0 block delay. now 5
# Shadow route on flow 1/2 added 0 block delay. now 5
# sendpay flow groupid=1, partid=5, delivering=260000000msat, probability=0.467
# sendpay flow groupid=1, partid=4, delivering=1040000000msat, probability=0.179
# Update chan knowledge scid=103x2x0, dir=0: [0msat,1039999999msat]
# onion error WIRE_TEMPORARY_CHANNEL_FAILURE from node #1 103x2x0: failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
# we have computed a set of 2 flows with probability 0.052, fees 0msat and delay 23
# Shadow route on flow 0/2 added 0 block delay. now 5
# Shadow route on flow 1/2 added 0 block delay. now 5
# sendpay flow groupid=1, partid=7, delivering=120000000msat, probability=0.494
# sendpay flow groupid=1, partid=6, delivering=920000000msat, probability=0.105
```

Ideally it would look something like:

```
# Computed 1 flows, probability=0.328:
#  Flow 1: 103x2x0 1800000000msat fee=0msat probability=0.328 shadow=+0msat/0blocks
#  Flow 1: FAIL: TEMPORARY_CHANNEL_FAILURE for 103x2x0.
# Computed 2 flows, probability=0.115:
#  Flow 2: XXX->XXX 1300000000msat fee=XXX, probability=0.475 shadow=+0msat/0blocks
#  Flow 3: XXX->XXX 500000000msat fee=XXX, probability=0.475 shadow=+0msat/0blocks
#  Flow 2: FAIL: TEMPORARY_CHANNEL_FAILURE from node #1 103x2x0
# Computed 2 flows (3 total), probability=0.084, fee=0msat, delay=23
...
#  Flow 4: SUCCESS, 2 in progress should succeed soon.
```

Signed-off-by: Rusty Russell <[email protected]>
vincenzopalazzo pushed a commit that referenced this issue Aug 24, 2023
It now looks like (for test_hardmpp):

```
# we have computed a set of 1 flows with probability 0.328, fees 0msat and delay 23
#   Flow 1: amount=1800000000msat prob=0.328 fees=0msat delay=12 path=-103x2x0/1(min=max=4294967295msat)->-103x5x0/0->-103x3x0/1->
#   Flow 1: Failed at node #1 (WIRE_TEMPORARY_CHANNEL_FAILURE): failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
#   Flow 1: Failure of 1800000000msat for 103x5x0/0 capacity [0msat,3000000000msat] -> [0msat,1799999999msat]
# we have computed a set of 2 flows with probability 0.115, fees 0msat and delay 23
#   Flow 2: amount=500000000msat prob=0.475 fees=0msat delay=12 path=-103x6x0/0(min=max=4294967295msat)->-103x1x0/1->-103x4x0/1->
#   Flow 3: amount=1300000000msat prob=0.242 fees=0msat delay=12 path=-103x2x0/1(min=max=4294967295msat)->-103x5x0/0(max=1799999999msat)->-103x3x0/1->
#   Flow 3: Failed at node #1 (WIRE_TEMPORARY_CHANNEL_FAILURE): failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
#   Flow 3: Failure of 1300000000msat for 103x5x0/0 capacity [0msat,1799999999msat] -> [0msat,1299999999msat]
# we have computed a set of 2 flows with probability 0.084, fees 0msat and delay 23
#   Flow 4: amount=260000000msat prob=0.467 fees=0msat delay=12 path=-103x6x0/0(500000000msat in 1 htlcs,min=max=4294967295msat)->-103x1x0/1(500000000msat in 1 htlcs)->-103x4x0/1(500000000msat in 1 htlcs)->
#   Flow 5: amount=1040000000msat prob=0.179 fees=0msat delay=12 path=-103x2x0/1(min=max=4294967295msat)->-103x5x0/0(max=1299999999msat)->-103x3x0/1->
#   Flow 5: Failed at node #1 (WIRE_TEMPORARY_CHANNEL_FAILURE): failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
#   Flow 5: Failure of 1040000000msat for 103x5x0/0 capacity [0msat,1299999999msat] -> [0msat,1039999999msat]
# we have computed a set of 2 flows with probability 0.052, fees 0msat and delay 23
#   Flow 6: amount=120000000msat prob=0.494 fees=0msat delay=12 path=-103x6x0/0(760000000msat in 2 htlcs,min=max=4294967295msat)->-103x1x0/1(760000000msat in 2 htlcs)->-103x4x0/1(760000000msat in 2 htlcs)->
#   Flow 7: amount=920000000msat prob=0.105 fees=0msat delay=12 path=-103x2x0/1(min=max=4294967295msat)->-103x5x0/0(max=1039999999msat)->-103x3x0/1->
#   Flow 7: Success
```

Signed-off-by: Rusty Russell <[email protected]>
vincenzopalazzo pushed a commit that referenced this issue Nov 3, 2023
Adding an index means:

1. Add the new subsystem, and new updated_index field to the db, and
   create xxx_index_deleted/created/updated APIs.
2. Hook up these functions to the points they need to be called.
3. Add index, start and limit fields to the list command.
4. Add created_index and updated_index into the list command.

This does #1.

Signed-off-by: Rusty Russell <[email protected]>
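
As a generic sketch of the shape step 1 adds (names invented; the real helpers also persist the counter to the db and attach event details), each create/update/delete bumps one monotonic per-subsystem counter that the list commands can later filter on:

```
#include <stdint.h>
#include <stdio.h>

static uint64_t wait_index;	/* monotonic, per subsystem */

static uint64_t index_created(void) { return ++wait_index; }
static uint64_t index_updated(void) { return ++wait_index; }
static uint64_t index_deleted(void) { return ++wait_index; }

int main(void)
{
	printf("created -> %llu\n", (unsigned long long)index_created());
	printf("updated -> %llu\n", (unsigned long long)index_updated());
	return 0;
}
```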