fix: Add information about how many blocks to go until funding is confirmed #2405
Conversation
Force-pushed from 18da7bd to 5d363dc.
Force-pushed from 59f2cb9 to 758abf6.
Ok, I believe normally cdecker does the first review for new contributors, but I have already given some comments (hopefully better advice than the first one). Looks good 👍
Thank you again! Your advice is great :)
OK, this looks great! I'd usually say you should rebase and add a CHANGELOG.md message, but I'll do it in this case to avoid further delays :) Thanks for your patience!
Ack 7a02553
@rustyrussell Thanks for your suggestion :)
Adding the failures when `togo` and `funding_locked[LOCAL]` do not agree makes me wonder if that can't happen during a rescan.
channeld/channeld.c (outdated):

	}
	if (depth < peer->channel->minimum_depth) {
		if (peer->funding_locked[LOCAL] || peer->depth_togo == 0)
			status_failed(STATUS_FAIL_INTERNAL_ERROR,
This error made me stumble a bit. Are we sure this can't happen during restarts with rescans?
Hi @cdecker, I'm not sure if my reasoning is correct. Hope you don't mind my poor English.
In the original code, the channel is saved to the DB in this code block. But now, to prevent saving a channel whose funding is not yet locked, I added the condition `local_locked && !channel->scid`. So channels with unlocked funding won't be saved until depth >= minimum_depth.
The `struct peer` in channeld is always initialized from there. The peer in channeld is never saved to the DB (as far as I can tell, there is no function that saves this struct), and it depends only on the channel information in lightningd.
So I suppose that when we restart, we need to rebuild this peer struct (in channeld) from the channel (in lightningd) again (this assumes we don't save the peer struct in channeld).
There may be 4 situations:
- depth >= minimum_depth (we can get the topology from the DB), and the channel has been saved in the DB before: we can init the peer with `peer->funding_locked[LOCAL] == true` (we use whether `channel->scid` exists to init `funding_locked[LOCAL]`), and `peer->depth_togo == minimum_depth` (I directly set this value for peer->depth_togo). But after some minutes lightningd will tell us that depth >= minimum_depth, so we won't hit this error.
- depth >= minimum_depth, but we didn't save the channel in time: we init the peer with `peer->funding_locked[LOCAL] = false`, and we also init `peer->depth_togo = minimum_depth` directly.
- depth < minimum_depth, and we saved the channel in the DB: impossible! We only save the channel once funding is locked.
- depth < minimum_depth, and we didn't save the channel: we init the peer with `funding_locked[LOCAL] = false` and `peer->depth_togo = minimum_depth`.
So I think we won't hit this error on restart. What do you think?
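To make the restart reasoning above concrete, here is a minimal standalone sketch of the idea (not the actual channeld code; `struct channel_info`, `struct peer_state` and `init_peer_from_channel` are simplified stand-ins I made up): the local lock-in flag is rebuilt from whether the scid exists, and `depth_togo` simply starts at `minimum_depth` until lightningd reports a depth.

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for lightningd's struct channel and channeld's
 * struct peer -- these are not the real definitions. */
struct channel_info {
	bool has_scid;		/* short_channel_id already derived and stored? */
	unsigned minimum_depth;
};

struct peer_state {
	bool funding_locked_local;
	unsigned depth_togo;
};

/* On (re)start, channeld's peer state is rebuilt from lightningd's view:
 * funding_locked[LOCAL] is inferred from whether the scid exists, and
 * depth_togo starts at minimum_depth until lightningd reports a depth. */
static void init_peer_from_channel(struct peer_state *peer,
				   const struct channel_info *channel)
{
	peer->funding_locked_local = channel->has_scid;
	peer->depth_togo = channel->minimum_depth;
}

int main(void)
{
	/* Case 4 from the list: depth < minimum_depth, channel not yet saved. */
	struct channel_info channel = { .has_scid = false, .minimum_depth = 3 };
	struct peer_state peer;

	init_peer_from_channel(&peer, &channel);
	printf("locked=%d, depth_togo=%u\n",
	       peer.funding_locked_local, peer.depth_togo);
	return 0;
}
```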
Hi @cdecker, I think this error won't happen unless the chain reorganizes and minimum_depth is set too small.
The peer we use in channeld is really initialized from the `channel_init` message from lightningd, and we never save the peer to the DB directly. In other words, on every start we have to initialize `peer->funding_locked[LOCAL]` from lightningd (`peer->funding_locked[LOCAL]` corresponds to whether `channel->scid` exists). So the 4 situations I listed above are reasonable.
I think the only situation in which this error happens is:
lightningd finds the funding depth changed without hitting minimum_depth, and tells us the new depth. But before that, funding had hit minimum_depth once and we had already derived `channel->scid`. The reason for this would be that minimum_depth is too small and the chain reorganized.
We don't enforce a smallest minimum_depth (BOLT 2 only asks that minimum_depth not be unreasonably big, and c-lightning requires it to be smaller than 10). But I don't think we need to set a smallest minimum_depth; this condition can serve as a kind of alert that a reorg has happened.
What do you think?
So IMHO the above check can be removed.
@SimonVrouwe You're right, and I should delete these checks here.
I also noticed an extreme case:
Suppose we set up a private channel (the announcement flag is not set) and set a very small minimum locking depth (as you say, 1). We notice the funding tx is locked, derive the `short_channel_id` and store it in the DB, but unfortunately, right after these operations, our node crashes. When we restart, a reorg happens.
Now we initialize our channel from the DB with the old `short_channel_id`, and we may never be told that the reorg happened because we deleted the corresponding `watch` on the tx.
> ... and we may never be told that the reorg happened because we deleted the corresponding `watch` on the tx.
Good catch!
Wallet transactions are never deleted from the db, but at a reorg their `height` field is set to NULL. When a block is reorged out, all txs with that (old) height are selected, and their `cb` is fired with depth 0.
But if the `watch` was deleted (after only 1 confirm, when `minimum_depth=1` and ANNOUNCE_FLAG is unset), the watch is gone and nothing happens!
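A toy illustration of that failure mode (this is not lightningd's actual watch machinery; `struct watched_tx` and `handle_reorg` are invented stand-ins): when a block is reorged out, the callback fires with depth 0 only if the watch still exists.

```c
#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for a watched wallet transaction; the real code keeps
 * txs in the db and watches in lightningd's chain topology. */
struct watched_tx {
	const char *txid;
	bool watch_active;	/* false once the watch has been deleted */
	int height;		/* -1 models the NULL height after a reorg */
	void (*cb)(const char *txid, unsigned depth);
};

static void depth_cb(const char *txid, unsigned depth)
{
	printf("callback for %s fired with depth %u\n", txid, depth);
}

/* When the block at `reorged_height` is reorged out, the tx's height is
 * cleared and its callback fires with depth 0 -- but only if the watch
 * still exists.  If the watch was already deleted (e.g. after a single
 * confirm with minimum_depth=1 and no announcement), nothing fires. */
static void handle_reorg(struct watched_tx *tx, int reorged_height)
{
	if (tx->height != reorged_height)
		return;
	tx->height = -1;	/* models "height = NULL" in the db */
	if (tx->watch_active)
		tx->cb(tx->txid, 0);
	else
		printf("watch for %s already deleted: reorg goes unnoticed\n",
		       tx->txid);
}

int main(void)
{
	struct watched_tx funding = {
		.txid = "deadbeef", .watch_active = false,
		.height = 100, .cb = depth_cb,
	};
	handle_reorg(&funding, 100);
	return 0;
}
```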
@cdecker So this looks like a bug? Although it is a very rare case (how often do txs get reorged to another height?), I will open a new issue about this.
It's a really rare case. The main problem, I think, is a too-small minimum_depth.
Maybe we can put a limit on minimum_depth, or handle it in some other way (fail the channel, generate a warning, ...).
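For example, a hypothetical guard (not existing c-lightning code; `sanitize_minimum_depth` and `REORG_SAFE_MIN_DEPTH` are made-up names) could warn when a requested minimum_depth looks reorg-unsafe:

```c
#include <stdio.h>

/* Hypothetical guard -- not existing c-lightning code.  The threshold and
 * the function name are made up for illustration. */
#define REORG_SAFE_MIN_DEPTH 3

static unsigned sanitize_minimum_depth(unsigned requested)
{
	if (requested < REORG_SAFE_MIN_DEPTH) {
		/* Other possible policies: fail the channel open, or clamp
		 * the value.  Here we only emit a warning. */
		fprintf(stderr,
			"warning: minimum_depth %u < %u; a shallow reorg "
			"could undo funding lock-in\n",
			requested, REORG_SAFE_MIN_DEPTH);
	}
	return requested;
}

int main(void)
{
	printf("using minimum_depth=%u\n", sanitize_minimum_depth(1));
	return 0;
}
```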
I have been running some tests with reorgs. Because we allow a small minimum_depth […] When that happens, the […] When the funding tx is reorged out, the […] Maybe we should add a python test for these kinds of reorgs? I can give it a shot. I made a branch based on your PR which includes the suggested modifications.
@SimonVrouwe Thank you, you've helped me sooooo much :)! Let me have a look!
Force-pushed from ddf2853 to a7fa46f.
@cdecker Thank you, and I've rebased it :)
Force-pushed from 446085c to 9590236.
1. Rename `channel_funding_locked` to `channel_funding_depth` in channeld/channel_wire.csv.
2. Add `minimum_depth` to `struct channel` in common/initial_channel.h and change the corresponding init function, `new_initial_channel()`.
3. Add `confirmation_needed` to `struct peer` in channeld/channeld.c.
4. Rename `channel_tell_funding_locked` to `channel_tell_depth`.
5. Call `channel_tell_depth` even if depth < minimum, and still call `lockin_complete` in `channel_tell_depth`, iff depth > minimum_depth.
6. channeld ignores the `channel_funding_depth` unless it is > minimum_depth (except to update the billboard and set `peer->confirmation_needed = minimum_depth - depth`).
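As a rough illustration of points 5 and 6 above, here is a minimal standalone sketch (not the real channeld code; `struct peer_sketch` and `handle_funding_depth` are simplified stand-ins) of how channeld might treat a depth report below vs. at `minimum_depth`:

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for channeld's peer state; the real fields live in
 * struct peer in channeld/channeld.c. */
struct peer_sketch {
	unsigned minimum_depth;
	unsigned confirmation_needed;	/* blocks still to go */
	bool funding_locked_local;
};

/* Below minimum_depth we only record how many confirmations are still
 * needed (for the billboard); lock-in handling happens once the reported
 * depth reaches minimum_depth. */
static void handle_funding_depth(struct peer_sketch *peer, unsigned depth)
{
	if (depth < peer->minimum_depth) {
		peer->confirmation_needed = peer->minimum_depth - depth;
		printf("billboard: funding needs %u more confirmation(s)\n",
		       peer->confirmation_needed);
		return;
	}
	peer->confirmation_needed = 0;
	if (!peer->funding_locked_local) {
		peer->funding_locked_local = true;
		printf("billboard: funding locked in\n");
	}
}

int main(void)
{
	struct peer_sketch peer = { .minimum_depth = 3 };
	handle_funding_depth(&peer, 1);	/* 2 to go */
	handle_funding_depth(&peer, 3);	/* locked */
	return 0;
}
```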
Ack 7f55203
I am so sorry that this merge took so long! Great work though!
This is another try for issue #2150 and issue #1780, as @rustyrussell suggested:

1. Add `minimum_depth` in `struct channel` in common/initial_channel.h and change the corresponding init function: `new_initial_channel()`.
2. Add `confirmation_needed` in `struct peer` in channeld/channeld.c.
3. Rename `channel_funding_locked` to `channel_funding_depth` in channeld/channel_wire.csv.
4. Rename `channel_tell_funding_locked` to `channel_tell_depth`.
5. Rename `funding_lockin_cb` to `funding_depth_cb` in lightningd/peer_control.c.
6. When the funding depth changes, Master calls `channel_tell_depth` even if `depth` < `minimum_depth` and tells channeld the funding depth. But in `channel_tell_depth`, `lockin_complete` will be called iff `depth` > `minimum_depth` (see the sketch below).
7. Channeld ignores the funding_depth unless `depth` >= `minimum_depth` (except to update the billboard and set `peer->confirmation_needed = minimum_depth - depth`). Note that `confirmation_needed` in `struct peer` must be 0 when funding hits `minimum_depth` in the local topology, but it doesn't necessarily correspond to the remote locking state because of network delay.

Thank you all for your comments and suggestions!
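To round off the description above, here is a minimal sketch of the lightningd-side flow from points 6 and 7 (again simplified stand-ins, not the real `channel_tell_depth` in lightningd/peer_control.c; I use `>=` for the lock-in comparison): the depth is always forwarded to channeld, but `lockin_complete` runs only once the minimum depth is reached.

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins; the real channel_tell_depth lives in
 * lightningd/peer_control.c and talks to channeld via a wire message. */
struct channel_sketch {
	unsigned minimum_depth;
	bool lockin_done;
};

static void tell_channeld_depth(unsigned depth)
{
	printf("-> channeld: channel_funding_depth(depth=%u)\n", depth);
}

static void lockin_complete(struct channel_sketch *channel)
{
	channel->lockin_done = true;
	printf("lockin complete\n");
}

/* The depth is always forwarded to channeld (so it can show "N blocks to
 * go"), but lockin_complete only runs once depth reaches minimum_depth. */
static void channel_tell_depth(struct channel_sketch *channel, unsigned depth)
{
	tell_channeld_depth(depth);
	if (depth >= channel->minimum_depth && !channel->lockin_done)
		lockin_complete(channel);
}

int main(void)
{
	struct channel_sketch channel = { .minimum_depth = 3 };
	channel_tell_depth(&channel, 1);	/* only reports "2 to go" downstream */
	channel_tell_depth(&channel, 3);	/* triggers lockin_complete */
	return 0;
}
```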