Lightning Specification Meeting 2021/11/08 #933

Closed
5 of 21 tasks
t-bast opened this issue Nov 5, 2021 · 3 comments
Comments

t-bast commented Nov 5, 2021

The meeting will take place on Monday 2021/11/08 at 7pm UTC (5:30am Adelaide time) on Libera Chat IRC #lightning-dev. It is open to the public.

Pull Request Review

Long Term Updates

Backlog

The following are topics that we should discuss at some point: if we have time to discuss them, great; otherwise they slip to the next meeting.

Roasbeef commented Nov 8, 2021

Did we lose the video call link? I really think splicing that in at times (even if everyone is just on voice) would really let us utilize our time more efficiently, and save minutes of back and forth on IRC that could otherwise be a very short statement.

t-bast commented Nov 8, 2021

Let's re-add the video link for future meetings then!

t-bast commented Nov 9, 2021

Meeting logs:

<t-bast> #startmeeting Lightning Spec Meeting
<lndev-bot> Meeting started Mon Nov  8 19:02:57 2021 UTC and is due to finish in 60 minutes.  The chair is t-bast. Information about MeetBot at http://wiki.debian.org/MeetBot.
<lndev-bot> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
<lndev-bot> The meeting name has been set to 'lightning_spec_meeting'
<t-bast> #topic Warning messages
<t-bast> #link https://github.com/lightning/bolts/pull/834
<t-bast> There are only a couple comments remaining on this PR, and it's already in 3 implementations, so I guess it's high time we merged it? :)
<BlueMatt> https://github.com/lightning/bolts/pull/834#discussion_r719977416 was the last unresolved thing, iirc
<t-bast> There's the question of all-zeroes and a few clean-up comments (should be easy to fix)
* limping ([email protected]) has joined
<t-bast> I agree with BlueMatt to keep the all-zeroes option for now, it was already there, it's less friction to keep it 
<niftynei> all-zeroes seems useful for peers with multiple channels :P
<cdecker[m]> Agreed
<rusty> I think the all-zeros thing is unnecessary, since if you have closed all my channels, you'll error each one as I reestablish.
<BlueMatt> note the debate is about all-zero *errors* (ie close-channels)
<t-bast> niftynei: right, I even forgot, c-lightning doesn't care, all-zeros or not is the same for you xD
<BlueMatt> not all-zero *warnings*, which is presumably the default for most messages
<t-bast> yep
<BlueMatt> rusty: I really dont get why we need to rip something out just because you can emulate it with a reconnect loop
<t-bast> rusty: but if you have some weird logic to not initiate the reestablish yourself (because mobile wallet that waits for an encrypted backup for example)
<rusty> BlueMatt: because it's an unnecessary complication, AFAICT?  Like, tell me the channel you have a problem with, precisely!
<BlueMatt> rusty: if you have an issue with a peer, you dont know the channel precisely
<rusty> t-bast: hmm, ok, fair enough.
<BlueMatt> that's the point, if the peer is doing *handwave* then you've presumably closed your channels and may not even be tracking them
<BlueMatt> like, our main message responder doesn't know about channels that are closed
<rusty> BlueMatt: yeah, so I was thinking you'd respond (as you naturally would) to any unknown channel with an error for that channel.
<BlueMatt> sure, we can respond to messages with close errors, but, like, we can't list the need-to-be-closed channels, cause they're off in onchain-enforcement land
<rusty> You definitely don't want to close all channels, if they mention one you don't know.
<rusty> You only want a general error if you have blacklisted them or something AFAICT.
<BlueMatt> yes, of course we do that, but my point is still more generally about the peer
<BlueMatt> yes
<BlueMatt> exactly
<BlueMatt> I think we agree, you just think the error is entirely useless to the point we should remove the code, I think its marginally useful and we might as well keep it
<rusty> OK.  Seemed like a chance to simplify but I concede :) I'll restore that part.  Anything else?
<BlueMatt> IIRC I was ack modulo deciding that
<BlueMatt> I would have to re-review it, there may have been a few nits in my last review
<t-bast> Great, I think we're good on that front then. Once that's restored, we can do a last round of review for the nits, and then merge?
<BlueMatt> sgtm
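
For reference, a minimal sketch of the receive-side behaviour discussed above (an all-zero channel_id error applies to every channel with the peer, warnings never force-close); the `Peer` container and `fail_channel` helper below are hypothetical illustrations, not any implementation's or the spec's API:

```python
# Illustrative only: field names (channel_id, data) follow BOLT 1, the rest
# is made up for the sketch.

ALL_ZERO_CHANNEL_ID = bytes(32)

class Peer:
    def __init__(self):
        self.channels = {}  # channel_id (bytes) -> opaque channel state

def fail_channel(peer, channel_id):
    """Stop offchain updates; a real node would also broadcast its latest commitment."""
    chan = peer.channels.pop(channel_id, None)
    if chan is not None:
        print(f"failing channel {channel_id.hex()}")

def handle_error(peer, channel_id, data):
    if channel_id == ALL_ZERO_CHANNEL_ID:
        # All-zero channel_id: the error applies to every channel with this peer.
        for cid in list(peer.channels):
            fail_channel(peer, cid)
    elif channel_id in peer.channels:
        fail_channel(peer, channel_id)
    # An error for an unknown channel is ignored (or answered with an error
    # scoped to that channel only), never used to close unrelated channels.

def handle_warning(peer, channel_id, data):
    # Warnings are informational: log them and keep the channels running.
    print(f"warning for {channel_id.hex()}: {data.decode('utf-8', 'replace')}")
```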
<t-bast> #topic Clarify channel reestablish behavior when remote is late
<t-bast> #link https://github.com/lightning/bolts/pull/932
<t-bast> This one is interesting, I'd like other implementers feedback on that
* crypt-iq has quit (Quit: Client closed)
<t-bast> We started creating safeguards for big nodes, where, when you detect on restart that you're late, you give the node operator a chance to check whether they messed up the DB
<cdecker[m]> Doesn't this break SCBs?
* sr_gi has quit (Read error: Connection reset by peer)
<t-bast> But when testing it against other implementations, we realized it doesn't work because most implementations close instantly when they receive a channel_reestablish that indicates their peer is late
* crypt-iq (~crypt-iq@2603-6080-8f06-6d01-f49d-d255-78f9-d036.res6.spectrum.com) has joined
<t-bast> They should wait for the late peer to send an error before closing, shouldn't they?
<t-bast> cdecker[m]: why? can you detail?
<cdecker[m]> afaik the SCB restore uses the outdated -> close semantic to recover funds, don't they?
<t-bast> cdecker[m
<roasbeef> outdated?
<t-bast> but stopping the node instead of going forward with the close shouldn't impact that?
<roasbeef> you mean the non-static key route, where you need to obtain a point? 
<cdecker[m]> Well, how does an SCB restore cause the remote side to unilaterally close the channel?
<niftynei> right, "MUST NOT broadcast" and "fail the channel" are conflicting, per our current definition of "fail the channel"
<roasbeef> we never delete the data needed to send the chan reest to like ppl SCB restore btw 
<t-bast> you just give the node operator a chance to detect that they pointed to an old DB, and restart with the right one? Or if they really lost data, then move forward with the close
<BlueMatt> I'm not really a fan of separating "send an `error`" and "fail channel" - they're the same thing right now, afaiu, and separating it into "counterparty should force-close" vs "we force-close" sucks, especially given lnd has always ignored errors.
* sr_gi ([email protected]) has joined
<cdecker[m]> Ok, must've misinterpreted how SCBs cause the channel closure, my bad
<t-bast> BlueMatt: it's not separating them?
<t-bast> BlueMatt: you mean it's misphrased in the PR
<roasbeef> so change here is just to send an error instead of force closing? like an attempt to make sure ppl don't breach themselves? 
<niftynei> does receipt of an error invoke a channel close from the peer in all cases?
<t-bast> But conceptually, do you agree that you should only close when sending/receiving the error, not on detecting a late channel_reestablish?
<roasbeef> lnd pretty much never force closes when it gets an error, only invalid sig is the main offense 
<niftynei> in my reading it's not a change to the behavior, just a spec wording change
<BlueMatt> t-bast: maybe I misread this? I understand the current pr text to say "send an error message, hoping the other party closes the channel, but keep the channel in active mode locally and let them restart and maybe reestablish/actually use the channel again in the future"
<roasbeef> t-bast: ok so that's the change? wait until the error instead of closing once yyou get a bad chan reest?
<t-bast> The main issue that's not obvious is that currently, implementations aren't really following the spec: they're trigger-happy and force-close when they receive an outdated channel_reestablish, instead of waiting for the error message
<t-bast> BlueMatt: if you're late, you cannot force-close yourself, your commitment is outdated
<roasbeef> isn't that what the spec says to do rn? force close if you get a bad chan reest
<cdecker[m]> I see, that makes sense
<crypt-iq> why would you send a chan reestablish if you aren't ready
<t-bast> roasbeef: yes exactly! But the implementations don't do that xD
<BlueMatt> t-bast: sorry, I dont mean "broadcast state" I mean "set the channel to unusable for offchain state updates"
<t-bast> BlueMatt: oh yeah, I agree this shouldn't change
<roasbeef> t-bast: and instead you observe they close earlier w/ some unknown trigger?
<roasbeef> did we lose the video call link this time? 
<niftynei> it feels like the behavior t-bast describes is what the original spec was intending but it's much clearer with the proposed update
<t-bast> exactly what niftynei says
<BlueMatt> its not clear to me what changes are made by this text?
<t-bast> Ok let me try to summarize it better
<niftynei> nothing changes to intent, it just makes some current (wrong) behavior explicitly incorrect? iiuc
<BlueMatt> "fail the channel" without "broadcast its commitment transaction" sounds to me like "send an error message and forget the channel, maybe tell the user to think hard about broadcasting"
<roasbeef> this would be a larger divergence tho? like all the existing nodes would keep closing on chan reest recv 
<rusty> Hmm, generally we close when we send an error, not when we receive an error.  You're supposed to do *both* ofc, but history...
<t-bast> Alice has an outdated commitment and reconnects to Bob. Alice sends `channel_reestablish`. Bob detects Alice is late. Most implementations right now have Bob force-close at that point.
<t-bast> Instead Bob should wait for Alice to send an error, then force-close.
<roasbeef> why?
<t-bast> I believe the spec meant that, but since implementations did it differently, it's worth clarifying the spec a bit?
<BlueMatt> ok, let me rephrase, its unclear to me how the pr changes that text :p
* Anorak ([email protected]) has joined
<t-bast> Alice cannot "fail the channel", she cannot broadcast her commitment, she can only send an error to Bob and mark the channel as "waiting for Bob to publish latest commitment"
<roasbeef> I don't see what that change achieves tho, you just want them to wait to send that error? 
<rusty> BlueMatt: we don't define "fail the channel", but logically it's "return errors from now on and broadcast the latest commitment tx". If you can't do the latter, then the definition still works.
<BlueMatt> t-bast: sure, yes, that's how i read it previously? I guess my confusion is - the pr diff here seems to change nothing about behavior, but your description here includes a propose behavior change.
<t-bast> roasbeef: yes, I want Bob to publish his commitment only once he receives an error, not on receiving a channel_reestablish: that gives Alice time to potentially fix her DB and avoid that force-close
<roasbeef> t-bast: you seem to be arguing that from the sending node? but the receiving node is the one that can actually force close 
<BlueMatt> rusty: right, the text diff seems fine, i guess, my point is more that it doesnt, to my eye, indicate behavior change.
<roasbeef> mid convo (like any other message here), IRC link injection: https://github.com/lightning/bolts/issues/933#issuecomment-963494419 
<t-bast> Agreed, it doesn't indicate behavior change, but most implementation's behavior does *not* match the spec currently, and it would probably be worth fixing the implementations?
<t-bast> So I thought it was worth bringing to attention
<t-bast> We haven't tried rust-lightning though, maybe you implement it correctly :)
<BlueMatt> t-bast: ok, my point, I think, is that that feels like an entirely separate conversation to the pr itself
<BlueMatt> the pr seems fine, I think
<roasbeef> t-bast: do you know which impls deviate rn? 
<t-bast> Agreed, but I don't know how else to have that discussion (maybe open an issue on each implementation to highlight that its behavior diverges from the spec?)
<niftynei> wait t-bast does this mean that some channels drop old state to chain?
<roasbeef> I don't see how it's wrong still, if you lost state, you send me the chan reest, I force close 
<t-bast> IIRC we tested lnd and c-lightning, but I'd need to double check with pm47
<roasbeef> this seems to be saying that I should instead send an error?
<roasbeef> niftynei: that's what I'm trying to get at 
<BlueMatt> issues on implementations or an issue on the spec repo seems like a reasonable way to have that discussion
<t-bast> roasbeef: no, you should wait for my error instead of closing
<roasbeef> t-bast: ....why?
<t-bast> roasbeef: if you close instantly, you didn't give me a chance to notice I'm late, and potentially fix my DB if I messed it up
<rusty> If you're not going to close the channel because peer is behind, you really should be sending a warning I guess?
<t-bast> roasbeef: if I can fix my DB and restart, we can avoid the force-close and the channel can keep operating normally
<roasbeef> t-bast: assuming I haven't sent it yet, or? seems like a concurrency thing?
<roasbeef> if you've lost state, can you really fix it?
<t-bast> BlueMatt: noted, that's fair, I'll open issues on implementations then
<t-bast> roasbeef: if you've lost state no, but if you've just messed up your restart you could fix it, so it's really dumb not to give you this opportunity
<BlueMatt> t-bast: an issue on the spec seems reasonable too, and thanks for flagging, just very confusing to do it on an unrelated pr :p
<niftynei> this change in the spec is in the case where you're explicitly not supposed to be dropping your commitment tx to chain; does fail the channel mean something else?
<t-bast> roasbeef: it's not a concurrency thing, Bob has no reason to send an error, only Alice does, so if she doesn't send anything Bob shouldn't force-close
<niftynei> who's Bob in this case? is Bob behind?
<t-bast> I don't want to hijack the meeting too much on that issue though, I'll create an issue on the spec and on implementations with a detailed step-by-step scenario
<t-bast> In the example I put above, Alice is late
<niftynei> right but this spec PR change only deals with Alice's behavior? i think?
<t-bast> #action t-bast to detail the scenario in a spec issue and tag faulty implementations
<niftynei> will wait for more info tho, ok what's next
<t-bast> niftynei: you're totally right, that's why BlueMatt is right that what I'm discussing isn't completely related to this PR, which is a bit confusing xD
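
A rough sketch of the ordering t-bast argues for on the up-to-date side (Bob): don't broadcast on an outdated channel_reestablish, wait for the late peer's error. The `peer_is_behind` check and the other helpers below are hypothetical placeholders for the real channel_reestablish field comparisons, not any implementation's API:

```python
# Illustrative only: peer_is_behind() stands in for the actual comparison of
# the peer's claimed commitment/revocation numbers against our own state.

class Channel:
    def __init__(self):
        self.pending_close_on_peer_error = False

def peer_is_behind(chan, reestablish_msg):
    # Placeholder for the real field checks on channel_reestablish.
    return reestablish_msg.get("outdated", False)

def on_channel_reestablish(chan, reestablish_msg):
    if peer_is_behind(chan, reestablish_msg):
        # The peer sent an outdated channel_reestablish. Do NOT broadcast our
        # commitment yet: they may just have restarted from an old DB copy
        # and can still recover, keeping the channel open.
        chan.pending_close_on_peer_error = True
        print("warning: peer looks outdated, waiting for their error")
        return
    print("reestablish ok, resuming channel")

def on_error(chan, error_msg):
    # Only once the late peer itself concludes its data is really lost and
    # sends an error do we publish our latest commitment transaction.
    if chan.pending_close_on_peer_error:
        print("peer confirmed data loss, broadcasting latest commitment")
```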
<t-bast> #topic Drop ping sending limitations
<t-bast> #link https://github.com/lightning/bolts/pull/918
<niftynei> ah that is confusing! ok thanks
<BlueMatt> roasbeef: finally responded 15 minutes ago, so dunno if there's more to be said aside from "lnd folks need to decide - they're still waffling"
<roasbeef> I think this is the issue w/ IRC, when this was brought up, I said we didn't feel strongly about it, and ppl could do w/e, we didn't commit to rate limiting 
<roasbeef> then we spent like 30 mins on the topic, only to eventually move on w/ nothing really moving forward
<t-bast> But in a way this change prevents you from rate limiting in the future, right? Unless you choose to become non spec compliant?
<BlueMatt> roasbeef: feel free to respond on https://github.com/lightning/bolts/pull/918#issuecomment-963501921
<BlueMatt> what t-bast said
<BlueMatt> anyway, we dont need to spend more meeting time on this.
<BlueMatt> it seems like roasbeef has NAK'd it and it should die instead. on the LDK end we'll probably just keep violating the spec cause whatever.
<roasbeef> t-bast: that is indeed the case, but it's about optionality, we're not committing to rate limiting rn, but hold open the possibility of doing it in the future, this is where IRC really falls short communication wise 
<rusty> Yeah, we don't rate-limit.  But in my head there's this idea that we should keep a "useful traffic vs useless waffle" counter for incoming traffic and send a warning and disconnect if it gets over some threshold (or start ratelimiting our responses).
<BlueMatt> roasbeef: this isn't an irc issue, methinks
<roasbeef> my comments were interpreted as me trying to block the proposal, but I didn't care and was just providing commentary 
<t-bast> Ok, fair enough, let's not spend too much time on this and move on ;)
<roasbeef> idk go back and look at those logs and see if those 30 mins were productively used
<roasbeef> maybe I'm spoiled now after our recent-ish meat space time 
<t-bast> Let's go for one we haven't discussed in a while...wait for it..
<t-bast> #topic Route Blinding
<BlueMatt> roasbeef: the issue appears to be that you think "well, we'll just violate the spec later cause we dont care about the spec" is totally fine way to not-nack a pr
<t-bast> #link https://github.com/lightning/bolts/pull/765
<BlueMatt> but its really nack'ing it in everyone else's mind.
<t-bast> Yay route blinding!
<BlueMatt> that's not an irc issue
<roasbeef> BlueMatt: you're putting words in my mouth, I didn't commit to anything, just that it's possible for someone to want to eventually rate limit pings 
<t-bast> We've been making progress on onion messages in eclair (we'll merge support for relaying this week) and it has route blinding as a pre-requisite
<t-bast> So it would be interesting to get your feedback!
<t-bast> I've updated the PR, so it has both a `proposals` version that's higher level and the spec requirements
<BlueMatt> roasbeef: its a requirements issue - you seem to have an entirely different view of what the spec is for from others. anyway, feel free to comment on the pr.
<t-bast> I just need to add one more test vector for blinding override and it should be ready
<roasbeef> route blinding is on my things to take a deeper look at along the way to checking out the latest flavor of trampoline 
<niftynei> this is exciting t-bast!
* lucasdcf (~lucasdcf@2804:431:c7d8:2181:b9:45a5:da6f:a27d) has joined
<rusty> t-bast: yes, we need to do double-check test vectors.
<t-bast> roasbeef: yay! I'm curious to get your feedback on the crypto part
<BlueMatt> t-bast: nice! is there a direction on cleaning up the onion messages pr?
<rusty> #action rusty check https://github.com/lightning/bolts/pull/765 test vectors
<t-bast> rusty: I've updated the tlv values we discussed yesterday, they should be final and the test vectors reflect that
<rusty> t-bast: :heart:
<ariard> t-bast: i'll try to review route blinding again soon, already done few round in the past
<t-bast> The onion messages PR will then be rebased on top of route blinding, it should clarify it
<t-bast> to be honest, it's really much easier to review while implementing it: without code, it's hard to figure out the important details
<BlueMatt> t-bast: ah, that's...confusing, but okay.
<t-bast> Well actually onion messages doesn't even need to rebase on route blinding
<t-bast> It can simply link to it
<BlueMatt> t-bast: eh, I implemented onion messages too, and still found it impossibly confusing, too many unrelated things everywhere
<t-bast> What can help is looking at how we implemented it in eclair
<t-bast> We first made one commit that adds the low-level route blinding utilities + tests that showcase how it could be used for payments
<t-bast> Then we implemented onion messages using these utilities
<BlueMatt> t-bast: the spec should stand on its own, but I'll look at route blinding, maybe its cleaner spec than onion messages
<t-bast> To be honest the most confusing part is not mixing the different tlv namespaces and correctly defining your types, then it kinda feels natural
<niftynei> off topic: is route-blinding a first-step to trampoline payments?? 
<BlueMatt> iirc onion messages just links to route blinding for parts, which was part of my "ugh, wtf"
<t-bast> niftynei: it's completely orthogonal to trampoline - it can be used by trampoline for better privacy
<niftynei> sounds like it's definitely one to onion messages lol
<t-bast> If you're interested, this PR implements the crypto utilities (the code part is really small, it's mostly tests): https://github.com/ACINQ/eclair/pull/1962
<t-bast> Then this PR uses it for onion messages: https://github.com/ACINQ/eclair/pull/1962
<rusty> BlueMatt: yeah, will rebase, should be nicer.
<BlueMatt> cool! thanks rusty 
<t-bast> Shall we move to another topic? I wanted to give you the latest updates on route blinding, but it's probably better to review it on your own when you have time. Don't hesitate to drop questions though!
<t-bast> roasbeef: if you have a few minutes to space this week, can you quickly address the comments in #903 and #906 so that we get these clean-ups merged?
<rusty> Yeo!
<t-bast> *to spare
<t-bast> #topic dust limit requirements
<t-bast> #link https://github.com/lightning/bolts/pull/919
<t-bast> Rusty, you had a counter-proposal there, I think it would be interesting to discuss it?
<t-bast> Since ariard is here as well
<ariard> yep
<rusty> So, this PR was unclear to me.  It's basically "have your own internal limit, fail channel if it passes that".
<rusty> But that's a recipe for exploitable channel breaks, really.
<ariard> well i think you have 2 recommendations a) if dust HTLC above your dust_limit_exposure, cancel this HTLC _without_ forwarding
<BlueMatt> its not clear how you do better without 0-htlc-fees-on-anchor
<rusty> In practice, dust is limited by (1) number of HTLCs (until we get that infinite-dusty-htlcs feature) and (2) feerate.
<ariard> and b) if increasing update_fee, either fail channel OR accept the balance burning risk
<t-bast> rusty: there's only a fail-channel option if you're not using anchor_outputs_zero_fee_htlc_tx, I think it's an important point to note
<ariard> the fail-channel option has always been deferred to the implementors, and iirc LDK and eclair don't have the same behavior here
<rusty> t-bast: why?  I don't see how that changes the problem?
<ariard> like I fully agree there is risk for channel breaks in case of fee spikes, it's to be balanced with the risk of losing money
<t-bast> I think that since all implementations have added that fail-safe, and new ones will directly target anchor_outputs_zero_fee_htlc_tx, we don't need to bikeshed it too much, it will soon be obsolete
<ariard> ideally better to have a knob and defer the choice to node operators
<t-bast> rusty: when using anchor_outputs_zero_fee_htlc_tx you're simply not at risk when receiving update_fee
<t-bast> because it doesn't change your dust threshold, so it doesn't impact your dust exposure
<t-bast> so when using anchor_outputs_zero_fee_htlc_tx there's no case where this PR introduces a force-close
<ariard> with zero_fee_htlc_tx, 2nd stage HTLCs are committed with 0-fees
<niftynei> there's definitely an indeterminate point at which htlc failures begin to occur, but that's not much different from balance exhaustion
<rusty> t-bast: right.
<niftynei> it's just *not* balance exhaustion
<crypt-iq> perhaps the fix is to upgrade to zero-fee-anchors
<ariard> well it's just your channel become useless to route to for a class of low-values HTLC
<niftynei> bigger htlcs will still succeed; this will have implications for route algos that use historic payment success data to choose routes (cdecker[m])
<rusty> t-bast: but if I add more dust than you want, what happens?  We didn't fix the problem, you still could be stuck with too much dust?
<ariard> crypt-iq: though maybe we still need a dust_limit_exposure with the infinite-dusty-htlcs feature ?
<rusty> ariard: definitely.  A total dust option is required for that.
<t-bast> rusty: you just fail it instead of relaying it (would be nicer with an "un-add", but it's not dangerous so never leads to a force-close)
<cdecker[m]> Correct
<niftynei> im pretty sure it's required... what rusty said
<crypt-iq> ariard: can just fail back ?
<niftynei> the real gotcha here is feerate spikes
<rusty> t-bast: there's an exposure window though :(  We tried to avoid that.
<t-bast> rusty: no there's not, that's why it's interesting!
<niftynei> at least with an ahead of time known limit, you know when you'll be feerate spiking into bad territory
<t-bast> rusty: because since you haven't relayed it, right now it's only taken from your peer's balance
<ariard> crypt-iq: yes, that's what we're already doing with 919, or maybe we can introduce an error for dusty HTLCs but state machine asynchronous issues
<niftynei> whereas right now you kinda just yolo there and then .. maybe the channel closes?
<t-bast> rusty: so it's purely money they lose, not you
<rusty> t-bast: ah!  good point!  I had missed that!!
<ariard> inbound HTLC are subtracted from your peer balance
<t-bast> That's the important point, you can safely receive many HTLCs that make you go over your dust threshold, it only becomes dangerous if you relay them
<t-bast> So if you simply fail them instead, you're always safe
<crypt-iq> ariard: there are async issues but only for the exit-hop case, rather than forwarding
<rusty> OK, so now the trap is that your own sent htlcs get dusted by an update_fee, which is fixed by zerofee-anchor.  Right.
<ariard> t-bast: or accept them as a final payee
<t-bast> (if you use anchor_outputs_zero_fee_htlc_tx or ignore the update_fee case)
<t-bast> yes exactly
<rusty> OK, I withdraw my objection.  I think the spec change should be far shorter though, textually.
<niftynei> AOZFHT ftw lol
<ariard> rusty: yeah if you have suggestions to shorten/improve i'll take them, there was a discussion with niftynei where to put the changes as it's forwarding recommendations
<ariard> and not purely evaluation of `update_add_htlc`
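
As a back-of-the-envelope illustration of the exposure accounting discussed above (made-up limit, simplified fee rounding, not the exact BOLT 3 trimming rules): with anchor_outputs_zero_fee_htlc_tx the second-stage fee term is zero, so update_fee no longer changes which HTLCs count as dust, and the exposure check only matters when deciding whether to relay.

```python
# Illustrative dust-exposure accounting; the limit below is a hypothetical
# per-node knob, and the weight is a rough pre-anchor HTLC-success-tx figure.

DUST_EXPOSURE_LIMIT_SAT = 50_000
HTLC_SUCCESS_WEIGHT = 703

def is_trimmed(htlc_amount_sat, dust_limit_sat, feerate_per_kw, zero_fee_htlc_tx):
    """An HTLC is 'dust' if its output, minus the second-stage HTLC-tx fee,
    falls below the commitment dust limit. With zero-fee HTLC txs the fee
    term is 0, so a feerate increase cannot turn more HTLCs into dust."""
    htlc_tx_fee = 0 if zero_fee_htlc_tx else feerate_per_kw * HTLC_SUCCESS_WEIGHT // 1000
    return htlc_amount_sat - htlc_tx_fee < dust_limit_sat

def dust_exposure(htlc_amounts_sat, dust_limit_sat, feerate_per_kw, zero_fee_htlc_tx):
    return sum(a for a in htlc_amounts_sat
               if is_trimmed(a, dust_limit_sat, feerate_per_kw, zero_fee_htlc_tx))

def should_relay(current_htlcs_sat, new_htlc_sat, dust_limit_sat, feerate_per_kw,
                 zero_fee_htlc_tx):
    """Accepting an inbound dust HTLC only risks the peer's balance; the
    exposure limit matters when deciding whether to relay it onward."""
    exposure = dust_exposure(current_htlcs_sat + [new_htlc_sat], dust_limit_sat,
                             feerate_per_kw, zero_fee_htlc_tx)
    return exposure <= DUST_EXPOSURE_LIMIT_SAT
```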
* lucasdcf has quit (Quit: Client closed)
<niftynei> one thing that'd fix the feerate spike problems is changing the update_fee requirements
<niftynei> and setting the max increase the same as the dust_limit checks for
<t-bast> niftynei: but you can't really do that though
<crypt-iq> what if it actually increases that high
<ariard> niftynei: wdym? a new requirment on the sender?
<niftynei> you can send multiple update_fees, but each can only increase the total feerate by the same factor as the pre-checked amount for the feerate bucket
<t-bast> niftynei: if your peer has been offline for a while, and the feerate really rose a lot, your update_fee needs to match the reality of on-chain feerates, right?
<niftynei> you just send a bunch of them
<crypt-iq> how does that change fee spikes? you fail back earlier ?
<t-bast> well ok, why not...to be honest I simply wouldn't bother and would focus on finalizing implementation of anchor_zero_fee :)
<niftynei> ah wait you're right it doesnt matter if it's sudden or not, as soon as the dust goes over the bucket limit we kill the channel
<ariard> niftynei: i think you're introducing a global as you need to have the same factor across sender/receiver
<niftynei> so it doesnt matter if it's all at once or incremental the real problem is that you've exhausted your budget
<BlueMatt> it appears we're about to run out of time.
<niftynei> timing of budget exhaustion is irrelevant
* gene_ (~gene@gateway/tor-sasl/gene) has joined
<ariard> BlueMatt: better to run out of time than running out of feerate :p
<niftynei> feerate rises are basically a channel bomb now tho
* gene has quit (Remote host closed the connection)
* gene_ is now known as gene
<niftynei> i mean, maybe they've always been?
<ariard> Package Relay Solves This (tm)?
* denis2342 has quit (Quit: denis2342)
<t-bast> ariard: :D
<niftynei> i guess knowing you're going to close the channel isn't any better than not knowing you're going to close the channel b/c of a feerate rise
<roasbeef> dust begone 
<BlueMatt> fwiw, rusty, apologies we've been bad about spec stuff lately - I'm a bit under the weather but will stick my head back above water on spec stuff this week hopefully. would love to move our onion messages impl forward and also https://github.com/lightning/bolts/pull/910 but it seems like you still wanted to change that to use a channel type? Any updates on that?
<BlueMatt> I've gotta run, but would https://github.com/lightning/bolts/pull/918#issuecomment-963519924 solve your issues, roasbeef? Then we can merge that cursed pr and move on with our lives. anyway, I've gotta run.
<ariard> though not really because we don't have infinite fee-bumping reserve
<t-bast> See ya BlueMatt!
<niftynei> there's something kinda ironic here about how lightning is supposed to be the Thing to Use when feerates onchain rise, but also uhh have you seen how lightning runs into problems when feerates onchain rise?
<limping> very interested in #910
<niftynei> i guess the real problem is velocity of change here 
<t-bast> niftynei: and scale / number of channels 
<t-bast> If you have only a few channels, you're probably ok even with a large feerate update
<roasbeef> is it really that diff in the LN case tho? similar scenario of leaking the value of an output if chain fees are high 
<rusty> BlueMatt: NP, thanks!
<niftynei> t-bast, this is for zero-fee anchor outs yeah?  yeah... fewer channels definitely a real win there
<t-bast> niftynei: yes, for that case, ideally with package relay as well (maybe I'm glimpsing too much into the future though!)
<niftynei> i mean the reality of feerates rising is that a swath of utxos become uneconomical; higher feerates means that some subset of bitcoin is 'economically unspendable'
<crypt-iq> then there are less txn for feerate maybe?
<niftynei> lightning failures (htlcs onchain) are like having a front row seat to this particular reality
<rusty> BlueMatt: I will revisit that PR.  The channel_type is simply a type which says "don't ever fwd by real scid", which is simple.
<niftynei> this is definitely not helpful or on topic but interesting nonetheless lol
<t-bast> niftynei: but if the feerate ever comes back down, you'll be able to claim these utxos then, but no guarantee...
<niftynei> which is fine for a wallet of utxos, but htlcs have time constraints iiuc
<ariard> niftynei: yes it's described in the LN paper, ideally we could stuck the "time" for LN time-sensitive closes in case of fees spikes
<t-bast> regarding #910 pm47 on our side spent time experimenting with it, he'll be able to provide more feedback as well
<niftynei> i think you end up circling back around to the observation a friend of mine who works at stripe made about how "payment processors are actually insurance companies"
<niftynei> which is to say there is some risk involved in routing payments!
<niftynei> and you should expect a return that justifies the risk involved ;)
<crypt-iq> It would be nice to have a channel_type for zero-conf option-scid_alias and nix the min_conf 0 setting. The spec wording as is basically the acceptor *hoping* that it's a zero-conf channel w/o knowing the intent of the initiator
<t-bast> yes that's true, we should probably get that explained better for routing node operators and get them to raise their routing fees a bit ;)
<t-bast> cdecker[m]: are you still around? I've got a quick q
<niftynei> t-bast, hehe sounds like good blogpost/ blip(spark?) material ;)
<niftynei> i'm headed out, thanks for chairing t-bast!
<t-bast> Let's stop now, meeting time is over and we already covered a lot
<t-bast> #endmeeting
<lndev-bot> Meeting ended Mon Nov  8 20:10:18 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
<lndev-bot> Minutes:        https://lightningd.github.io/meetings/lightning_spec_meeting/2021/lightning_spec_meeting.2021-11-08-19.02.html
<lndev-bot> Minutes (text): https://lightningd.github.io/meetings/lightning_spec_meeting/2021/lightning_spec_meeting.2021-11-08-19.02.txt
<lndev-bot> Log:            https://lightningd.github.io/meetings/lightning_spec_meeting/2021/lightning_spec_meeting.2021-11-08-19.02.log.html

t-bast closed this as completed Dec 6, 2021