20:08:05 <niftynei> #startmeeting
20:08:05 <lightningbot> Meeting started Mon Apr 27 20:08:05 2020 UTC. The chair is niftynei. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:08:05 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
20:08:13 <BlueMatt> hey yall
20:08:21 * t-bast waves at BlueMatt
20:08:36 <niftynei> hi everyone welcome to the lightning-dev 27 Apr 2020 spec meeting
20:08:49 <lightningbot> cdecker: Error: Can't start another meeting, one is in progress.
20:08:57 <cdecker> Oh, missed that one
20:09:06 <niftynei> #link https://github.com/lightningnetwork/lightning-rfc/issues/768
20:09:10 <t-bast> cdecker tried to hijack the chair!!!
20:09:28 <niftynei> wow this is starting off exciting.
20:09:40 <cdecker> Man, we need a visit to IKEA to get more chairs... end the lockdown
20:09:53 <niftynei> first item on the agenda is stuck channels
20:10:02 <niftynei> #topic stuck channels #740
20:10:16 <niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/740
20:10:35 <t-bast> After many iterations on the wording, I think we're reaching something somewhat clear on stuck channels
20:10:45 <rusty> FWIW, this is implemented in our coming release, too.
20:11:05 <t-bast> There's even one ACK from Joost (thanks!) so maybe another impl to chime in and we should be good to go?
20:11:11 <niftynei> it looks like the PR has two requisite acks
20:11:29 <niftynei> c-lightning has an implementation, has anyone else implemented this yet?
20:11:40 <t-bast> yep it's in eclair too
20:11:52 <t-bast> done slightly differently though
20:12:39 <rusty> (Whoever commits this please squash it in rather than merge or rebase, since it's 9 commits for one change!)
20:13:19 <t-bast> yep definitely, squash it's going to be
20:13:36 <t-bast> I can commit this if everyone feels it's ready
20:13:37 <niftynei> ok. so it seems we're ready to merge/squash this one?
20:13:47 <rusty> ack!
20:13:55 <t-bast> ack!
20:14:16 <t-bast> roasbeef do you know if lnd has implemented that?
20:14:37 <t-bast> I know johan and joost were mostly looking at this, so since joost ack-ed it's probably underway or done?
20:15:04 <t-bast> I've also seen that val has linked this for RL
20:15:55 <roasbeef> no implementation yet
20:16:04 <niftynei> (i believe val is valwal_ on irc)
20:16:07 <BlueMatt> yea, we have a pr with that and a few other things that valwal_'s doing
20:16:24 <BlueMatt> the change looked good to me last I looked, so happy if it gets merged
20:16:40 <t-bast> great
20:16:43 <niftynei> #action t-bast to squash/merge PR #740 for stuck channels
20:16:56 <niftynei> ok, moving on
20:17:08 <roasbeef> we have some other heuristics we use, they may be included in this tho, end of the day it's all whack-a-mole really, but we'll prob add this eventually
20:17:32 <niftynei> #topic PR #767 prune channels with old channel_updates
20:17:34 <niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/767
20:17:41 <t-bast> roasbeef: :+1:
20:18:56 <niftynei> looks like there's still discussion happening on the PR
20:19:14 <cdecker> I think this one has its logic backward: a channel that is inactive should not receive updates from either endpoint
20:19:17 <t-bast> Who has tried applying bitconner's suggestion to their own node?
20:20:04 <rusty> Hmm, it'd be good to mark which channels fall under this then probe to see if they're *really* dead...
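Stepping back to the stuck-channel item for a moment, here is a minimal sketch of the kind of funder-side check discussed under #740 above; the `commit_tx_fee` helper and the 2x fee-spike buffer are illustrative assumptions, not the PR's normative wording.

```python
# Illustrative sketch of the funder-side "stuck channel" guard discussed for #740.
# Assumptions: commit_tx_fee() and the 2x fee-spike buffer are placeholders here;
# the normative wording lives in the PR itself.

def funder_can_afford_htlc(to_local_after_add_msat, reserve_msat,
                           feerate_per_kw, num_htlcs_after_add, commit_tx_fee):
    """Return True if adding one more HTLC would not leave the channel stuck.

    Idea: after adding the HTLC, the funder should still be able to pay the
    commitment fee at twice the current feerate (a fee-spike buffer) with room
    for one additional future HTLC, while keeping its channel reserve.
    """
    buffered_fee_msat = commit_tx_fee(2 * feerate_per_kw, num_htlcs_after_add + 1)
    return to_local_after_add_msat >= reserve_msat + buffered_fee_msat
```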
20:20:20 <roasbeef> cdecker: I think he means that one side is dead, but the other keeps updating, but it actually isn't useable at all
20:20:34 <t-bast> At first glance it looked useful to me, but when I applied it the results were bad; it seems to highlight that there are gossip issues in the network, some nodes behaving strangely with their channel_updates
20:20:46 <roasbeef> t-bast: results being?
20:20:53 <cdecker> Yes, so why would the other side still be sending updates? Keepalive is not necessary since we'll just reannounce
20:21:11 <rusty> But I do like the idea that you shouldn't update if the other side doesn't. Though how to define this exactly: you would send out one at (say) 13 days, but then you'd suppress the next one I guess.
20:21:14 <roasbeef> cdecker: the other side does as it's still active, and will prob just follow the guidelines to make sure it sends one before the 2 week period
20:21:29 <rusty> cdecker: yeah, we would reannounce, even if just to say it's disabled.
20:21:44 <t-bast> roasbeef: see my comment on the PR: surprisingly some updates either weren't propagated properly all the way to our node, or weren't emitted in time by the corresponding node (for a channel that's clearly alive)
20:21:48 <roasbeef> lnd will just disable that end tho, once the other party is offline for a period
20:22:08 <roasbeef> t-bast: interesting, i know CL has some suppression limits on nodes, maybe you're running into that?
20:22:20 <rusty> roasbeef: yeah, we would too: if it's offline at the time we need to refresh, it'll get a disabled update.
20:22:56 <sstone> roasbeef: can you clarify the "y'alls continues to send fresh channel_updates" bit ?
20:23:02 <cdecker> My point is that no node should be sending updates for channels that are not routable, rather they should let them be pruned, and then be reannounced if they become usable again, rather than having a constant background radiation of nodes that aren't usable but want to be kept alive
20:23:23 <roasbeef> sstone: didn't write it, I think like updating fees n stuff?
20:23:50 <t-bast> cdecker: agreed, but it seems that all three implementations are supposed to do that (at least we all think our impl does that)
20:23:57 <rusty> Yes, I wonder if c-lightning is suppressing spam too aggressively and causing propagation issues? Though we're too small a minority to do that in general I think. However, my impression was that we're not seeing as much gossip spam in Modern Times (though let me check...)
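For context, a rough sketch of the pruning heuristic being debated in #767; the two-week constant comes from BOLT 7, while the function and the either-vs-both toggle are one illustrative reading of the suggestion, not the PR text.

```python
import time

TWO_WEEKS = 14 * 24 * 3600  # BOLT 7 staleness horizon for channel_update

def should_prune(update_timestamps, now=None, require_both_stale=False):
    """Sketch of the pruning heuristic debated in #767 (illustrative only).

    update_timestamps holds the timestamp of the latest channel_update seen
    from each end of the channel, or None if none was ever received.
    """
    now = now or int(time.time())
    stale = [ts is None or now - ts > TWO_WEEKS for ts in update_timestamps]
    # Today's behaviour is closer to "prune when both sides are stale"; the
    # suggestion under discussion prunes as soon as either side goes silent,
    # even if the other end keeps refreshing its update.
    return all(stale) if require_both_stale else any(stale)
```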
20:24:50 <t-bast> rusty: that may explain what I've seen, I failed to receive channel_updates in time from a node that seems to be running lnd (BlueWallet), but I cannot know if that node never emitted the update or if it wasn't propagated all the way to me
20:25:13 <cdecker> Easy, connect directly
20:25:13 <t-bast> And I'm unsure what impl the other end of the channel is running
20:25:40 <roasbeef> t-bast: from stuff gathered on our end, the bluewallet node seems to be funky, I think they're running some custom stuff
20:25:52 <t-bast> roasbeef: ahah that may be another explanation
20:25:56 <cdecker> Interesting
20:26:17 <roasbeef> I think maybe this issue was partially inspired by some weirdness conner saw when examining their node
20:26:36 <roasbeef> I guess let's just wait for him to comment on the issue, may be able to provide some more insight
20:27:01 <t-bast> alright, sgtm
20:27:28 <cdecker> ack ^^
20:28:07 <rusty> 2782437 suppressions in ~2 weeks, but you'd have to spam (4/day get ignored) then stop spamming at all for 13 days to get pruned.
20:29:01 <t-bast> rusty: you mean that there are 1782437 channel_update that you received but didn't relay?
20:30:26 <rusty> t-bast: yeah, but sorting by channel/nodeid now. Suspect it's a handful.
20:30:52 <t-bast> that's a lot, that means ~5 per node per day (assuming ~40k nodes in the network)
20:31:10 <t-bast> ah no it's ~2 since there are 2 ends of a channel (doh)
20:32:10 <rusty> Anyway, let's keep moving while I wait for my machine to churn the results...
20:32:21 <t-bast> ack
20:33:07 <niftynei> ok so it sounds like we should continue discussion on the PR
20:33:20 <cdecker> sgtm
20:33:30 <niftynei> #action rusty to figure out c-lightning pruning reality
20:33:42 <niftynei> #action discussion for PR 768 to continue on github
20:33:49 <niftynei> ok let's see what's next
20:34:44 <niftynei> #topic issue 761, which is a discussion branched off of 740
20:34:44 <niftynei> #link https://github.com/lightningnetwork/lightning-rfc/issues/761
20:34:52 <t-bast> I added that to the agenda because it's digging up an old topic (asynchronous protocol and its pitfalls) but I couldn't find previous discussions around that...can one of the protocol OGs just quickly skim through that issue and provide some links to previous discussions?
20:35:26 <t-bast> I don't know if it's worth spending time during the meeting, but if someone can take an action item to do a quick pass on this it would be great
20:35:46 <roasbeef> can check it out, but it is the case that there're some fundamental concurrency issues that can happen w/ the state machine as is, without the existence of some unadd type thing
20:36:40 <t-bast> roasbeef: great, I know that had been discussed in the past, but searching for it in the ML archives I couldn't find anything...
20:36:44 <rusty> Yeah, it's the nature of the beast. You can only prevent it by slowing the entire thing down, basically. Though it's less of a problem in practice.
20:38:44 <niftynei> this is a cool discussion to highlight t-bast
20:38:59 <niftynei> seems like we should keep moving along
20:39:13 <t-bast> ack!
20:39:16 <niftynei> if i've understood correctly, roasbeef is going to give it a look-see
20:39:44 <niftynei> #action roasbeef to check thread out
20:40:02 <niftynei> #info "it's the nature of the beast" - rusty
20:40:14 <niftynei> moving on
20:40:29 <t-bast> I think we could print a t-shirt for that "it's the nature of the beast"
20:40:32 <cdecker> Loving the quote, that should become our official motto
20:40:40 <t-bast> xD
20:40:47 <niftynei> we've now reached the crowd favorite "long term updates" segment
20:40:48 <roasbeef> kek
20:41:30 <roasbeef> blinded paths maybe?
20:41:37 <niftynei> ariard, and THEBLUEMATT would you be ok to start with tx pinning attack?
20:41:42 <cdecker> So say we all "natura bestiorum" xD
20:41:42 <roasbeef> or that
20:41:44 <rusty> Yeah, blinded paths are cooler!
20:41:51 <roasbeef> ;)
20:42:17 <niftynei> the r's have spoken, we'll start blind and then roll into pinning then
20:42:18 <roasbeef> cdecker: sounds kinda occult lol
20:42:34 <roasbeef> would prob make for a bad ass shirt/hoodie
20:42:35 <niftynei> #topic long term updates: blinded paths cc t-bast
20:42:36 <BlueMatt> yo
20:42:44 <niftynei> *BlueMatt sry
20:42:45 <BlueMatt> niftynei: sure, though why am I in CAPS
20:42:49 <cdecker> roasbeef: it's likely also monstrously wrong, my latin is a bit rusty :-)
20:42:59 <niftynei> my shift key finger got lazy :{
20:43:32 <t-bast> #link https://github.com/lightningnetwork/lightning-rfc/pull/765
20:43:36 <niftynei> t-bast do you want to give us an update?
20:43:57 <t-bast> So the gist of route blinding is that it's doing a Sphinx round from the recipient to an introduction point
20:44:19 <t-bast> And using a key derived from the shared_secrets to blind the scids
20:45:04 <t-bast> It requires nodes *after* the introduction point to receive a curve point *outside* of the onion, to be able to derive a tweak to their node_id that lets them decrypt the onion
20:45:24 <t-bast> First of all it needs more eyes on the crypto itself :)
20:45:34 <t-bast> Then there are two things that need to be challenged
20:45:58 <t-bast> The fact that it adds data to the tlv extension of update_add_htlc, making them distinguishable at the link layer (different packet size)
20:46:31 <t-bast> And the fact that an attacker is able to uncover one of the channels by probing the fees/cltv -> maybe they could do worse?
20:47:14 <rusty> t-bast: it also needs a bolt11 extension, though I've been thinking about that.
20:47:45 <t-bast> The added flexibility compared to rendezvous is what gives attackers some probing potential - we need to evaluate how much and how we can mitigate those without building something too complex
20:48:13 <t-bast> rusty: true, it needs changes to Bolt11 as well (but those could be for the best as nobody likes the current routing_hints xD)
20:48:21 <cdecker> I guess we couldn't normalize the TLV addition along the entire path, i.e., sending the adjunct point always, even if not needed?
20:48:33 <t-bast> cdecker: yes we definitely could
20:48:52 <t-bast> cdecker: that feels hackish, but it could be the way to go
20:49:17 <cdecker> But we can't blind it normally, since then we'd have the usual 'meet-in-the-middle' issue we had with RV
20:49:19 <rusty> cdecker: you can send an empty ping, too. (Though technically there are rules against sending pings when you're still waiting for a response)
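To make the construction t-bast just described a little more concrete, here is the general shape of the node_id-tweaking scheme; the notation below is illustrative only, and the authoritative derivation is in PR #765.

```latex
% Illustrative shape of the blinding construction described above (notation is
% not the spec's; see PR #765 for the real derivation). The recipient picks an
% ephemeral key e_0 and, for each hop i with node_id N_i (private key k_i):
\begin{align*}
  ss_i    &= H(e_i \cdot N_i)                                  && \text{ECDH shared secret with hop } i \\
  B_i     &= H(\texttt{"blinded\_node\_id"} \,\|\, ss_i) \cdot N_i && \text{blinded node\_id, a tweak of } N_i \\
  e_{i+1} &= H(E_i \,\|\, ss_i) \cdot e_i                      && \text{ephemeral key ratcheted forward, } E_i = e_i \cdot G
\end{align*}
% The curve point E_i travels *outside* the onion, so hop i can recompute
% ss_i = H(k_i \cdot E_i) with its private key and recover the tweak needed
% to decrypt its onion payload (which carries the blinded scid).
```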
20:49:27 <cdecker> So it'd be a decoy that is sent along
20:50:11 <t-bast> yes, a dummy decoy would be enough since it's only to thwart link layer attacker who would see encrypted bytes, it's just to make the lengths match
20:50:24 <niftynei> i have a dumb question. how does the route composer ensure that the capacity for that route is ok?
20:50:26 <cdecker> I was just thinking we could make it seamless by always going through the motions, to have constant-time and constant-size
20:50:39 <niftynei> or is the path not meant to be sized/usable?
20:51:28 <t-bast> niftynei: you run the path-finding like you'd do for a normal payment; ideally you'd select a path with the biggest capacity and additional channels to leverage non-strict forwarding
20:51:44 <t-bast> niftynei: but it's true that you can't guarantee that it will always succeed
20:52:06 <niftynei> so every payment failure will need a round-trip convo with the path initiator?
20:52:33 <niftynei> "hey this one didn't work, what else you got"
20:52:54 <cdecker> Yep, that's what I think it means
20:52:57 <t-bast> niftynei: that, or you'd provide multiple paths in the invoice, but both of these potentially also give more data to an attacker to unblind :/
20:53:02 <roasbeef> t-bast: this is an existing probing vector re fees/cltv tho?
20:53:06 <ariard> or do you send directly a set of blinded-path?
20:53:27 <t-bast> roasbeef: exactly, that's what I'm worried about
20:53:29 <cdecker> But it adds more flexibility to what my RV construction can offer (payment_hash and amounts are not committed to, so you could retry with different fees at least)
20:53:56 <niftynei> ack ok, this fits my understanding of the proposal then :)
20:54:19 <rusty> Note: there is a way to combat cltv/fee probes, and that is to put advice inside the enctlv telling the node what minima to use (presumably, >= its existing minima). Then you can force the path to have uniform values.
20:54:20 <t-bast> I think it's showing that when you give some flexibility around fees and cltv, you make this more usable, but you give away a bit of privacy by adding probing, so we need to find the sweet spot
20:54:38 <roasbeef> yeh as cdecker said, we can start to pad out update_add_htlc at all times, would need to look into the specifics of why it can't be attached in the onion
20:55:15 <roasbeef> how's that failure condition communicated? we assume a new sync connection between the sender/receiver?
20:55:20 <t-bast> roasbeef: you need it to know the key to decrypt the onion, that's why (at least in the current proposal), because the blinding works by tweaking node_ids
20:55:42 <cdecker> That being said, I think rusty is planning to use these for the offers, i.e., route communication to the real endpoint, not caring about fees, cltvs and amounts, and then we can have iterative approaches to fetch alternative paths in the background
20:56:09 <t-bast> roasbeef: in my mind it should be a single-shot, if we provide interaction between payer/recipient to get other paths I'm afraid the probing would be quite efficient
20:56:48 <ariard> t-bast: but how do you dissociate a real failure from a simulated-one for probing?
20:56:50 <roasbeef> t-bast: doesn't sound realistic in practice tho? like even stuff like a channel update being out of date that you need to fetch or something (I still need to read it fully for broader context too)
20:57:26 <t-bast> rusty: I like the proposal to raise the lower bound for all channels in the path, everyone would benefit from this and it could help with probing - I need to think about this more
20:57:32 <rusty> roasbeef: yeah, they're no more robust than route hints today.
20:57:37 <cdecker> Well, for a communication substrate this works quite nicely, doesn't it?
20:58:09 <t-bast> roasbeef: I don't know, if you do what rusty proposes (raise fee and cltv compared to current channel_update) it may work quite nicely in practice
20:58:26 <t-bast> And for a communication substrate it's fine indeed
20:58:44 <rusty> (Ofc you need to do this for every node on the path, even if it's redundant, since otherwise you're flagging which ones need to be told!)
20:58:53 <t-bast> ariard: what do you mean? who distinguishes that failure?
20:59:39 <t-bast> #action t-bast to investigate making fee/cltv uniform in the blinded path, describe that in the PR
20:59:59 <niftynei> FYI we are about at 1hr of meeting time for today
21:00:14 <ariard> t-bast: like you receive a blinded route in invoice, you do a first probing? you ask for a 2nd invoice with another blinded route...?
21:00:15 <t-bast> already?? :'(
21:01:22 <cdecker> True, I wanted to mention that the path encoded in the blinded path needs to be contiguous and each hop needs to support the extra transport, so this becomes useful only rather late when most nodes have updated
21:01:25 <niftynei> (rusty sneaking in that fee increase he promised at LN-conf in berlin last year)
21:01:29 <t-bast> ariard: the way I see it you shouldn't give two blinded routes to the same payer. But maybe in practice it's simply impossible to avoid, it's a good point I need to think about that a bit more E2E
21:02:13 <t-bast> cdecker: yes it would take time to be adopted, it needs enough nodes to support it
21:02:25 <rusty> t-bast: I also need to implement dummy path extension, so we can put advice in the spec.
21:02:29 <t-bast> cdecker: once it's implemented, the road to E2E adoption will be quite long
21:03:07 <t-bast> rusty: yes I think that part is quite useful too! in combination with uniform fees/cltv it may also fix ariard's comment
21:03:22 <ariard> t-bast: but you need sender authentication and we want both-side anonymity?
21:04:19 <niftynei> should we spend 10min on pinning?
21:04:26 <t-bast> ariard: yeah this is why I think preventing this is hard and potentially not doable, but the uniform fees combined with dummy path extension may allow you to give many blinded paths without risk. I need to spend more time on this
21:04:52 <t-bast> niftynei: ack, I think it would be good to discuss that a bit while we're all (?) here
21:04:57 <ariard> t-bast: yeah let's continue on the PR, replying to your comments
21:05:08 <niftynei> #action discussion to continue on PR
21:05:13 <t-bast> thanks everyone for the feedback, it already gave me a lot to work on! Don't hesitate to put comments on the PR
21:05:38 <cdecker> Need to drop off in 10, but let's quickly fix RBF pinning ;-)
21:05:40 <niftynei> #topic long term updates: mempool tx pinning attack cc ariard & BlueMatt
21:06:13 <niftynei> ariard or BlueMatt do you have a link you could pop into the chat for this?
21:06:37 <ariard> yes optech has a summary
21:06:39 <roasbeef> well there's that recent ML thread
21:06:43 <niftynei> iiuc this was mostly a ML thread?
21:06:55 <ariard> https://github.com/bitcoinops/bitcoinops.github.io/pull/394/files
21:06:59 <BlueMatt> mostly the ml thread i think is a good resource
21:07:09 <BlueMatt> optech has a good summary as well that describes also the responses on the ml
21:07:13 <rusty> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html ...
21:07:14 <BlueMatt> not just the high-level issue
21:07:32 <roasbeef> something I pointed out in the thread is that the proposed HTLC changes would require a pretty fundamental state machine change, which is still an open design question, what's in the current draft PR gets us pretty far along imo and ppl can layer the mempool stuff on top of that till we figure out the state machine changes
21:07:44 <niftynei> #link https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002639.html
21:08:04 <roasbeef> favoring incremental updates to fix existing issues and make progress
21:08:07 <niftynei> #link https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html
21:08:08 <BlueMatt> roasbeef: I disagree that its "pretty fundamental"
21:08:29 <roasbeef> BlueMatt: it would need a new set before commit_sig, seems big, no? idk how that would even look like atm
21:08:35 <BlueMatt> but it does require an additional step between update_add and commitment_signed
21:08:49 <roasbeef> new step*
21:09:00 <BlueMatt> ready_to_sign_commitment -> ok_go_ahead -> commitment_signed
21:09:00 <niftynei> #link ttps://github.com/bitcoinops.github.io/pull/394/files
21:09:01 <BlueMatt> :)
21:09:10 <niftynei> #link https://github.com/bitcoinops/bitcoinops.github.io/pull/394/files
21:09:11 <ariard> like provide_remote_sig_on_local
21:09:14 <roasbeef> ok now ensure that works on the concurrent async setting
21:09:29 <roasbeef> just saying it isn't fully figured out yet
21:09:29 <BlueMatt> no problem, ok_go_ahead ignores any things going the other way
21:09:37 <BlueMatt> and is "a part of the commitment_signed"
21:09:42 <BlueMatt> i dont think thats complicated
21:09:51 <roasbeef> the devil is always in the details ;)
21:09:59 <ariard> monitoring mempool isn't really a fix, it's easy to pour a local conflict in your mempool while announcing something else to the rest of the network
21:10:08 <roasbeef> but just pointing out how it sprawls, and ppl can deploy a solution that fixes a lot of issues today as we have
21:10:10 <BlueMatt> but, anyway, I think if we want to move forward, then we can do anchor without any htlc transaction changes, that seems reasonable and leaves this issue as-is
21:10:15 <ariard> and identifying mempools of big ln nodes shouldn't be that hard due to tx-relay leaks
21:10:26 <roasbeef> ariard: yeh and the other direction is mempool fixes, as jeremy rubin mentioned in the thread
21:10:47 <ariard> roasbeef: on the long term we agree, but it may take months/years :(
21:10:49 <BlueMatt> note that rubin mentioned specifically that he does *not* have any mempool changes to fix this queued up
21:10:55 <roasbeef> BlueMatt: yeah i'm on board w/ that, then we can continue to grind on this issue in the background
21:11:01 <BlueMatt> only that he's slowly working on slowly making discussing such changes an option :/
21:11:01 <niftynei> can someone summarize the main discussion point rn?
21:11:32 <BlueMatt> niftynei: essentially you dont learn preimages from txn in the mempool, this may result in you not getting the preimage.
21:11:41 <ariard> niftynei: what fixes can we come up which is reasonable to implement in a short timeline?
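A toy sketch of the extra round-trip BlueMatt is describing above; message names like ok_go_ahead are his strawman, not spec messages, and the class below encodes only the single rule "don't send commitment_signed until the peer has acknowledged the pending adds".

```python
# Toy model of the strawman flow discussed above (not a spec proposal):
# update_add_htlc -> ready_to_sign_commitment -> ok_go_ahead -> commitment_signed.

class ChannelEnd:
    def __init__(self):
        self.unacked_adds = 0  # adds sent but not yet acknowledged by the peer

    def send_update_add_htlc(self):
        self.unacked_adds += 1

    def recv_ok_go_ahead(self, acked):
        # Peer acknowledges `acked` pending adds (the hypothetical new message).
        self.unacked_adds -= acked

    def can_send_commitment_signed(self):
        # With today's protocol this is always allowed; the strawman makes it
        # conditional on the peer having acknowledged every pending add.
        return self.unacked_adds == 0
```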
21:11:45 <roasbeef> niftynei: move forward w/ anchors as is rn that has this issue with htlcs, or try to finish up this other working thing that requires some bigger changes to the state machine which aren't fully figured out yet
21:11:49 <BlueMatt> there isnt really any fix that can be done to the mempool policy that fixes it any time soon, and maybe ever.
21:11:58 <niftynei> ty!
21:12:04 <roasbeef> down with the mempool!
21:12:15 <cdecker> Maybe this is a dumb question: but why doesn't the RBF rule ask for a pro-rata increase in feerate, rather than a linear diff. Pro-rata (say each replacement needs 10% higher feerate) we'd have a far smaller issue with spamming, and we could lose the absolute fee diff requirement
21:12:16 <ariard> package relay should make us safe
21:12:27 <BlueMatt> we can probably do something with anchor that, like, has different htlc formats based on the relative values of the htlc to the feerate
21:12:29 * rusty starts reading thread, confused about why we're talking about changing the wire protocol...
21:12:31 <BlueMatt> but we'll need to figure that out
21:12:47 <roasbeef> cdecker: I think it does, but one issue is the absolute fee increase requirement, if it was just feerate this particular pinning issue wouldn't exist
21:13:23 <ariard> locking down htlcs and requiring CPFP for them will make them costlier which means more dust htlcs
21:13:25 <cdecker> That's my point: an exponential increase in feerate could replace the absolute fee increase requirement, couldn't it?
21:13:55 <niftynei> so to summarize: there's a mempool related info propagation problem that impacts htlc txs?
21:14:03 <ariard> cdecker: it asks for a pro-rata, bip125 rule4?
21:14:07 * BlueMatt notes that the whole rbf fee-based policy thing is *not* the only issue with rbf-pinning, so its somewhat of missing the point to focus on it.
21:14:07 <roasbeef> niftynei: yeh
21:14:19 <niftynei> kk
21:15:04 <cdecker> BlueMatt: that's true, but we've been burned by it a couple of times, so I was just wondering
21:15:25 <ariard> cdecker: yeah but for performance reasons it's not implemented right now, you just sum up absolute fees of conflicted tx + descendants
21:16:03 <roasbeef> niftynei: the anchors PR on rfc solves some but not all of the issues, but a big thing is that you can actually modify the commitment fee finally post broadcast, and also bump htlc fees. this issue can degrade into a bidding war for htlc inclusion basically
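Since BIP125 rules 3 and 4 keep coming up, here is a small worked example of why the absolute-fee requirement (rather than the feerate check) is what makes pinning cheap; the transaction sizes and fees below are made up for illustration.

```python
# Worked example of the BIP125 rule-3/rule-4 pinning economics discussed above.
# Rule 3: a replacement must pay at least the absolute fees of everything it
# evicts. Rule 4: plus an increment covering its own relay bandwidth.

INCREMENTAL_RELAY_FEERATE = 1                 # sat/vB, Bitcoin Core default

# Honest HTLC-claim transaction the victim wants confirmed:
honest_vsize, honest_fee = 200, 2_000         # 10 sat/vB

# Attacker pins the conflicting spend with a big, low-feerate descendant:
pin_vsize, pin_fee = 90_000, 90_000           # only 1 sat/vB, but huge in total

# To replace the pin, the victim must outbid the *total* fees it evicts:
required_fee = pin_fee + INCREMENTAL_RELAY_FEERATE * honest_vsize
required_feerate = required_fee / honest_vsize

print(f"victim must pay {required_fee} sat ({required_feerate:.0f} sat/vB) "
      f"to replace a package that itself pays only 1 sat/vB")
# -> 90200 sat (~451 sat/vB), even though the pin's own feerate is tiny.
```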
21:16:05 <niftynei> cdecker russell o'connor had a bitcoin ML post re: adjusting the feerate floor for RBFs a while ago
21:16:50 <cdecker> I mean if every replacement just required a 10% increase in feerate (and dropped the absolute fee increase requirement) wouldn't we end up in confirmation territory very quickly, and a node could actually decide locally whether the bump will be accepted or not
21:16:57 <BlueMatt> there's also the 'likely to be rbf'd' flag suggestion if we want to get into mempool policy changes
21:17:07 <cdecker> Oh, I'll go and read that, thanks niftynei ^^
21:17:42 <ariard> roasbeef: sadly there is a clever way to pin by exploiting ancestor size limit to obstruct replacement
21:18:13 <niftynei> cdecker https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015717.html
21:18:14 <cdecker> BlueMatt: while not directly actionable I do get the feeling that a mempool RBF logic simplification is the only durable fix for these issues, but I'm happy to be corrected on this one :-)
21:18:43 <BlueMatt> anyway, so it sounds like the next steps are: a) amend anchors to make no changes to htlc signatures, and resolve the other issues (needs test cases, debate about 1-vs-2 etc) on the ml or issue, then b) work on redoing things to use anchors (maybe based on feerates) in htlc txn
21:18:48 <roasbeef> ariard: yeh there's prob a ton of other pinning vectors, but at least we can make a step forward to actually allow fee updates, maybe just a summary on all the ways pinning can happen would be helpful, I don't think those limits had stuff like this in mind when they were added (as stuff like this didn't really exist) then
21:18:57 <roasbeef> BlueMatt: +1
21:19:00 <cdecker> BlueMatt: sgtm
21:19:22 <niftynei> #action amend anchors to make no changes to htlc signatures, and resolve the other issues (needs test cases, debate about 1-vs-2 etc) on the ml or issue
21:19:32 <niftynei> #action work on redoing things to use anchors (maybe based on feerates) in htlc txn
21:19:36 <BlueMatt> cdecker: I tend to agree, but I'm not sure what it *is* - there's a very good reason why it is what it is around DoS issues, and while I agree its not ideal even in its own right, the solution isnt clear. I have lots of ideas, but the resources needed to implement them are....nowhere near there
21:20:00 <t-bast> ariard: :+1:
21:20:00 <roasbeef> yeh would need to be like a big initiative
21:20:28 <ariard> roasbeef: true, all mempool rules should be documented as in bip125
21:20:35 <niftynei> i'm sorry to say that we are out of time
21:20:37 <ariard> to make it easier to reason for mempool doing offchain stuff
21:20:41 <ariard> *people
21:21:00 <niftynei> thanks for the great discussion today everyone and t-bast for the agenda
21:21:05 <BlueMatt> +
21:21:06 <BlueMatt> 1
21:21:13 <niftynei> and cdecker for the co-chair assist ;)
21:21:27 <t-bast> thanks niftynei!
21:21:27 <niftynei> i'm going to close the meeting but feel free to continue discussion
21:21:36 <niftynei> #endmeeting