19:00:17 #startmeeting
19:00:17 Meeting started Mon Feb 18 19:00:17 2019 UTC. The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:00:47 #info Agenda for today can be found here https://github.com/lightningnetwork/lightning-rfc/issues/574
19:00:57 hi.
19:01:03 Hi kanzure
19:01:26 And search by tag: https://github.com/lightningnetwork/lightning-rfc/labels/2019-02-18
19:01:32 Oh, by the way please tell me if I should remove someone from the ping list (don't want to wake people up accidentally)
19:02:26 Very short agenda today, let's see if we can get a quorum of all implementations and we should be done quickly
19:02:37 Added https://github.com/lightningnetwork/lightning-rfc/pull/570
19:02:39 hi
19:02:46 Hi johanth
19:03:00 hi johan :)
19:03:35 Ok, so we have Johan from LL, sstone from ACINQ, rusty, niftynei and me from BS
19:03:44 Does that count as quorum?
19:03:58 sounds about right
19:04:04 Cool
19:04:16 #topic Tagging v1.0
19:04:36 https://github.com/lightningnetwork/lightning-rfc/pull/570 is so nice, I (somehow) acked it twice!
19:04:45 So we wanted to tag v1.0 last time, but there was an issue with the onion test vectors if I remember correctly
19:05:04 https://github.com/lightningnetwork/lightning-rfc/milestone/2
19:05:45 Yes, the intermediary results had not been updated (the final onion result was correct). I went through and validated it against our implementation, and it matches.
19:05:51 Are we sure that the routing_info[4] unencrypted is correct? IMHO that should have the filler bytes at the end
19:07:00 ok, if rusty checked it I'm satisfied :-)
19:07:25 So any objections to applying PR #570?
19:07:34 lgtm :)
19:07:57 utACK :)
19:08:08 #agreed cdecker to merge PR #570
19:08:27 Sounds good, so that should resolve the last open issue for v1.0
19:08:38 Anything that we need to do before tagging?
19:08:58 \o/
19:09:14 cdecker: Not that I know of....
I will merge and tag immediately after meeting, so anything else we approve here goes *post* v1.0.
19:09:43 Ok, if something comes up, please let me know, otherwise rusty can merge and tag
19:09:59 #agreed rusty to tag v1.0 after merging #570
19:10:36 #topic PR #539: BOLT 3: add test vectors for htlc-transactions in case where CLTV is used as tie-breaker for sorting
19:10:43 Link:
19:10:45 https://github.com/lightningnetwork/lightning-rfc/pull/539
19:11:41 so, i reworked the PR because initially i took too much liberty redefining HTLC #1 and #2, now it should be okay but the vectors need a second ACK
19:12:03 rusty and araspitzu: am I correct in assuming you cross-checked the results and this is to be merged?
19:12:05 #action rusty to validate PR 539 test vectors.
19:12:15 cdecker: no, still a TODO....
19:12:36 note that after @rusty's comment the content of the vector changed
19:12:42 Ok, so this is more of a call to action to discuss the issue then :-)
19:12:44 at some point, i think we should aim to make all the test vectors machine readable and human parseable
19:12:51 Oh hi roasbeef :-)
19:12:54 But I agree that it's a really nice addition...
19:12:54 rn you need to do a ton of copy/paste
19:12:59 roasbeef: :+1:
19:13:04 Absolutely
19:13:07 one mega copy/paste would be nice
19:13:10 :+1:
19:13:18 agreed, IIRC a json based approach was suggested the last time?
19:13:24 (I have a json onion spec format for my onion proposal, so that should make everybody happy)
19:13:26 yeh many moons ago...
19:13:49 it might be worth looking at the bitcoin core test vectors for some guidance of structure.. at least some people have already written parsing code for that
19:13:52 araspitzu: yes, I have a JSON spec start, but that's more for actual conversations between peers. Might be nice to have variants for this too.
19:14:04 #agreed everybody to aim for machine-readable test-vectors (in JSON)
19:14:31 I think we should move them out of the docs proper, too, FWIW.
JSON in a spec doc seems a bit weird?
19:14:49 (eg test_vectors/01-xxx.json)
19:14:54 Yep, separate JSON docs that can be referenced would be nice
19:15:02 rusty :+1:
19:15:11 Then we can add new files rather than editing existing ones, too.
19:15:11 1.1 goals?
19:15:18 I was going for boltxy/test-x-y-z.json to group by bolt
19:15:47 Ok, but I think we digress slightly (also we seem to agree on this 110%)
19:15:49 cdecker: nice. Of course, format for test vectors will be a little ad-hoc, since they test different things, but we can get closer.
19:16:05 I'll open an issue to track...
19:16:19 On to feature bit unification :-)
19:16:34 #topic PR #571 Feature bit unification and assignment
19:17:34 Should be rather uncontroversial, except for the newly allocated feature bits
19:18:13 rusty: you just went through the list and assigned bits right? No attempt to group them by topic or anything like that?
19:18:26 cdecker: yep, I just walked the list on the wiki....
19:18:27 (I fully support that, just making sure)
19:18:32 Sounds good
19:18:44 i still don't get the rationale, w.r.t the unification/split
19:18:46 it's just renaming?
19:19:06 and also combining
19:19:19 but why?
19:19:31 roasbeef: it's just renaming, but also making sure the numbers don't clash, so we can advertise them both together in node_announce.
19:19:42 roasbeef: people want to search for nodes with certain peer features.
19:20:04 eg. option_dataloss_protect. Originally we weren't going to advertise those, since it's a local matter
19:22:07 no it's a global thing also, i can filter those peers out to connect to them
19:22:08 Might be an idea to merge channelfeatures and peerfeatures altogether (hard since it's the `init` message)
19:22:12 what's the rationale behind advertising peer+channel features on connection? shouldn't peer be all that matters?
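(For readers following along: the feature-bit semantics being argued over here rest on the spec's "it's OK to be odd" rule — unknown odd bits are optional, unknown even bits are compulsory. A minimal sketch of that rule, with hypothetical bit numbers rather than the assignments from PR #571:)

```python
# Sketch of the "it's OK to be odd" feature-bit rule discussed for PR #571.
# Bit numbers below are hypothetical, NOT the PR's actual assignments.
KNOWN_FEATURES = {0, 1, 3}  # bits this node understands (hypothetical)

def check_features(their_bits):
    """Return True if we can deal with a peer advertising `their_bits`.

    An unknown odd bit is optional and may be ignored; an unknown even
    bit is required, so we must fail (disconnect, or for a channel
    feature, refuse to route).
    """
    for bit in their_bits:
        if bit not in KNOWN_FEATURES and bit % 2 == 0:
            return False  # required feature we don't implement
    return True

assert check_features({0, 1})   # both known
assert check_features({5})      # unknown but odd -> ignorable
assert not check_features({4})  # unknown even -> must fail
```

(The unification question above is then just whether local/peer and global/channel bits live in one numbering space — so a bit set in `node_announcement` can't collide — not a change to this rule.)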
19:22:18 most local/global things also make sense on a global level, as you need to filter out peers to connect to them
19:22:21 There's one hole as the spec is currently proposed. If there's a channelfeature you need, and you only know about the channel because of a bolt11 route hint, there's no way to tell you. You'll try and get a required_channel_feature_missing error
19:22:57 so it combines the name spaces also?
19:22:58 roasbeef: the original "global" was for routing, "local" was for connecting. Agreed you want to advertise nodes on both.
19:23:06 roasbeef: ack.
19:23:55 Great moar profilability :-)
19:24:16 concept ACK and very welcome
19:24:40 rationale is still unclear to me
19:26:00 roasbeef: combined namespace because as you agree you want to advertise both. But they're still separate: there's a difference between "you can't gossip with me" and "you can't route through me".
19:26:46 Hence global/channel features are all you need to see in channel_announce.
19:26:58 We already have both when you connect to the peer, in init msg.
19:27:38 We should have made this a flat bitfield right from the get-go, having multiple bitfields with different interpretations is really confusing
19:28:14 flat bitfield would mean they share a feature space, when they're really distinct
19:28:15 cdecker: maybe the distinction will prove moot. Maybe if you're so old you can't route through me, you should just damn well upgrade.
19:29:35 Well, one option would be to not distinguish; if you don't understand a bit, just don't deal with me at all. If I want to make option_dataloss_protect compulsory on my node, and ancient nodes stop routing through me, do we care?
19:29:44 Meh, it should be a put-your-flags-here scratchspace, and when creating any message I can decide what I'm going to tell you about :-)
19:30:15 But anyhow, too late for that, we're left with the partitioned bitfields
19:30:26 cdecker: ick, that's *really* nasty for a test matrix!
I suddenly turn on a feature bit halfway through a conversation...
19:30:57 rusty: not really, the only time I tell you about my features is in the `init` message, so you get one shot for that
19:31:28 (cdecker: oh, I thought you were suggesting attaching a feature bitfield to every message!)
19:31:29 Anyhow, seems to me like PR #571 needs some more discussion on the tracker?
19:31:32 But I guess the q. is, are we happy with the draft assignments? We can append more, but it's nice to be able to code against something which feels tangible...
19:32:00 i don't think the channel features rename makes sense, for example we may switch the way we do the handshake in the future and i may need to fish that out of the global feature
19:32:03 Yeah, I'd definitely like to have the bits sorted out, even without the renaming and combining
19:32:12 aside from the rename, i don't really see much here (skimmed the PR a few times)
19:32:47 ohh, it combines them
19:32:57 Yep
19:33:03 roasbeef: yep, for *node announce* only.
19:33:29 rusty: I like the PR but maybe without bits for features that are not really specified yet. and if we change the handshake we'll have the same problem?
19:33:40 so now they must select distinct bits?
19:33:46 roasbeef: yeah...
19:34:11 sstone: I'd really like to have bits assigned so we can actually start working on this stuff
19:34:18 (We don't have a great place to put meta stuff in the spec, like "when selecting bits in future, don't clash!")
19:35:08 Hm, maybe we should have that then
19:35:36 BOLT 09 seems like an excellent place for such a thing
19:35:40 cdecker: ok, the current selection is consistent except maybe for option_fec_gossip
19:36:16 i don't think bits need to be assigned before we can start working on anything
19:36:20 Hm, why is that?
sstone
19:36:24 just select a random bit on testnet, gg
19:36:39 the bigger q is how bits are assigned in the first place
19:36:48 and if there're reserved sections, etc
19:36:57 Yeah right, and then we get the funny clashes we had when lnd joined the spec testnet xD
19:37:02 there's a ton of bits, so don't see it being an issue in the near future, but there's no guidance rn
19:37:20 extra tests for your implementation ;)
19:37:32 Well, I really dislike the way BIP numbers are assigned (with gaps and grouped by topics)
19:37:38 roasbeef: I think we should be pretty free with assigning them, we can always reassign if features never get merged and we want to.
19:37:41 cdecker: because it's the only feature that I don't think we'll get (will be very happy if s.o proves me wrong :))
19:37:55 We should just incrementally assign the bits so that we end up with a compact representation when serialized
19:38:10 do we need a blockchain for this?
19:38:19 roasbeef: LOL
19:38:31 who comes "first"?
19:38:33 kek
19:38:39 I think generally people should self-assign when they create a PR for the spec, and we do final disambiguation at merge time (which, given BOLT 9 clashes, will be obvious)
19:38:40 The one with the LN Torch!
19:39:13 This is group therapy and the torch is our talking pillow xD
19:39:49 sstone: I think there are quite a few features in there that we're unlikely to see realized (2p_ecdsa for example)
19:40:04 #action Rusty to write BOLT-BOLT proposing meta stuff like how to assign feature bits?
19:40:25 ok then I guess we can always reassign
19:40:27 "This is group therapy and the torch is our talking pillow xD" <3
19:40:30 (I actually just assigned them in the PR there to avoid this bikeshed, but there we are :)
19:40:42 Ok, it seems we need to bikeshed a bit more on #571, would you agree?
19:41:02 Yep...
but FWIW, I'll use those draft feature bits in my test code :)
19:41:15 Sounds good
19:41:25 So let's move on
19:41:38 #agreed everyone to bikeshed on #571 a bit more
19:41:58 #topic PR #572 Specify tlv format, initial MPP
19:42:05 Connor is not here to defend TLV; roasbeef attack!
19:42:19 rusty: i'm here
19:42:49 I kind of dislike even allowing >256 byte values
19:43:28 cdecker: hmm, for the length? We could simply reserve >= 253
19:43:31 So for me `var_int` for the length feels really weird
19:43:40 Yeah
19:43:56 We have 1300 bytes of total payload we can split up along the route
19:44:25 cdecker: it's even more awkward if we need it later though. 255 bytes is not *quite* enough. And this can be used in other msgs, which don't have so much space restriction.
19:44:28 1 byte type seems very limiting, would also say tlv on the wire protocol has more impact than the onion atm
19:44:29 Having a single value consume 20% of that is a bit much (and might cause route-length issues if people really start using it)
19:44:37 tlv should also be optional in the onion as well
19:44:58 roasbeef: it is... a 0 byte terminates it, so if first byte is 0, it's not there.
19:45:02 which means we need a length indicator for each cell/blob, or some padding scheme
19:45:30 TLV is pretty much what gates us for multi-part payments, rendez-vous and spontaneous payments, so I do think that TLV is rather urgent
19:45:56 (Though note this PR doesn't talk about *how* this gets encoded in the onion; that's up to the multi-frame proposals...)
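(The TLV stream under discussion — a type byte, a length, then the value, with a zero type byte terminating the stream inside the onion — could look roughly like this. This is only a sketch of the draft in PR #572 as described above, with a 1-byte length for simplicity; the length encoding is exactly what is being bikeshedded below.)

```python
def parse_tlv_stream(data):
    """Parse a draft-#572-style TLV stream: 1-byte type, 1-byte length,
    then `length` value bytes. A type byte of 0 terminates the stream,
    so zero-padded unused onion payload simply reads as "no TLV here"
    (rusty's point above about TLV being optional in the onion)."""
    records, i = [], 0
    while i < len(data):
        t = data[i]
        if t == 0:  # terminator / padding
            break
        length = data[i + 1]
        value = data[i + 2 : i + 2 + length]
        if len(value) != length:
            raise ValueError("truncated TLV record")
        records.append((t, value))
        i += 2 + length
    return records

# hypothetical record: type 1, 4-byte value, then zero padding
assert parse_tlv_stream(bytes([1, 4, 0xde, 0xad, 0xbe, 0xef, 0, 0])) == \
    [(1, b"\xde\xad\xbe\xef")]
```

(The overhead argument in the discussion follows directly: each record costs at least 2 bytes of framing, which matters when a hop only has 32 or 65 bytes of payload.)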
19:46:01 Wait, the 0-byte is usable for padding
19:46:07 all that work can already be underway and proceed concurrently really
19:46:29 but for the cases that want a compact encoding, the tlv just wastes a ton of space, given that there's 32/65 bytes available per hop
19:46:29 why does even/odd need to be in the encoding, rather than in feature bit negotiation
19:46:29 Yeah, but we'd be reinventing the wheel when it comes to serialization formats
19:46:40 but if there's no outside type byte, imo you need 2 bytes for the type
19:46:44 roasbeef: wastes 1 byte you mean?
19:46:58 if the outside type byte is there (tlv optional) then there's more working room
19:47:21 What outside type byte? Where is that signaled?
19:47:24 which gets us to 65k combos which should be enough, 256 seems too small given app developer interest in using this space
19:47:27 Do you mean the realm?
19:47:46 imo the realm should be kept intact, and another byte from the onion padding used to signal the type
19:48:01 just for clean separation, "this is the chain", "this is the metadata"
19:48:02 roasbeef: yes, that's exactly what TLV *is*.
19:48:10 ?
19:48:21 the byte used to signal the type is the T in TLV :)
19:48:30 yes, 256 isn't enough imo
19:48:37 or 55 w/e
19:49:33 Ok, we can make the type `var_int` (but I doubt we're ever going to hit that)
19:49:35 imo onion tlv should be considered separate from wire tlv. not that i want the extra complexity, but they have diff constraints
19:50:00 either two bytes there, or the outside type, don't think var int needed there
19:50:01 roasbeef: I really can't see more than a few dozen. If the app level wants to do something, they need to signal it somehow anyway.
19:50:21 rusty: i constantly get messages from devs asking how they can use the space for their applications
19:50:37 bitconner: I was trying to start with general TLV stuff, then go into the onion. I really don't want multiple TLVs if we can avoid it.
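(The `var_int` being debated is presumably the Bitcoin-style CompactSize encoding, which is why "simply reserve >= 253" comes up earlier: values below 253 cost a single byte, while the markers 0xfd/0xfe/0xff prefix 2-, 4- and 8-byte little-endian encodings. A sketch under that assumption:)

```python
import struct

def write_varint(n):
    """Bitcoin-style CompactSize: one byte below 253, otherwise a marker
    byte (0xfd/0xfe/0xff) followed by a 2-, 4- or 8-byte little-endian
    value. Inside a ~1300-byte onion payload only the 1- and 3-byte
    forms can ever be needed, which is bitconner's argument above."""
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + struct.pack("<H", n)
    if n <= 0xffffffff:
        return b"\xfe" + struct.pack("<I", n)
    return b"\xff" + struct.pack("<Q", n)

assert write_varint(252) == b"\xfc"          # still a single byte
assert write_varint(253) == b"\xfd\xfd\x00"  # first value needing 3 bytes
assert write_varint(1300) == b"\xfd\x14\x05" # max onion payload fits in 3
```

(So the efficiency gap between a plain 1-byte length and this varint is literally just the 253/254/255 cases, as noted below, at the cost of capping values at 252 bytes.)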
19:50:47 you may not see the possibilities, but the devs do
19:50:52 roasbeef: yeah, me too.
19:51:17 the two areas have diff constraints (wire vs onion)
19:51:19 Well then tell them to use TLV + different realm that gives you 2^4 * 256 possible types
19:51:58 type byte would be 65k, breaking off bits from realm seems unnecessary
19:52:12 rusty: tbh the only thing that would really change between the two is the varint (i think?), which is just a matter of changing constants
19:52:40 roasbeef: we can certainly assign an "end2end" type pair, and they can put whatever they want in there.
19:53:11 bitconner: so you're saying onion TLV would not use varint length? That seems weird.
19:53:12 oops, laptop was having issues and just got back online from fc... guess I missed a second meeting in a row :(
19:53:40 BlueMatt: still discussing TLV bikesheds, feel free to join the fight
19:54:00 rusty: the wire protocol needs to handle fields up to 65kb, the onion doesn't. they can use the same algo, just have different parameters
19:54:05 rusty: it doesn't need up to 4 bytes to signal the length, at most we can fit 1.3kb or so
19:54:16 bitconner: def
19:54:43 Yeah, and I feel uncomfortable allowing even anything >256 in the onion
19:54:45 bitconner: the efficiency difference is literally the case where you have 253, 254 or 255 bytes then?
19:55:11 But ok, var_int type and var_int length to get a unified TLV spec for wire and onion sounds like a tradeoff
19:56:22 Ok, we're coming up on time, we have 3 options here: continue discussing here, continue discussing on the issue, or accepting it
19:56:44 cdecker: I think there's a valid case for up to 512 bytes. But yeah, bikeshedding.
19:56:48 I think we don't have a quorum for the latter, so more discussion is needed
19:57:09 Does everybody agree on deferring to the issue tracker?
19:57:42 Sure... can we discuss SURBs now?
19:57:58 Totally forgot about them
19:58:13 (BTW, randomly I looked at some JS implementations of BOLT11 decoding...
it's a nightmare and I need to write many more test vectors for BOLT11)
19:58:19 #agreed everybody to weigh in on the discussion of PR #572
19:58:42 #topic SURBs as a Solution for Protocol-Level Payment ACKs
19:58:51 cdecker: agreed
19:59:04 in the future we should really rip out the bech32 from bolt11, it makes things soooo much more complicated vs just a byte buffer that uses base32
19:59:37 The SURBs mail refers a lot to the EOB proposal by roasbeef, which I just created a counter-proposal for
19:59:43 since we really don't get much from bech32 since we're encoding such large strings, with the switch over, visually it can stay the same
19:59:55 does it? they're distinct cdecker
20:00:01 (I was planning to send that mail much earlier, but didn't finish it in time)
20:00:10 cdecker: yeah, I thought they were indep?
20:00:32 They are, I'm just re-reading the SURBs mail and got confused :-)
20:00:36 kek
20:01:15 you mean the EOB stuff cdecker ?
20:01:25 flat encoding vs multi-hop encoding
20:01:52 Note: I'm not sure we'd use SURBs, since we have a much more reliable backwards transmission mechanism than attempting a random backroute, but they are *cool* and we should discuss them.
20:02:01 the SURB stuff is a possible candidate for the soft errors/acks, tho it's more invasive than re-using the error tunnel
20:02:09 is it more reliable?
20:02:20 the backroute isn't random, the sender selects it
20:02:32 Oh hey, we don't actually need to re-introduce the e2e packet in the onion format itself, we can just add another field in parallel to the onion that also gets encrypted
20:02:57 But I honestly don't get why SURBs are better than the current error-return mechanism
20:03:00 roasbeef: pick a random route today and watch it fail, though. But you know the forward route succeeded.
20:03:01 yeh that's what this is, it's a backwards packet
20:03:19 cdecker: from a traffic analysis perspective, it's indistinguishable from regular payments
20:03:40 as they're the same size, so there's more cover traffic for all
20:04:12 roasbeef: well, it's totally distinguishable because real payments follow a ->, <-, -> pattern, but that can be fixed.
20:04:14 rusty: sorry still not following, random route for what? the backwards route?
20:04:15 Ok, so why not just add a SURB to add_htlc that also gets processed like the rest of the onion (layer peeling)
20:04:38 roasbeef: yeah
20:04:58 rusty: if this was also the error source, acks and errors look the same more or less, and that amongst the possible probing traffic as well
20:05:16 it's not random though... the receiver doesn't know who the sender is, how can they pick a route back to them?
20:05:18 roasbeef: ah, agreed with that.
20:05:31 cdecker: even more packet expansion
20:05:36 roasbeef: they don't need to, they just reply. It's implied.
20:05:54 roasbeef: and everyone has financial incentive to keep that route working.
20:06:08 note that this is more invasive than re-using the error tunnel stuff atm, it just occurred to me that we removed this feature when we didn't have a use, but imo it's a better way to send back errors than the other thing we came up w/
20:06:39 roasbeef: I disagree, it's far less reliable in practice than inline soft errors.
20:06:47 why is it less reliable?
20:06:51 roasbeef: but, better for the network because more traffic.
20:07:02 roasbeef: because most routes fail.
20:07:17 why would this fail? it's independent of payment bandwidth along the route
20:07:31 there's no payment for the ack, it can proceed on any path
20:07:41 doesn't even need to be the same as the forward path, just like hornet
20:08:42 roasbeef: but in practice it will be the same, because, as I keep saying, if you pick a random route it often fails.
Ask niftynei or cdecker about their rate of success with probes....
20:09:02 probes don't apply here
20:09:07 probes fail due to insufficient payment bandwidth
20:09:11 this isn't sending a new payment
20:09:23 this is an ack
20:09:24 roasbeef: they also fail because nodes are down.
20:09:42 Or temporarily disconnected.
20:09:47 sure just make sure the backwards route if it isn't the same goes thru active nodes
20:10:50 My probes fail mostly due to temporary failures, not insufficient capacity...
20:10:50 this can also be a diff way to do rendezvous that lets you do max hops in both directions
20:10:53 roasbeef: a peer's visibility of the network is only going to get worse in future. I don't think we can do that.
20:11:07 using a diff route isn't required, it's an option
20:11:22 roasbeef: an option nobody will use, because it won't work. See ^cdecker
20:11:28 /shrug
20:11:30 up to the sender
20:12:19 Well, I think we should add soft errors, and keep SURBs up our sleeves in case we find a better use. I really like them, I just don't think they're going to win here :(
20:13:06 Agreed, they're a really cool tool, but one that we don't have a necessity for right now
20:13:17 (believe me I like having tools :-D)
20:13:44 Ok, seems like we reached 14 minutes overtime, shall we call it a day?
20:14:40 Any last objections?
20:14:44 I think so...
20:14:54 #agreed everybody to discuss SURBs on the mailing list
20:15:27 And I'm looking forward to the feedback to the multi-frame proposals (both mine and roasbeef's) on the ML and the issue trackers
20:15:38 #endmeeting