19:00:17 <cdecker> #startmeeting
19:00:17 <lightningbot> Meeting started Mon Feb 18 19:00:17 2019 UTC.  The chair is cdecker. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:17 <lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:00:47 <cdecker> #info Agenda for today can be found here https://github.com/lightningnetwork/lightning-rfc/issues/574
19:00:57 <kanzure> hi.
19:01:03 <cdecker> Hi kanzure
19:01:26 <rusty> And search by tag: https://github.com/lightningnetwork/lightning-rfc/labels/2019-02-18
19:01:32 <cdecker> Oh, by the way please tell me if I should remove someone from the ping list (don't want to wake people up accidentally)
19:02:26 <cdecker> Very short agenda today, let's see if we can get a quorum of all implementations and we should be done quickly
19:02:37 <rusty> Added https://github.com/lightningnetwork/lightning-rfc/pull/570
19:02:39 <lndbot> <johanth> hi
19:02:46 <cdecker> Hi johanth
19:03:00 <bitconner> hi johan :)
19:03:35 <cdecker> Ok, so we have Johan from LL, sstone from ACINQ, rusty, niftynei and me from BS
19:03:44 <cdecker> Does that count as quorum?
19:03:58 <bitconner> sounds about right
19:04:04 <cdecker> Cool
19:04:16 <cdecker> #topic Tagging v1.0
19:04:36 <rusty> https://github.com/lightningnetwork/lightning-rfc/pull/570 is so nice, I (somehow) acked it twice!
19:04:45 <cdecker> So we wanted to tag v1.0 last time, but there was an issue with the onion test vectors if I remember correctly
19:05:04 <rusty> https://github.com/lightningnetwork/lightning-rfc/milestone/2
19:05:45 <rusty> Yes, the intermediate results had not been updated (the final onion result was correct).  I went through and validated it against our implementation, and it matches.
19:05:51 <cdecker> Are we sure that the unencrypted routing_info[4] is correct? IMHO that should have the filler bytes at the end
19:07:00 <cdecker> ok, if rusty checked it I'm satisfied :-)
19:07:25 <cdecker> So any objections to applying PR #570?
19:07:34 <sstone> lgtm :)
19:07:57 <bitconner> utACK :)
19:08:08 <cdecker> #agreed cdecker to merge PR #570
19:08:27 <cdecker> Sounds good, so that should resolve the last open issue for v1.0
19:08:38 <cdecker> Anything that we need to do before tagging?
19:08:58 <lndbot> <johanth> \o/
19:09:14 <rusty> cdecker: Not that I know of.... I will merge and tag immediately after meeting, so anything else we approve here goes *post* v1.0.
19:09:43 <cdecker> Ok, if something comes up, please let me know, otherwise rusty can merge and tag
19:09:59 <cdecker> #agreed rusty to tag v1.0 after merging #570
19:10:36 <cdecker> #topic PR #539: BOLT 3: add test vectors for htlc-transactions in case where CLTV is used as tie-breaker for sorting
19:10:43 <cdecker> Link:
19:10:45 <cdecker> https://github.com/lightningnetwork/lightning-rfc/pull/539
19:11:41 <araspitzu> so, i reworked the PR because initially i took too much liberty redefining HTLC #1 and #2, now it should be okay but the vectors need a second ACK
19:12:03 <cdecker> rusty and araspitzu: am I correct in assuming you cross-checked the results and this is to be merged?
19:12:05 <rusty> #action rusty to validate PR 539 test vectors.
19:12:15 <rusty> cdecker: no, still a TODO....
19:12:36 <araspitzu> note that after @rusty's comment the content of the vector changed
19:12:42 <cdecker> Ok, so this is more of a call to action to discuss the issue then :-)
19:12:44 <roasbeef> at some point, i think we should aim to make all the test vectors machine readable and human parseable
19:12:51 <cdecker> Oh hi roasbeef :-)
19:12:54 <rusty> But I agree that it's a really nice addition...
19:12:54 <roasbeef> rn you need to do a ton of copy/paste
19:12:59 <rusty> roasbeef: :+1:
19:13:04 <cdecker> Absolutely
19:13:07 <roasbeef> one mega copy/paste would be nice
19:13:10 <Chris_Stewart_5> :+1:
19:13:18 <araspitzu> agreed, IIRC a json based approach was suggested the last time?
19:13:24 <cdecker> (I have a json onion spec format for my onion proposal, so that should make everybody happy)
19:13:26 <roasbeef> yeh many moons ago...
19:13:49 <Chris_Stewart_5> it might be worth looking at the bitcoin core test vectors for some guidance on structure... at least some people have already written parsing code for that
19:13:52 <rusty> araspitzu: yes, I have a JSON spec start, but that's more for actual conversations between peers.  Might be nice to have variants for this too.
19:14:04 <cdecker> #agreed everybody to aim for machine-readable test-vectors (in JSON)
19:14:31 <rusty> I think we should move them out of the docs proper, too, FWIW.  JSON in a spec doc seems a bit weird?
19:14:49 <rusty> (eg test_vectors/01-xxx.json)
19:14:54 <cdecker> Yep, separate JSON docs that can be referenced would be nice
19:15:02 <niftynei> rusty :+1:
19:15:11 <rusty> Then we can add new files rather than editing existing ones, too.
19:15:11 <bitconner> 1.1 goals?
19:15:18 <cdecker> I was going for boltxy/test-x-y-z.json to group by bolt
19:15:47 <cdecker> Ok, but I think we digress slightly (also we seem to agree on this 110%)
19:15:49 <rusty> cdecker: nice.  Of course, format for test vectors will be a little ad-hoc, since they test different things, but we can get closer.
19:16:05 <rusty> I'll open an issue to track...
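A rough sketch of what one of these machine-readable vector files could look like (the file name, field names, and values below are purely illustrative, not a format anyone agreed on in the meeting):

```python
import json

# Hypothetical shape for a machine-readable vector file such as
# bolt03/htlc-tx-sorting.json -- field names and values are illustrative only.
vector = {
    "description": "BOLT 3: HTLC-transaction ordering, CLTV as tie-breaker",
    "inputs": {
        "commitment_number": 42,
        "htlcs": [
            {"amount_msat": 1000000, "cltv_expiry": 500, "payment_hash": "00" * 32},
            {"amount_msat": 1000000, "cltv_expiry": 501, "payment_hash": "11" * 32},
        ],
    },
    "expected": {
        "htlc_output_order": [0, 1],
    },
}

# One file per vector keeps additions append-only: new tests are new files.
print(json.dumps(vector, indent=2))
```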
19:16:19 <cdecker> On to feature bit unification :-)
19:16:34 <cdecker> #topic PR #571 Feature bit unification and assignment
19:17:34 <cdecker> Should be rather uncontroversial, except for the newly allocated feature bits
19:18:13 <cdecker> rusty: you just went through the list and assigned bits right? No attempt to group them by topic or anything like that?
19:18:26 <rusty> cdecker: yep, I just walked the list on the wiki....
19:18:27 <cdecker> (I fully support that, just making sure)
19:18:32 <cdecker> Sounds good
19:18:44 <roasbeef> i still don't get the rationale, w.r.t the unification/split
19:18:46 <roasbeef> it's just renaming?
19:19:06 <sstone> and also combining
19:19:19 <roasbeef> but why?
19:19:31 <rusty> roasbeef: it's just renaming, but also making sure the numbers don't clash, so we can advertize them both together in node_announce.
19:19:42 <rusty> roasbeef: people want to search for nodes with certain peer features.
19:20:04 <rusty> eg. option_dataloss_protect. Originally we weren't going to advertize those, since it's a local matter
19:22:07 <roasbeef> no it's a global thing also, i can filter those peers out to connect to them
19:22:08 <cdecker> Might be an idea to merge channelfeatures and peerfeatures altogether (hard since it's the `init` message)
19:22:12 <bitconner> what's the rationale behind advertising peer+channel features on connection? shouldn't peer be all that matters?
19:22:18 <roasbeef> most local/global things also make sense on a global level, as you need to filter out peers to connect to them
19:22:21 <rusty> There's one hole as the spec is currently proposed.  If there's a channelfeature you need, and you only know about the channel because of a bolt11 route hint, there's no way to tell you.  You'll try and get a required_channel_feature_missing error
19:22:57 <roasbeef> so it combines the name spaces also?
19:22:58 <rusty> roasbeef: the original "global" was for routing, "local" was for connecting.  Agreed you want to advertize nodes on both.
19:23:06 <rusty> roasbeef: ack.
19:23:55 <cdecker> Great moar profilability :-)
19:24:16 <araspitzu> concept ACK and very welcome
19:24:40 <roasbeef> rationale is still unclear to me
19:26:00 <rusty> roasbeef: combined namespace because as you agree you want to advertize both.  But they're still separate:  there's a difference between "you can't gossip with me" and "you can't route through me".
19:26:46 <rusty> Hence global/channel features are all you need to see in channel_announce.
19:26:58 <rusty> We already have both when you connect to the peer, in init msg.
19:27:38 <cdecker> We should have made this a flat bitfield right from the get-go, having multiple bitfields with different interpretations is really confusing
19:28:14 <roasbeef> flat bitfield would mean they share a feature space, when they're really distinct
19:28:15 <rusty> cdecker: <shrug> maybe the distinction will prove moot.  Maybe if you're so old you can't route through me, you should just damn well upgrade.
19:29:35 <rusty> Well, one option would be to not distinguish; if you don't understand a bit, just don't deal with me at all. If I want to make option_dataloss_protect compulsory on my node, and ancient nodes stop routing through me, do we care?
19:29:44 <cdecker> Meh, it should be a put-your-flags-here scratchspace, and when creating any message I can decide what I'm going to tell you about :-)
19:30:15 <cdecker> But anyhow, too late for that, we're left with the partitioned bitfields
19:30:26 <rusty> cdecker: ick, that's *really* nasty for a test matrix!  I suddenly turn on a feature bit halfway through a conversation...
19:30:57 <cdecker> rusty: not really, the only time I tell you about my features is in the `init` message, so you get one shot for that
19:31:28 <rusty> (cdecker: oh, I thought you were suggesting attaching a feature bitfield to every message!)
19:31:29 <cdecker> Anyhow, seems to me like PR #571 needs some more discussion on the tracker?
19:31:32 <rusty> But I guess the q. is, are we happy with the draft assignments?  We can append more, but it's nice to be able to code against something which feels tangible...
19:32:00 <roasbeef> i don't think the channel features rename makes sense, for example we may switch the way we do the handshake in the future and i may need to fish that out of the global feature
19:32:03 <cdecker> Yeah, I'd definitely like to have the bits sorted out, even without the renaming and combining
19:32:12 <roasbeef> aside from the rename, i don't really see much here (skimmed the PR a few times)
19:32:47 <roasbeef> ohh, it combines them
19:32:57 <cdecker> Yep
19:33:03 <rusty> roasbeef: yep, for *node announce* only.
19:33:29 <sstone> rusty: I like the PR but maybe without bits for features that are not really specified yet. And if we change the handshake we'll have the same problem?
19:33:40 <roasbeef> so now they must select distinct bits?
19:33:46 <rusty> roasbeef: yeah...
19:34:11 <cdecker> sstone: I'd really like to have bits assigned so we can actually start working on this stuff
19:34:18 <rusty> (We don't have a great place to put meta stuff in the spec, like "when selecting bits in future, don't clash!")
19:35:08 <cdecker> Hm, maybe we should have that then
19:35:36 <cdecker> BOLT 09 seems like an excellent place for such a thing
19:35:40 <sstone> cdecker: ok, the current selection is consistent except maybe for option_fec_gossip
19:36:16 <roasbeef> i don't think bits need to be assigned before we can start working on anything
19:36:20 <cdecker> Hm, why is that? sstone
19:36:24 <roasbeef> just select a random bit on testnet, gg
19:36:39 <roasbeef> the bigger q is how bits are assigned in the first place
19:36:48 <roasbeef> and if there're reserved sections, etc
19:36:57 <cdecker> Yeah right, and then we get the funny clashes we had when lnd joined the spec testnet xD
19:37:02 <roasbeef> there's a ton of bits, so don't see it being an issue in the near future, but there's no guidance rn
19:37:20 <roasbeef> extra tests for your implementation ;)
19:37:32 <cdecker> Well, I really dislike the way BIP numbers are assigned (with gaps and grouped by topics)
19:37:38 <rusty> roasbeef: I think we should be pretty free with assigning them, we can always reassign if features never get merged and we want to.
19:37:41 <sstone> cdecker: because it's the only feature that I don't think we'll get (will be very happy if someone proves me wrong :))
19:37:55 <cdecker> We should just incrementally assign the bits so that we end up with a compact representation when serialized
19:38:10 <roasbeef> do we need a blockchain for this?
19:38:19 <rusty> roasbeef: LOL
19:38:31 <roasbeef> who comes "first"?
19:38:33 <roasbeef> kek
19:38:39 <rusty> I think generally people should self-assign when they create a PR for the spec, and we do final disambiguation at merge time (which, given BOLT 9 clashes, will be obvious)
19:38:40 <cdecker> The one with the LN Torch!
19:39:13 <cdecker> This is group therapy and the torch is our talking pillow xD
19:39:49 <cdecker> sstone: I think there are quite a few features in there that we're unlikely to see realized (2p_ecdsa for example)
19:40:04 <rusty> ? #action Rusty to write BOLT-BOLT proposing meta stuff like how to assign feature bits?
19:40:25 <sstone> ok then I guess we can always reassign
19:40:27 <niftynei> "This is group therapy and the torch is our talking pillow xD" <3
19:40:30 <rusty> (I actually just assigned them in the PR there to avoid this bikeshed, but there we are :)
19:40:42 <cdecker> Ok, it seems we need to bikeshed a bit more on #571, would you agree?
19:41:02 <rusty> Yep... but FWIW, I'll use those draft feature bits in my test code :)
19:41:15 <cdecker> Sounds good
19:41:25 <cdecker> So let's move on
19:41:38 <cdecker> #agreed everyone to bikeshed on #571 a bit more
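For background on how a combined feature bitfield like the one discussed above would be consumed, here is a minimal sketch of the "it's OK to be odd" rule applied to a BOLT 9-style feature vector (the helper names and the example bit numbers are illustrative, not the assignments proposed in #571):

```python
def feature_bit_set(features: bytes, bit: int) -> bool:
    """Check a bit in a big-endian feature bitfield (bit 0 is the
    least-significant bit of the last byte)."""
    byte_index = len(features) - 1 - bit // 8
    return byte_index >= 0 and bool(features[byte_index] & (1 << (bit % 8)))


def unknown_required_bits(features: bytes, known: set) -> list:
    """Even bits are "required": if one is set and we don't recognise it,
    we must not use this peer/channel.  Unknown odd bits can simply be
    ignored ("it's OK to be odd")."""
    return [
        bit
        for bit in range(len(features) * 8)
        if bit % 2 == 0 and bit not in known and feature_bit_set(features, bit)
    ]


# Purely illustrative: a node advertising bits 0 and 3 in a one-byte field.
features = bytes([0b00001001])
print(feature_bit_set(features, 0), feature_bit_set(features, 3))  # True True
print(unknown_required_bits(features, known={3}))  # [0] -> can't use this node
```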
19:41:58 <cdecker> #topic PR #572 Specify tlv format, initial MPP
19:42:05 <rusty> Connor is not here to defend TLV; roasbeef attack!
19:42:19 <bitconner> rusty: i'm here
19:42:49 <cdecker> I kind of dislike even allowing >256 byte values
19:43:28 <rusty> cdecker: hmm, for the length?  We could simply reserve >= 253
19:43:31 <cdecker> So for me `var_int` for the length feels really weird
19:43:40 <cdecker> Yeah
19:43:56 <cdecker> We have 1300 bytes of total payload we can split up along the route
19:44:25 <rusty> cdecker: it's even more awkward if we need it later though.  255 bytes is not *quite* enough.  And this can be used in other msgs, which don't have so much space restriction.
19:44:28 <roasbeef> 1 byte type seems very limiting, would also say tlv on the wire protocol has more impact than the onion atm
19:44:29 <cdecker> Having a single value consume 20% of that is a bit much (and might cause route-length issues if people really start using it)
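A quick back-of-the-envelope check on the figures in this exchange (20 hops of 65 bytes each gives the 1300 bytes of routing info, so one 256-byte value is roughly a fifth of it):

```python
# Back-of-the-envelope check on the numbers mentioned above.
hops = 20
bytes_per_hop = 65
routing_info = hops * bytes_per_hop          # 1300 bytes of per-hop data in the onion
single_value = 256

print(routing_info)                          # 1300
print(f"{single_value / routing_info:.0%}")  # ~20% consumed by one 256-byte value
```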
19:44:37 <roasbeef> tlv should also be optional in the onion as well
19:44:58 <rusty> roasbeef: it is... a 0 byte terminates it, so if first byte is 0, it's not there.
19:45:02 <roasbeef> which means we need a length indicator for each cell/blob, or some padding scheme
19:45:30 <cdecker> TLV is pretty much what gates us for multi-part payments, rendez-vous and spontaneous payments, so I do think that TLV is rather urgent
19:45:56 <rusty> (Though note this PR doesn't talk about *how* this gets encoded in the onion; that's up to the multi-frame proposals...)
19:46:01 <cdecker> Wait, the 0-byte is useable for padding
19:46:07 <roasbeef> all that work can already be underway and proceeed concurrently really
19:46:29 <roasbeef> but for the cases that want a compact encoding, the tlv just wastes a ton of space, given that there's 32/65 bytes available per hop
19:46:29 <bitconner> why does even/odd need to be in the encoding, rather than in feature bit negotiation
19:46:29 <cdecker> Yeah, but we'd be reinventing the wheel when it comes to serialization formats
19:46:40 <roasbeef> but if there's no outside type byte, imo you need 2 bytes for the type
19:46:44 <rusty> roasbeef: wastes 1 byte you mean?
19:46:58 <roasbeef> if the outside type byte is there (tlv optional) then there's more working room
19:47:21 <cdecker> What outside type byte? Where is that signaled?
19:47:24 <roasbeef> which gets us to 65k combos which should be enough, 256 seems too small given app developer interest in using this space
19:47:27 <cdecker> Do you mean the realm?
19:47:46 <roasbeef> imo the realm should be kept intact, and another byte from the onion padding used to signal the type
19:48:01 <roasbeef> just for clean separation, "this is the chain", "this is the metadata"
19:48:02 <rusty> roasbeef: yes, that's exactly what TLV *is*.
19:48:10 <roasbeef> ?
19:48:21 <rusty> the byte used to signal the type is the T in TLV :)
19:48:30 <roasbeef> yes, 256 isn't enough imo
19:48:37 <roasbeef> or 255 w/e
19:49:33 <cdecker> Ok, we can make the type `var_int` (but I doubt we're ever going to hit that)
19:49:35 <bitconner> imo onion tlv should be considered separate from wire tlv. not that i want the extra complexity, but they have diff constraints
19:50:00 <roasbeef> either two bytse there, or the outside type, don't think var int needed there
19:50:01 <rusty> roasbeef: I really can't see more than a few dozen.  If app-level devs want to do something, they need to signal it somehow anyway.
19:50:21 <roasbeef> rusty: i constantly get messages from devs asking how they can use the space for their applications
19:50:37 <rusty> bitconner: I was trying to start with general TLV stuff, then go into the onion.  I really don't want multiple TLVs if we can avoid it.
19:50:47 <roasbeef> you may not see the possibilities, but the devs do
19:50:52 <rusty> roasbeef: yeah, me too.
19:51:17 <roasbeef> the two areas have diff constraints (wire vs onion)
19:51:19 <cdecker> Well then tell them to use TLV + different realm that gives you 2^4 * 256 possible types
19:51:58 <roasbeef> two type bytes would be 65k, breaking off bits from the realm seems unnecessary
19:52:12 <bitconner> rusty: tbh the only thing that would really change between the two is the varint (i think?), which is just a matter of changing constants
19:52:40 <rusty> roasbeef: we can certainly assign a "end2end" type pair, and they can put whatever they want in there.
19:53:11 <rusty> bitconner: so you're saying onion TLV would not use varint length?  That seems weird.
19:53:12 <BlueMatt> oops, laptop was having issues and just got back online from fc...guess I missed a second meeting in a row :(
19:53:40 <cdecker> BlueMatt: still discussing TLV bikesheds, feel free to join the fight
19:54:00 <bitconner> rusty: the wire protocol needs to handle fields up to 65kb, the onion doesn't. they can use the same algo, just have different parameters
19:54:05 <roasbeef> rusty: it doesn't need up to 4 bytes to signal the length, at most we can fit 1.3kb or so
19:54:16 <roasbeef> bitconner: def
19:54:43 <cdecker> Yeah, and I feel uncomfortable allowing even anything >256 in the onion
19:54:45 <rusty> bitconner: the efficiency difference is literally the case where you have 253, 254 or 255 bytes then?
19:55:11 <cdecker> But ok, var_int type and var_int length to get a unified TLV spec for wire and onion sounds like a tradeoff
19:56:22 <cdecker> Ok, we're coming up on time, we have 3 options here: continue discussing here, continue discussing on the issue, or accepting it
19:56:44 <rusty> cdecker: I think there's a valid case for up to 512 bytes.  But yeah, bikeshedding.
19:56:48 <cdecker> I think we don't have a quorum for the latter, so more discussion is needed
19:57:09 <cdecker> Does everybody agree on deferring to the issue tracker?
19:57:42 <rusty> Sure... can we discuss SURBS now?
19:57:58 <cdecker> Totally forgot about them
19:58:13 <rusty> (BTW, randomly I looked at some JS implementations of BOLT11 decoding... it's a nightmare and I need to write many more test vectors for BOLT11)
19:58:19 <cdecker> #agreed everybody to weigh in on the discussion of PR #572
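To make the trade-off being debated above concrete, here is a minimal sketch of parsing a TLV stream with a one-byte type, a var_int-style length, and a zero-byte terminator — an illustration of the shape under discussion in #572, not the encoding that was eventually specified (byte order and thresholds are assumptions):

```python
def read_varint(buf: bytes, i: int):
    """var_int-style length: values below 0xfd fit in one byte,
    0xfd/0xfe/0xff escape to a big-endian 2/4/8-byte value.
    (Illustrative of the proposal, not normative.)"""
    d = buf[i]
    if d < 0xfd:
        return d, i + 1
    size = {0xfd: 2, 0xfe: 4, 0xff: 8}[d]
    return int.from_bytes(buf[i + 1:i + 1 + size], "big"), i + 1 + size


def parse_tlv_stream(buf: bytes):
    """Parse a stream of (type, value) records with a one-byte type.

    A zero type byte terminates the stream, so trailing zero padding
    (as mentioned in the discussion) simply ends parsing."""
    records, i = [], 0
    while i < len(buf):
        t = buf[i]
        i += 1
        if t == 0:
            break
        length, i = read_varint(buf, i)
        value, i = buf[i:i + length], i + length
        records.append((t, value))
    return records


# Two records (type 2 and type 4) followed by zero padding.
stream = bytes([2, 3, 0xaa, 0xbb, 0xcc, 4, 1, 0xff, 0, 0, 0])
print(parse_tlv_stream(stream))  # [(2, b'\xaa\xbb\xcc'), (4, b'\xff')]
```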
19:58:42 <cdecker> #topic SURBs as a Solution for Protocol-Level Payment ACKs
19:58:51 <bitconner> cdecker: agreed
19:59:04 <roasbeef> in the future we should really rip out the bech32 from bolt11, it makes things soooo much more complicated vs just a byte buffer that uses base32
19:59:37 <cdecker> The surbs mail refers a lot to the EOB proposal by roasbeef, which I just created a counter-proposal for
19:59:43 <roasbeef> since we really don't get much from bech32 when we're encoding such large strings, with the switch over, visually it can stay the same
19:59:55 <roasbeef> does it? they're distinct cdecker
20:00:01 <cdecker> (I was planning to send that mail much earlier, but didn't finish it in time)
20:00:10 <rusty> cdecker: yeah, I thought they were indep?
20:00:32 <cdecker> They are, I'm just re-reading the SURBS mail and got confused :-)
20:00:36 <roasbeef> kek
20:01:15 <roasbeef> you mean the EOB stuff cdecker ?
20:01:25 <roasbeef> flat encoding vs multi-hop encoding
20:01:52 <rusty> Note: I'm not sure we'd use SURBs, since we have a much more reliable backwards transmission mechanism than attempting a random backroute, but they are *cool* and we should discuss them.
20:02:01 <roasbeef> the SURB stuff is a possible candidate for the soft errors/acks, tho it's more invasive than re-using the error tunnel
20:02:09 <roasbeef> is it more reliable?
20:02:20 <roasbeef> the backroute isn't random, the sender selects it
20:02:32 <cdecker> Oh hey, we don't actually need to re-introduce the e2e packet in the onion format itself, we can just add another field in parallel to the onion that also gets encrypted
20:02:57 <cdecker> But I honestly don't get why SURBs are better than the current error-return mechanism
20:03:00 <rusty> roasbeef: pick a random route today and watch it fail, though.  But you know the forward route succeeded.
20:03:01 <roasbeef> yeh that's what this is, it's a backwards packet
20:03:19 <roasbeef> cdecker: from a traffic analysis perspective, it's indistinguishable from regular payments
20:03:40 <roasbeef> as they're the same size, so there's more cover traffic for all
20:04:12 <rusty> roasbeef: well, it's totally distinguishable because real payments follow a ->, <-, -> pattern, but that can be fixed.
20:04:14 <roasbeef> rusty: sorry still not following, random route for what? the backwards route?
20:04:15 <cdecker> Ok, so why not just add a SURB to add_htlc that also gets processed like the rest of the onion (layer peeling)
20:04:38 <rusty> roasbeef: yeah
20:04:58 <roasbeef> rusty: if this was also the error source, acks and errors look the same more or less, and that's amongst the possible probing traffic as well
20:05:16 <roasbeef> it's not random though...the receiver doesn't know who the sender is, how can they pick a route back to them?
20:05:18 <rusty> roasbeef: ah, agreed with that.
20:05:31 <roasbeef> cdecker: even more packet expansion
20:05:36 <rusty> roasbeef: they don't need to, they just reply.  It's implied.
20:05:54 <rusty> roasbeef: and everyone has financial incentive to keep that route working.
20:06:08 <roasbeef> not that this is more invasive than re-using the error tunnel stuff atm, it just occurred to me that we removed this feature when we didn't have a use, but imo it's a better way to send back errors than the other thing we came up w/
20:06:12 <roasbeef> note*
20:06:39 <rusty> roasbeef: I disagree, it's far less reliable in practice than inline soft errors.
20:06:47 <roasbeef> why is it less reliable?
20:06:51 <rusty> roasbeef: but, better for the network because more traffic.
20:07:02 <rusty> roasbeef: because most routes fail.
20:07:17 <roasbeef> why would this fail? it's independent of payment bandwidth along the route
20:07:31 <roasbeef> there's no payment for the ack, it can proceed on any path
20:07:41 <roasbeef> doesn't even need to be the same as the forward path, just like hornet
20:08:42 <rusty> roasbeef: but in practice it will be the same, because, as I keep saying, if you pick a random route it often fails.  Ask niftynei or cdecker about their rate of success with probes....
20:09:02 <roasbeef> probes don't apply here
20:09:07 <roasbeef> probes fail due to insufficient payment bandwidth
20:09:11 <roasbeef> this isn't sending a new payment
20:09:23 <roasbeef> this is an ack
20:09:24 <rusty> roasbeef: they also fail because nodes are down.
20:09:42 <rusty> Or temporarily disconnected.
20:09:47 <roasbeef> sure, just make sure the backwards route, if it isn't the same, goes thru active nodes
20:10:50 <cdecker> My probes fail mostly due to temporary failures, not insufficient capacity...
20:10:50 <roasbeef> this can also be a diff way to do rendezvous that lets you do max hops in both directions
20:10:53 <rusty> roasbeef: a peer's visibility of the network is only going to get worse in future.  I don't think we can do that.
20:11:07 <roasbeef> using a diff route isn't required, it's an option
20:11:22 <rusty> roasbeef: an option nobody will use, because it won't work.  See ^cdecker
20:11:28 <roasbeef> /shrug
20:11:30 <roasbeef> up to the sender
20:12:19 <rusty> Well, I think we should add soft errors, and keep SURBs up our sleeves in case we find a better use.  I really like them, I just don't think they're going to win here :(
20:13:06 <cdecker> Agreed, they're a really cool tool, but one that we don't have a necessity for right now
20:13:17 <cdecker> (believe me I like having tools :-D)
20:13:44 <cdecker> Ok, seems like we reached 14 minutes overtime, shall we call it a day?
20:14:40 <cdecker> Any last objections?
20:14:44 <rusty> I think so...
20:14:54 <cdecker> #agreed everybody to discuss SURBS on the mailing list
20:15:27 <cdecker> And I'm looking forward to the feedback to the multi-frame proposals (both mine and roasbeef's) on the ML and the issue trackers
20:15:38 <cdecker> #endmeeting