19:00:22 #startmeeting
19:00:22 Meeting started Tue Nov 19 19:00:22 2019 UTC. The chair is aj. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:00:35 Hi
19:00:38 hi
19:00:38 hi
19:00:39 hi
19:00:48 hi
19:00:50 hi
19:01:37 It's only me from group 5 here today, so my questions are not filtered through our pre-Q&A meeting.
19:01:43 hi
19:02:03 Q 1/3: Why isn't implicit Y a reduction in security? I don't get the explanation given in the last paragraph of "Implicit Y coordinates" or in footnote 6. Is it that, given a privkey q for a pubkey P with explicit Y, one can easily calculate the privkey p' for the pubkey -P?
19:03:33 kanzure: https://medium.com/blockstream/reducing-bitcoin-transaction-sizes-with-x-only-pubkeys-f86476af05d7
19:03:38 eh, kabaum ^
19:03:47 hi
19:04:33 kabaum: also https://bitcoin.stackexchange.com/a/90120/208
19:04:35 kabaum: but basically yes. Negating the public key is just computing the negation of a 256-bit number (the y coordinate), so that's very easy.
19:04:40 hi
19:05:07 nickler: you keep bringing that up as an argument, but i don't think that's relevant
19:05:23 even if it cost an EC multiplication to do so, x-only would still not be a reduction in security
19:07:08 still, I want to point out that a third party can easily do that; no access to the secret key is required
19:07:47 right
19:08:34 sipa: Got it. The stackexchange answer helped.
19:08:47 Q: both e and R style signatures seem rationalizable (orphan rates and network latency for the former, validation CPU cost for the latter), but it's not clear how to compare these. What is the argument for favoring R and batch verification, especially on a longer time horizon? Is it empirical?
19:09:05 kabaum: another way to look at it: the fact that you can negate a public key (which negates the corresponding private key) is indeed going to help an attacker... but it *always* does that, whether public keys are x-only or full x/y
19:09:40 sipa: thanks.
19:10:17 kabaum: FWIW, secp256k1 has another "efficiently computable endomorphism" (which is what this negation gives you), namely multiplication by a specific constant lambda... which also always helps an attacker
19:10:34 nothingmuch: the argument is that batch verification is a significant speedup. It's empirical in the sense that we've implemented and measured it (that's the graph in the bip)
19:12:45 kabaum: the expected number of EC additions an attacker needs to do with a DLP-breaking algorithm is proportional to sqrt(p/m), where p is the largest prime factor of the group order and m is the number of efficiently computable endomorphisms (m=6 for secp256k1, and the efficient negation is one of them)
19:14:21 nothingmuch: the "be made as small as 16 bytes" should probably get a qualifier "in some security models"
19:17:38 nickler: My next Q is related to your answer.
19:17:39 Q 2/3: Is the graph "Batch signature verification in libsecp256k1" made using the bip-schnorr scheme, or some other/generic Schnorr scheme?
19:19:03 kabaum: it's the bip-schnorr implementation in libsecp
19:19:34 this one: https://github.com/bitcoin-core/secp256k1/pull/558
19:20:05 nickler: well, it's from a pre-x-only version of that PR, right?
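
(A sketch to go with Q 1/3 above: toy affine secp256k1 arithmetic written for this log, not the libsecp256k1 API or the bip-schnorr reference code. It checks that -P shares P's x coordinate and that the private key of -P is just n - d, so negation is trivially computable from the public key alone.)

    # Toy affine secp256k1 arithmetic, for intuition only (Python 3.8+ for pow(x, -1, m)).
    FP = 2**256 - 2**32 - 977  # field prime
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def add(p1, p2):
        # Affine point addition; None is the point at infinity.
        if p1 is None: return p2
        if p2 is None: return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % FP == 0:
            return None
        if p1 == p2:
            lam = 3 * x1 * x1 * pow(2 * y1, -1, FP) % FP
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, FP) % FP
        x3 = (lam * lam - x1 - x2) % FP
        return (x3, (lam * (x1 - x3) - y1) % FP)

    def mul(k, pt):
        # Plain left-to-right double-and-add (the method explained later in the meeting).
        acc = None
        for bit in bin(k)[2:]:
            acc = add(acc, acc)           # double for every bit
            if bit == '1':
                acc = add(acc, pt)        # add for every 1 bit
        return acc

    d = 0xC0FFEE                          # an arbitrary example private key
    px, py = mul(d, G)
    qx, qy = mul(N - d, G)                # the private key of -P is n - d
    assert qx == px                       # same x coordinate (what x-only keeps)
    assert qy == FP - py                  # negated y: anyone can compute -P from P

As sipa notes above, this negation map helps a DLP attacker equally whether keys are serialized x-only or as full x/y, which is why dropping the Y byte is not a reduction in security.
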
19:20:18 here is a graph that's based on more low-level benchmarking: http://bitcoin.sipa.be/speedup-batch.png
19:20:38 which only takes the multi-exp into account, and not the other overhead of verifying a signature
19:20:41 aj: right, but it shouldn't make a visible difference
19:21:15 but I should try to reproduce the numbers just to be sure :)
19:22:11 perhaps s/I/someone/, to reduce systematic bias
19:22:12 nickler: that'd be interesting.
19:22:30 Q 3/3: Why is batch verification of u signatures more efficient than verifying them separately? I counted the number of point multiplications and additions. For individual verification I get 2u mult and u add. For batch I get 2u mult and 2u-1 add. Also, the amount of hashing seems similar in batch and separate verification. What am I missing?
19:22:56 kabaum: Strauss' algorithm
19:23:05 (and for larger numbers, Pippenger's algorithm)
19:23:39 it turns out that x*A + y*B can actually be computed more efficiently in one go than computing (x*A) and (y*B) separately and adding them up
19:25:34 kabaum: very broadly, the number of EC additions you need to implement n EC multiplications grows with n/log(n), rather than n
19:25:44 (if you only care about their sum)
19:27:14 kabaum: https://cdn.preterhuman.net/texts/cryptography/Hankerson,%20Menezes,%20Vanstone.%20Guide%20to%20elliptic%20curve%20cryptography%20(Springer,%202004)(ISBN%20038795273X)(332s)_CsCr_.pdf section 3.3.3
19:27:26 i can try to give an intuition for that here too, if there are no other questions
19:27:51 "Guide to Elliptic Curve Cryptography" by Hankerson, Menezes, Vanstone
19:28:13 sipa: I'd love that.
19:29:16 nickler: Strauss isn't mentioned in that document.
19:29:16 i jumped the gun with my question, but it's still outstanding; an intuition for batch verification seems relevant to it anyway
19:29:23 imagine you're trying to compute 19*P
19:30:05 kabaum: section 3.3.3
19:30:15 one way to do that is to compute 2P = P+P, 4P = 2P+2P, 8P = 4P+4P, 16P = 8P+8P, and then sum P+2P+16P
19:30:24 they don't call it Strauss but Shamir's trick
19:30:42 right?
19:31:00 yes!
19:31:54 key prefixing means we lose the pubkey recovery property, right?
19:32:00 waxwing: correct
19:32:14 how do we feel about that.
19:32:37 i think key aggregation is more important than pubkey recovery
19:32:38 oh i see, it's in footnote 3
19:33:28 kabaum: so you're doing 4 doublings (because the largest power of two is 16P), and then 1 addition for every 1 bit in the binary representation of 19
19:34:24 but say you want to compute 19P + 23Q: you're just going to do twice the amount of work now (two times 4 doublings, and an addition for every 1 bit in 19 and 23)
19:34:40 agree?
19:34:53 Agree!
19:35:03 so that's not a good approach
19:35:09 waxwing: you can do pubkey recovery with partial sigs with the same construction though; claim the complete key is P + -P + G, so you're going for s*G = R + H(R,G,m)*G, and calculate a partial signature s1,R1 with s1*G = R + H(R,G,m)*P, and P is recoverable
19:36:15 kabaum: an alternative is working backwards: write 19 as 11001 in binary, and compute it as dbl(dbl(dbl(dbl(P)))+P)+P
19:36:25 this is the exact same number of additions/doublings
19:37:16 but instead of precomputing all the power-of-two multiples of P first and then adding them up, you use an "accumulator" that you add P into every time a "1" bit is encountered, and you double for every bit
19:37:45 aj: hmm, interesting; that's confusing. so you're talking about something like a multisig scenario? but would that work with musig?
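
(A sketch of the first method sipa describes for 19*P, i.e. precompute 2P, 4P, 8P, 16P and then sum the multiples at the 1 bits of 19. A plain integer stands in for the point P, doubling is *2 and addition is +, so the group-operation counts can be checked; this is illustration written for this log, not library code. The accumulator variant is discussed next.)

    # Method 1: precompute power-of-two multiples, then one addition per 1 bit.
    def powers_method(n, k):
        doublings = additions = 0
        powers = [k]                      # P
        while 2 ** len(powers) <= n:      # 2P = P+P, 4P = 2P+2P, 8P, 16P
            powers.append(powers[-1] * 2)
            doublings += 1
        acc = 0
        for i, power in enumerate(powers):
            if (n >> i) & 1:              # 19 = 10011: sum P + 2P + 16P
                acc += power
                additions += 1
        return acc, doublings, additions

    assert powers_method(19, 1) == (19, 4, 3)   # 4 doublings, 3 additions
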
19:37:46 this is actually the same algorithm as https://en.wikipedia.org/wiki/Exponentiation_by_squaring, except with addition/doubling instead of multiplication/squaring
19:38:10 sipa: did you write 19 backwards in binary?
19:38:10 kabaum, nothingmuch: does this make sense?
19:38:23 no
19:38:25 waxwing: just abusing the API needed for partial sigs to allow for a protocol that wants pubkey recovery
19:38:44 kabaum: maybe it's easier to see if i write it as dbl(dbl(dbl(dbl(0+P)))+P)+P
19:38:47 aj: i guess it depends on what situation you want to do pubkey recovery in... the one i was made aware of was people working in a constrained environment wanting to transfer without being explicit about a key.
19:38:53 constrained as in something like a mesh network.
19:39:40 kabaum: oh, i'm confusing myself!
19:40:05 it's 10011, not 11001; the expression dbl(dbl(dbl(dbl(0+P)))+P)+P stands
19:40:17 the leading bits become the innermost part of the expression
19:40:38 maybe IRC isn't the best medium for this
19:40:40 sipa: roughly makes sense to me, but i doubt i will fully comprehend this here; i'm slow on the uptake and probably need to work it out on paper
19:40:58 nothingmuch: i'm just giving background, i haven't explained it yet :)
19:41:10 there is a trailing ) on the last bullet point of "Default Signing"
19:41:10 Oh, it's 0x19, got that. Now I'm way behind you.... Catching up.
19:41:19 kabaum: no, 19 = 16 + 8 + 1
19:41:27 eh, 16 + 2 + 1
19:41:46 i need a whiteboard :)
19:42:19 cancel that, there is not a trailing ) . doh.
19:44:00 kabaum: the number in binary is 10011, and we want 10011 * P
19:44:07 start with 0P = 0
19:44:25 you see a 1 bit (the one in front), so you add P, and get 1P
19:44:38 you double, and get 2P
19:44:39 aj: fwiw, the libsecp-zkp musig API wouldn't let you do that. Also, not committing to the pubkey allows an attacker to tweak the signature to be valid for another bip32-derived key
19:44:43 you see a 0 bit, so don't do anything
19:44:51 you double, and get 4P
19:44:56 you see a 0 bit, so don't do anything
19:44:56 you double, and get 8P
19:45:07 you see a 1 bit, so you add P, and get 9P
19:45:11 you double, and get 18P
19:45:14 you see a 1 bit, so you add P, and get 19P
19:45:29 done.
19:45:55 does this make sense?
19:46:10 nickler: yeah, i haven't figured it out as you have, but it makes sense that that is not compatible with musig, as it's kind of a related-key attack thing.
19:46:26 nickler: does that mean the API only does musig, not simple pubkey addition (with knowledge of the discrete log)?
19:47:01 aj: so you're basically arguing for people to still use base bip-schnorr, but have their own protocols (not in bitcoin itself, per se) that could do that kind of stuff?
19:47:30 sipa: so then, to do both, it's just adding Q or P into the same accumulator based on whether their respective bits are set, and doubling once?
19:47:31 well, in our musig API you have the concept of a session which keeps your nonce, the others' nonces and your secret key. If you call partial sig, it will mostly use session information
19:47:40 nothingmuch: exactly!
19:47:44 and what did you gain?
19:47:49 waxwing: that it's conceivable, yeah; not necessarily a good idea :)
19:47:54 :)
19:50:01 sipa: Didn't you just describe the normal "adding and doubling" method?
19:50:20 kabaum: yes indeed
19:50:27 ok, then I follow.
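
(The accumulator walkthrough sipa just gave, as a sketch in the same integer stand-in style as the earlier block; the steps match the 10011 trace above.)

    # Method 2: left-to-right double-and-add with an accumulator.
    def double_and_add(n, k):
        doublings = additions = 0
        acc = 0                           # start with 0P = 0
        for i, bit in enumerate(bin(n)[2:]):   # 19 -> '10011', MSB first
            if i:                         # double for every bit after the first
                acc *= 2
                doublings += 1
            if bit == '1':                # add P for every 1 bit
                acc += k
                additions += 1
        return acc, doublings, additions

    # Same cost as the precomputation method: 4 doublings, 3 additions.
    assert double_and_add(19, 1) == (19, 4, 3)
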
19:51:27 so the next step is what nothingmuch said: if you want to compute, say, 19P + 25Q, where 19 is 10011 in binary and 25 is 11001, you can merge the two computations
19:51:45 you start with 0P + 0Q = 0
19:51:53 you see a 1 bit in both, so you add P and Q, and get 1P+1Q
19:52:03 you double, and get 2P+2Q
19:52:32 you see a 1 bit in 25, so you add Q, and get 2P+3Q
19:52:48 you double, and get 4P+6Q
19:52:55 you see 0 bits in both, and do nothing
19:53:00 you double, and get 8P+12Q
19:53:28 you see a 1 bit in 19, so you add P, and get 9P+12Q
19:53:35 you double, and get 18P+24Q
19:53:47 you see a 1 bit in both, so you add P+Q, and get 19P+25Q
19:54:02 and in doing so, you've only doubled 4 times, not 8 times
19:56:53 i see you specify deterministic nonces, but in a much simpler form than rfc6979... perhaps mention the latter in footnote 10?
19:57:23 debatable, since it's not like rfc6979 was actually part of a bitcoin standard/bip before, iirc
19:57:54 we all like 289264 invocations of the SHA256 compression function to compute a nonce
19:58:06 also, that bias being small is cool, but maybe it would be nice to cite something. i remember 1-bit biases in nonces have been enough in theory before (lattice stuff)
19:58:27 waxwing: they are
19:58:30 6979 was kinda hard. but i mean... Pornin is the man.
19:59:37 it's a 0.0000000000000000000000000000000000000058 bit bias here
19:59:49 ah right, yeah, thought i might have made that error :)
19:59:54 i think we have a comment somewhere that the bias is unobservable
19:59:56 wait, was it really 300K? lol
20:00:00 sipa: I got it! Thanks! You really made it simple.
20:00:08 waxwing: no, more like 15
20:00:19 waxwing: still, it's actually a nontrivial portion of the signing time
20:00:21 yeah, you have that comment in footnote 10; it's fine. and no need to mention rfc6979, i think.
20:00:40 i see. makes sense. i know there are some loops you can end up in, but they're vanishingly unlikely.
20:03:03 sipa: i'd like to reiterate my question about e vs. R based signatures and the rationale for preferring batch verification over compact signatures, and add to it
20:03:24 i think you could argue you don't need either the batch verification or the optimisation sections in the BIP.
20:03:51 although it helps a lot in understanding design decisions.
20:03:53 namely, is it fair to say that batch verification is necessarily serial?
20:04:01 nothingmuch: what does that mean?
20:05:16 sipa: that it can't be computed in parallel
20:05:25 i don't know what that means
20:05:40 i also don't understand; surely the whole point of batching is to do stuff in parallel
20:05:54 i mean, given a batch, that the algorithm to do the whole batch is not concurrent
20:06:00 and can't benefit from multiple cores
20:06:14 nothingmuch: ah, in theory it could, but we don't have an implementation for that
20:06:28 however, you can split up the batch into 4 batches, and verify each on one core easily
20:06:58 that loses some of the scaling, but if you had a parallelisable algo, it might too, i guess
20:07:06 i'm trying to better understand the compute vs. network latency tradeoff; the benefit from batching is obvious to me, but i'm not sure how it holds up against bandwidth cost, especially since witness data is discounted
20:07:48 or is the difference in security model the main rationale for preferring R over e, with batching being an extra benefit?
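
(The merged computation above, i.e. Shamir's trick as used in Strauss' algorithm, sketched in the same integer stand-in style: both scalars share one run of doublings, which is where the batch-verification savings come from.)

    # Shamir's trick: interleave 19P + 25Q so the doublings are shared.
    def joint_double_and_add(a, b, p, q):
        doublings = additions = 0
        acc = 0                           # start with 0P + 0Q = 0
        top = max(a.bit_length(), b.bit_length()) - 1
        for i in range(top, -1, -1):      # scan bit positions MSB first
            if i < top:
                acc *= 2                  # one shared doubling per bit position
                doublings += 1
            if (a >> i) & 1:
                acc += p                  # add P on a 1 bit of 19 (10011)
                additions += 1
            if (b >> i) & 1:
                acc += q                  # add Q on a 1 bit of 25 (11001)
                additions += 1
        return acc, doublings, additions

    # Only 4 doublings for both scalars together, not 8.
    assert joint_double_and_add(19, 25, 1, 10**6) == (25000019, 4, 6)

Done separately, 19P and 25Q would each need their own 4 doublings; merged, the doublings are paid once, and with many terms (plus windowing techniques) the per-multiplication cost keeps dropping, roughly the n/log(n) behaviour sipa mentions.
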
20:08:30 (without the witness discount, i can see that batch verification should be strictly better for orphan rates, at the cost of slightly reduced tx throughput)
20:09:04 pretty sure it's batching; the 'e' version is usually what's proved in security proofs, isn't it?
20:09:29 oh, but shortened... ok.
20:11:53 so using a 128-bit e value works in some security models for Schnorr, but not all
20:12:03 (depending on which other assumptions on the group/hash are made)
20:12:58 nothingmuch: as for bandwidth, i think the relevant metrics are (a) worst-case bandwidth usage and (b) worst-case CPU per byte usage
20:13:24 and worst-case bandwidth isn't affected by any of these proposals - it's close to 4M of data per block
20:13:42 while batch validation significantly improves CPU per vbyte
20:14:47 that makes sense
20:14:59 specifically, i believe the ROM/DL/forking-lemma proof for Schnorr indeed works with 128-bit hashes... but the generic group + collision-resistant hash proof needs 256-bit hashes
20:16:57 we had another question in group 16 yesterday, which i attempted to answer but wanted to verify: is the reason for H(tag)||H(tag) being the prefix that the compression function can already be applied, given that the block size is double the image size?
20:17:45 yes, that's the reason for having a 64-byte prefix
20:17:57 though any other 64-byte prefix could be picked
20:18:06 the reason for doubling the hash of the tag is given in bip-taproot, i think
20:20:18 second paragraph of Specification in bip-taproot
20:20:32 any other questions, or shall we call it?
20:20:47 Thank you all so much for your time today! My two suggestions for bip-schnorr are: 1) Specify the specific Schnorr scheme used in the graph. 2) Add a footnote mentioning Strauss' algorithm or Shamir's trick as the reason for the efficiency gains of batch verification.
20:21:40 i think all of that, including the batch algo itself, is kinda out of scope. but i'm conflicted, because it's actually pretty cool to have it there :)
20:21:52 aj, don't let me stop you :)
20:21:57 kabaum: i guess i should point out that Shamir's trick is only one of the ways in which multi-EC-multiplication is faster than single EC multiplication
20:22:58 Oh, shit. There's more. Maybe a followup at the next Q&A.
20:23:10 #endmeeting
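
(A sketch to go with the H(tag)||H(tag) exchange above: the tagged-hash construction under discussion is SHA256(SHA256(tag) || SHA256(tag) || msg). The tag string below is only an illustrative placeholder, not a tag defined by the BIP.)

    # Tagged hash: the 64-byte prefix fills exactly one SHA-256 block (block
    # size 64 bytes, digest 32 bytes), so an implementation can run the
    # compression function over the prefix once and cache the midstate.
    import hashlib

    def tagged_hash(tag: str, msg: bytes) -> bytes:
        tag_hash = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(tag_hash + tag_hash + msg).digest()

    print(tagged_hash("ExampleTag/nonce", b"message").hex())
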