On 19/10/2018 14:01, George Kadianakis wrote:
> Michael Rogers <michael@xxxxxxxxxxxxxxxx> writes:
>> A given user's temporary hidden service addresses would all be related
>> to each other in the sense of being derived from the same root Ed25519
>> key pair. If I understand right, the security proof for the key blinding
>> scheme says the blinded keys are unlinkable from the point of view of
>> someone who doesn't know the root public key (and obviously that's a
>> property the original use of key blinding requires). I don't think the
>> proof says whether the keys are unlinkable from the point of view of
>> someone who does know the root public key, but doesn't know the blinding
>> factors (which would apply to the link-reading adversary in this case,
>> and also to each contact who received a link). It seems like common sense
>> that you can't use the root key (and one blinding factor, in the case of
>> a contact) to find or distinguish other blinded keys without knowing the
>> corresponding blinding factors. But what seems like common sense to me
>> doesn't count for much in crypto...
>
> Hm, where did you get this about the security proof? The only security
> proof I know of is https://www-users.cs.umn.edu/~hoppernj/basic-proof.pdf
> and I don't see that assumption anywhere in there, but it's also been a
> long while since I read it.

I may have misunderstood the paper, but I was talking about the
unlinkability property defined in section 4.1. If I understand right, the
proof says that descriptors created with a given identity key are
unlinkable to each other, in the sense that an adversary who's allowed to
query for descriptors created with the identity key can't tell whether one
of the descriptors has been replaced with one created with a different
identity key.
It seems to follow that the blinded keys used to sign the descriptors* are
unlinkable, in the sense that an adversary who's allowed to query for
blinded keys derived from the identity key can't tell whether one of the
blinded keys has been replaced with one derived from a different identity
key - otherwise the adversary could use that ability to distinguish the
corresponding descriptors.

What I was trying to say before is that although I don't understand the
proof in section 5.1 of the paper, I *think* it's based on an adversary
who only sees the descriptors and doesn't also know the identity public
key. This is totally reasonable for the original setting, where we're not
aiming to provide unlinkability from the perspective of someone who knows
the identity public key. But it becomes problematic in this new setting
we're discussing, where the adversary is assumed to know the identity
public key and we still want the blinded keys to be unlinkable.

* OK, strictly speaking the blinded keys aren't used to sign the
descriptors directly, they're used to certify descriptor-signing keys -
but the paper argues that the distinction doesn't affect the proof.

> I think in general you are OK here. An informal argument: according to
> rend-spec-v3.txt appendix A.2 the key derivation is as follows:
>
> derived private key: a' = h a (mod l)
> derived public key: A' = h A = (h a) B
>
> In your case, the attacker does not know 'h' (the blinding factor),
> whereas in the case of onion service the attacker does not know 'a' or
> 'a*B' (the private/public key). In both cases, the attacker is missing
> knowledge of a secret scalar, so it does not seem to make a difference
> which scalar the attacker does not know.
>
> Of course, the above is super informal, and I'm not a cryptographer,
> yada yada.

I agree it seems like it should be safe.
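(As an aside, the consistency of the derivation George quotes is easy to
check numerically. The sketch below is my own illustration, not anything
from the spec: it uses a small prime-order subgroup of Z_p^* in place of
the real Ed25519 group, written multiplicatively, to show that blinding
the public key gives the same result as deriving a public key from the
blinded private key.)

```python
# Toy illustration of key blinding (NOT real Ed25519): a prime-order
# subgroup of Z_p^* stands in for the elliptic-curve group, so the
# curve's "scalar * point" becomes modular exponentiation here.
p, l, B = 23, 11, 2    # modulus, subgroup order, base point (2 has order 11 mod 23)

a = 7                  # identity private key (a scalar mod l)
A = pow(B, a, p)       # identity public key A = a*B

h = 5                  # blinding factor, known only to link holders
a_blind = (h * a) % l  # derived private key: a' = h a (mod l)
A_blind = pow(A, h, p) # derived public key: A' = h A, computed from A alone

# Both routes agree: h*A == (h*a)*B
assert A_blind == pow(B, a_blind, p)
```

The point of the toy is just that anyone holding A and h can compute A'
without the private key, while computing a' requires knowing a.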
My point is really just that we seem to have gone beyond what's covered by
the proof, which tends to make me think I should prefer a solution that I
understand a bit better.

(At the risk of wasting your time, though, I just want to suggest an
interesting parallel. Imagine we're just dealing with a single ordinary
key pair, no blinding involved. The public key X = xB, where x is the
private key and B is the base point. Now obviously we rely on this
property:

1. Nobody can find x given X and B

But we don't usually require that:

2. Nobody can tell whether public keys X and Y share the same base point
   without knowing x, y, or the base point

3. Nobody can tell whether X has base point B without knowing x

We don't usually care about these properties because the base point is
public knowledge. But in the key blinding setting, the base point is
replaced with the identity public key. As far as I can see, the proof in
the paper covers property 2 but not property 3. I'm certainly not saying
that I know whether property 3 is true - I just want to point out that it
seems to be distinct from properties 1 and 2.)

>> We're testing a prototype of the UX at the moment.
>>
>> Bringing up the hidden service tends to take around 30 seconds, which is
>> a long time if you make the user sit there and watch a progress wheel,
>> but not too bad if you let them go away and do other things until a
>> notification tells them it's done.
>>
>> Of course that's the happy path, where the contact's online and has
>> already opened the user's link. If the contact sent their link and then
>> went offline, the user has to wait for them to come back online. So we
>> keep a list of pending contact requests and show the status for each
>> one. After some time, perhaps 7 days, we stop trying to connect and mark
>> the contact request as failed.
>
> Yeah, I don't think a progress wheel is what you want here. You probably
> want a greyed out contact saying "Contact pending..." like in the case
> of adding a contact in Ricochet.

Right, what we have at the moment is essentially that, but the pending
contacts are in a separate list. Moving them into the main contact list
might be clearer though - thanks for the idea.

Cheers,
Michael
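(For what it's worth, the pending-request bookkeeping I described can be
sketched as below. This is a hypothetical illustration of the states
involved, not our actual implementation; the names and the 7-day figure
are just the ones from the discussion above.)

```python
# Hypothetical sketch of pending contact-request bookkeeping: a request
# stays "pending" (greyed out in the UI) until the contact comes online
# and the connection succeeds, or until it expires after ~7 days.
import time

EXPIRY_SECONDS = 7 * 24 * 60 * 60  # stop trying after roughly 7 days

class PendingContactRequest:
    def __init__(self, link, now=None):
        self.link = link                  # the contact's hidden service link
        self.created = now if now is not None else time.time()
        self.status = "pending"

    def update(self, connected, now=None):
        """Called periodically; returns the current status."""
        now = now if now is not None else time.time()
        if self.status != "pending":
            return self.status            # terminal states don't change
        if connected:
            self.status = "added"         # connection over the link succeeded
        elif now - self.created > EXPIRY_SECONDS:
            self.status = "failed"        # contact never came online in time
        return self.status
```

Keeping the expiry check inside update() means the UI only ever has to
render the status field, whether the pending entries live in a separate
list or in the main contact list.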
_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev