
Re: [tor-dev] Partitioning Attacks on Prop250 (Re: Draft Proposal: Random Number Generation During Tor Voting)



Hi,

I think we can mitigate this by implementing a p2p-like distribution
system for the commitment values among the directory authorities.
When an authority sends its commitment / reveal value to all the
other authorities, it also sends the commitment / reveal values it
has seen from the other authorities (including their cryptographic
signatures).

If we see more than one commitment value from the same authority
(after verifying the signatures, of course) we just trigger a warning
in consensus-health and build the consensus without that authority's
commitment. We mark it as blank, i.e. behave as if that authority had
not sent a commitment value at all, to any authority or group of
authorities. At that point the worst an evil authority can do is
throw away its right to contribute to the shared randomness.

This way we cannot fail to create a consensus document at 12:00 UTC,
and we also cannot end up with two simultaneously valid consensus
documents carrying different shared randomness values.
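
A minimal sketch of that conflict check, assuming every relayed
commitment carries the originating authority's signature (all names
here are illustrative, not taken from prop250 or the tor code):

    # Hypothetical sketch of the "more than one signed commitment per
    # authority" check described above.
    from collections import defaultdict, namedtuple

    SignedCommit = namedtuple("SignedCommit",
                              "auth_fingerprint commit_value signature")

    def filter_conflicting_commits(received_commits, verify):
        """Group every signed commitment we have seen (our own copies plus
        the ones relayed by other authorities) by originating authority,
        and drop any authority that signed two different commitment values.
        `verify` is a callable that checks a commitment's signature against
        that authority's known signing key."""
        by_auth = defaultdict(set)
        for commit in received_commits:
            if verify(commit):
                by_auth[commit.auth_fingerprint].add(commit.commit_value)

        usable = {}
        for fingerprint, values in by_auth.items():
            if len(values) == 1:
                usable[fingerprint] = values.pop()
            else:
                # Equivocation detected: warn (e.g. via consensus-health)
                # and behave as if this authority sent no commitment at all.
                print("WARNING: conflicting commitments from %s" % fingerprint)
        return usable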

On 9/9/2015 11:21 PM, George Kadianakis wrote:
> Hello,
> 
> I have mixed feelings about this shared-rand-conflict mechanism.
> It indeed seems to solve a problem, but not the nasty one. And it's
> not trivial to implement.
> 
> [ Let's say we have 9 dirauths. One of them is evil. Majority needs
> 5 dirauths in this case. For a consensus to be considered valid, it
> needs 5 dirauth signatures. ]
> 
> I think the attacker we are worrying about here is the one that
> during the Commitment Phase attempts to partition the dirauths in
> two sets (4 auths in group A, and 4 auths in group B). To achieve
> that the attacker sends a vote with commitment c_1 to group A, and
> a different vote with commitment c_2 to group B.
> 
> Then in the next commitment round the attacker does the same, and
> now the two groups both think they have majority (group A has 4
> auths and the attacker, group B the same). So they both update
> their internal state accordingly.
> 
> The attacker can keep on doing the same, and when the Commitment
> Phase is over he will have persuaded both groups that they hold the
> right commitment. If he keeps on lying during the Reveal Phase as
> well (by sending each group the reveal value matching its
> commitment), he could eventually succeed in producing two different
> consensuses with two different shared random values.
> 
> An alternative ending scenario is that the attacker chooses not to
> publish any consensus signatures during the last round of the
> protocol; then neither group gathers enough signatures to make a
> valid consensus, the consensus at 12:00 UTC fails, and there is no
> shared random value for that day.
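
(Just to make the arithmetic concrete, here is a toy illustration,
not protocol code, of why both groups believe they hold a majority:)

    # 9 authorities, majority = 5. The attacker equivocates, so each group
    # of 4 honest authorities also counts the attacker's (different) vote
    # and believes it has a majority.
    N_AUTHS = 9
    MAJORITY = N_AUTHS // 2 + 1          # 5

    group_a = {"auth1", "auth2", "auth3", "auth4"}
    group_b = {"auth5", "auth6", "auth7", "auth8"}

    # Votes seen by each group: its own members plus the attacker's vote,
    # which carries commitment c_1 for group A and c_2 for group B.
    votes_seen_by_a = len(group_a) + 1   # 5 -> group A proceeds with c_1
    votes_seen_by_b = len(group_b) + 1   # 5 -> group B proceeds with c_2

    assert votes_seen_by_a >= MAJORITY and votes_seen_by_b >= MAJORITY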
> 
> ---
> 
> So OK, these are two reasonable attacks that shared-rand-conflict
> can address. The attacks are very noisy and detectable but they
> work. Why am I saying that shared-rand-conflict does not mitigate
> everything?
> 
> It's because IIUC the attacker could also do the same attacks by
> following the protocol normally and then doing the partitioning
> attack during the last rounds of the _Reveal Phase_. In this case,
> the attacker partitions the dirauths into two groups by sending a
> reveal value to group A and withholding it from group B. For this
> to work you don't need to advertise different commitments.
> 
> Again this way the result is that the attacker can get two
> consensuses with a different shared random value in each. One
> consensus will have a shared random value including the attacker's
> reveal value, and the other will have a shared random value without
> it. Alternatively, the attacker can sabotage the consensus creation
> by not publishing consensus signatures.
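
(A rough illustration of why the two groups end up with different
shared random values; the real prop250 computation hashes more fields
than this, the point is only that different reveal sets give
different outputs:)

    # Group A includes the attacker's reveal, group B does not, so they
    # derive different shared random values from their reveal sets.
    import hashlib

    def toy_srv(reveals):
        h = hashlib.sha256()
        for r in sorted(reveals):
            h.update(r.encode())
        return h.hexdigest()

    honest_reveals = ["reveal-auth%d" % i for i in range(1, 9)]
    attacker_reveal = "reveal-attacker"

    srv_group_a = toy_srv(honest_reveals + [attacker_reveal])  # got the reveal
    srv_group_b = toy_srv(honest_reveals)                      # reveal withheld

    assert srv_group_a != srv_group_b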
> 
> I feel that this attack during the Reveal Phase is harder to detect
> and more deniable.
> 
> So what can we do?
> 
> ---
> 
> An alternative protection we could do about these attacks is to
> take this to the consensus-health layer. We need to make a few
> detection scripts that will notify us if any of these attacks
> happen. We don't need shared-rand-conflict for this.
> 
> Here are some detections that need to happen:
> 
> 1) To detect attacks during the Commitment Phase, consensus-health
> should warn if it sees two votes having different commitment values
> from one auth.
> 
> 2) To detect attacks during the Reveal Phase, consensus-health
> should warn if it sees two votes where one includes a reveal value,
> and the other one doesn't. This is a sign of a partitioning attack,
> or of a severe misconfiguration/bug. (See the sketch below for
> checks (1) and (2).)
> 
> 3) Of course, consensus-health should go nuts if we don't manage to
> create a 12:00 UTC consensus. Don't forget that an attacker who
> wants to hijack the HSDir hash ring needs to sabotage the consensus
> something like 5 days in a row to get the HSDir flag, so this
> should raise some alarms.
> 
> It would also be useful if consensus-health fetched all the votes
> *seen* by an authority, and not just the one it publishes. This way
> we can find attacks where the attacker sends different votes to
> different honest auths. We can fetch the alien votes seen by an
> authority using the URL tor/status-vote/next/<fp>.z.
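
(Here is a rough sketch of what checks (1) and (2) could look like,
assuming the votes fetched from each authority have already been
parsed into simple records; the field names are illustrative, not the
actual vote syntax:)

    # Checks (1) and (2): conflicting commitments from one authority, and
    # a reveal value present in some votes but missing from others.
    from collections import defaultdict

    def check_partitioning(parsed_votes):
        """parsed_votes: list of dicts like
        {"auth": <fingerprint>, "commit": <value>, "reveal": <value or None>},
        where several entries may describe the same authority, as seen in
        votes fetched from different places."""
        commits_by_auth = defaultdict(set)
        reveal_presence_by_auth = defaultdict(set)

        for vote in parsed_votes:
            commits_by_auth[vote["auth"]].add(vote["commit"])
            reveal_presence_by_auth[vote["auth"]].add(vote["reveal"] is not None)

        warnings = []
        for auth, commits in commits_by_auth.items():
            if len(commits) > 1:
                warnings.append("conflicting commitments from %s" % auth)
        for auth, presence in reveal_presence_by_auth.items():
            if len(presence) > 1:
                warnings.append("reveal from %s present in some votes, "
                                "missing in others" % auth)
        return warnings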
> 
> ---
> 
> Finally, regardless of whether we do shared-rand-conflict or not, I
> think I like the idea of using signatures for commitments. This
> way, a commitment is a standalone proof that it comes from a
> specific authority at a specific timestamp, without requiring the
> whole vote signature. This is required to do shared-rand-conflict
> and might be useful in any case in the future.
> 
> I made a patch that implements this for prop250 at:
> 
> https://gitweb.torproject.org/user/asn/torspec.git/commit/?h=prop250-nosrdoc-sigs&id=80ed03b4ac40db62582b4af2e3c5c7702c453055
>
>  s7r told me that he likes the signature approach, and that's also
> what Nick did in his small proposal. Please let me know if you
> think this is overengineering! :)
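
(An illustrative-only sketch of such a standalone signed commitment,
using Ed25519 from the python "cryptography" package purely as an
example primitive; this is not what prop250 specifies:)

    # The signed blob binds the commitment to an authority key and a
    # timestamp, so it can be verified on its own, without the whole vote.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    def make_signed_commit(private_key, timestamp, commit_value):
        blob = b"%d %s" % (timestamp, commit_value)
        return blob, private_key.sign(blob)

    def commit_is_from(public_key, blob, signature):
        try:
            public_key.verify(signature, blob)
            return True
        except InvalidSignature:
            return False

    # An authority signs its commitment; anyone holding its public key can
    # later check the (timestamp, commitment) pair standalone.
    key = Ed25519PrivateKey.generate()
    blob, sig = make_signed_commit(key, 1441900800, b"commitment-value")
    assert commit_is_from(key.public_key(), blob, sig)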
> 
> ---
> 
> Looking forward to your thoughts!
> 
> These two things seem to be the main open attacks against prop250.
> They don't seem particularly threatening because they are both
> detectable, but we should make sure we are not forgetting
> anything.
> 
> See you around!
> 