
Replay attacks (was Re: Thoughts on MixMinion archives)



On Tue, Apr 23, 2002 at 01:36:32AM -0700, Len Sassaman wrote:
> * replay attack protection (id log vs. date stamp)
> 
> One thing we discussed in the hallways and at the BOF was how Mixmaster
> does replay attack prevention. Lance, when originally designing Mixmaster,
> opted not to put a time stamp in the message because of the very reasons
> that have been discussed on this list already. What he did instead was put
> a unique ID under each layer of the message encryption, and the remailer
> stores that ID in a log file with a timestamp. After a good long period of
> time, the log is cycled and the entry drops off. (No, this is not a
> perfect solution, but if the duration could be set by the remop, it could
> provide replay protection for years if she had enough disk space for the
> log file to grow).
[snip]
> I lean toward going with the way Mixmaster does it, though I could be
> convinced otherwise.

Just a brief response to this (I'm way too busy to be working on mixminion
tonight, but uh, yeah.)
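
Roughly, the ID-log scheme Len describes works out to something like
this (a sketch only -- the names are mine, not Mixmaster's actual code):

import time

class IDLog:
    """Remember the unique ID found under each layer, with a timestamp,
    and drop entries once they're older than the remop-chosen duration."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.seen = {}                  # packet ID -> time first seen

    def accept(self, packet_id):
        """Return True if the ID is new (process the message),
        False if we've seen it before (drop it as a replay)."""
        now = time.time()
        self._expire(now)
        if packet_id in self.seen:
            return False
        self.seen[packet_id] = now
        return True

    def _expire(self, now):
        cutoff = now - self.retention
        for pid in [p for p, t in self.seen.items() if t < cutoff]:
            del self.seen[pid]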

We can't afford to let even a single message be replayed. It isn't
just that an adversary can flood a mix with the same message and watch
where the flood goes. The problem is that if the adversary watches the
input and output batches of a mix, and then comes back a month later
(after the replay cache has expired) and replays a message, then *the
message's decryption will be exactly the same*.

Bye-bye forward anonymity.
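
To see it concretely: the per-hop transform is deterministic, so the
replayed ciphertext decrypts to byte-for-byte the same output the
adversary recorded a month earlier. A toy illustration (deterministic
AES-CTR standing in for the real per-hop decryption; this is not
Mixminion's packet format):

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(16)     # the hop's long-lived key material (toy value)
IV  = bytes(16)     # in a real packet this comes from the header, so it's
                    # fixed for a given message -- which is the whole problem

def hop_decrypt(ciphertext):
    # Same key, same IV, same ciphertext => same plaintext, every time.
    d = Cipher(algorithms.AES(KEY), modes.CTR(IV)).decryptor()
    return d.update(ciphertext) + d.finalize()

packet = b"a captured message, replayed a month later"
assert hop_decrypt(packet) == hop_decrypt(packet)   # identical, hence linkable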

So here's a compromise. A mix must keep hashes of all messages it's
processed since the last time it rotated its key. Mixes should choose
key rotation frequency based on security goals and on how many hashes
they want to be remembering.
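
A minimal sketch of what I mean (names made up, nothing here is real
mixminion code): the replay cache is just a set of message hashes, and
it gets emptied exactly when the mix installs a new key -- packets built
for the retired key can't be processed anymore, so there's nothing left
to replay against it.

import hashlib

class ReplayCache:
    """Keep hashes of every message processed under the current key;
    forget them all when the key rotates."""

    def __init__(self):
        self.hashes = set()

    def accept(self, packet_bytes):
        """True if this packet hasn't been seen under the current key."""
        h = hashlib.sha256(packet_bytes).digest()
        if h in self.hashes:
            return False                # replay: drop it
        self.hashes.add(h)
        return True

    def rotate_key(self):
        # Old packets were built for the retired key and can no longer be
        # processed at all, so their hashes don't need to be remembered.
        self.hashes.clear()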

This doesn't totally make the fencepost timing problem go away --
near the time of a key rotation, the anonymity set gets divided into
messages whose senders knew about the key rotation and used the new key,
and messages whose senders didn't.

But actually, this problem of public information is a much broader
and subtler one -- it shows up in any situation where there is public
information that can be updated, and people take actions based on
whichever version of it they currently hold.

For example, if the reputation servers (ping servers, participant servers,
whatever we want to call them) indicate that a given mix that used to
be #1 in the charts has become unreliable, then an adversary can grab a
suspected message and sit on it until most new senders won't be using
that mix. Then he releases the message and watches for an output headed
to that mix -- which now stands out, since hardly anyone else is sending
to it.

I'll grant you that lots of these attacks seem to involve only
one adversary doing the attack at a time, so we can confuse him by
intentionally sending dummy traffic to cover these situations. But that
doesn't entirely solve the problem.

Other opinions?
--Roger