
[freehaven-cvs] dynamic-alpha and tau mixing sections



Update of /home/freehaven/cvsroot/doc/alpha-mixing
In directory moria:/tmp/cvs-serv23932

Modified Files:
	alpha-mixing.tex 
Log Message:
dynamic-alpha and tau mixing sections

Index: alpha-mixing.tex
===================================================================
RCS file: /home/freehaven/cvsroot/doc/alpha-mixing/alpha-mixing.tex,v
retrieving revision 1.26
retrieving revision 1.27
diff -u -d -r1.26 -r1.27
--- alpha-mixing.tex	11 Mar 2006 02:34:19 -0000	1.26
+++ alpha-mixing.tex	11 Mar 2006 02:41:00 -0000	1.27
@@ -584,9 +584,8 @@
 0$.  For more sensitive traffic, we might initially try to follow
 networks such as Mixminion and Mixmaster. But how can we do that? 
 These use a dynamic batching strategy in which messages are chosen
-for the current batch randomly by the mix from a collective pool
+for the current batch randomly by the mix from a collective pool,
 while alpha mixing is based on individual choices made by the sender.
-
 We now turn to various generalizations on the basic deterministic-alpha
 mix design, including ways to combine these features.
 
@@ -612,7 +611,7 @@
 
 We could include timestamps along with the $\alpha_0$ that each mix
 receives with a message and require that the message be dropped if it
-arrives more than some delta from the timestamp.  This would make
+arrives more than some delta from the timestamp. This would make
 timed alpha mixes essentially equivalent to stop-and-go mixes, which
 might prove useful against timing correlations by such an adversary.
 For example, Alice might send one hundred messages to Bob that are
@@ -627,11 +626,11 @@
 necessarily including $0$). This would (1) prevent such an attack if
 the adversary cannot predict her distribution, (2) still have as much
 predictability on delivery time as stop-and-go mixes, and (3) unlike
-
 stop-and-go, still allow eventual delivery of all messages not
 completely blocked. We are not primarily focused in this paper
 on end-to-end timing attacks, and we will say no more about them.
 
+
 \subsection{Variations on deterministic-alpha mixing}
 
 In the basic threshold deterministic-alpha mix, if there are
@@ -655,47 +654,94 @@
 The simplest solution is to mix together all messages that emerge at
 once. This will prevent an adversary from learning the sensitivity of
 messages by observing their alpha levels from their positions in the
-batch. This together with minimal dummy scheme presented in
+batch. This together with the minimal dummy scheme presented in
 Section~\ref{sec:dummies} would substantially reduce the effect of
 blending: an adversary emptying the mix of all messages up to the
 highest reasonably expected level, trickling in a message, then
 flooding with $\alpha_0 = 0$ messages repeatedly to learn the
-sensitivity of that message.
+sensitivity of that message and its next destination.
 
 We could also require that the firing of the mix be
 threshold-and-timed, which would prevent the adversary from triggering
 an alpha-stack dump by only allowing messages of one alpha level to
-emerge in one time interval.
+emerge in one time interval. It is unclear what local advantage this
+has over the above multilevel-batching threshold mix. In addition,
+having timed-and-threshold batching would preclude the predictability
+advantages of timed mixes while the multilevel-batching approach could
+potentially offer faster performance. The primary risk of not having
+timing limitations on mix firing is the end-to-end effects that the
+adversary could induce by flooding, which would not be countered by
+our dummy scheme. However, that assumes a powerful adversary that can
+flood and watch the entire network.  The nice thing about alpha mixing
+is that we can still have both good realtime properties and threshold
+protections together.
 
+There are various ways to have realtime and threshold properties together
+in one mix design. We will note two of them.
 
+\subsection{Dynamic-$\alpha$ mixing}
 
-it may be possible to conduct active attacks
-on a message in an alpha mixnet by
+In this design, alphas are assigned to messages as before, except that
+instead of decreasing deterministically by one after each mix firing,
+they evolve according to a probabilistic function $f$:\\
+$\alpha_{M,i+1} = f(\alpha_{M,i}, \mathit{Pool}(\alpha_{M,i}))$
 
-Both timed deterministic-alpha mixes and stop-and-go mixes have the
-vulnerability of all purely timed mixes: they provide no anonymity
-if others are not sending through them at the same time. This can
-be addressed in part by padding, and we have introduced a relatively
-lightweight padding scheme above. But, 
+$\mathit{Pool}(\alpha_{M,i}) = | \{M' : 1 \leq \alpha_{M',i} \leq
+\alpha_{M,i} \} | $
 
+We believe that $f$ would typically be monotonically nonincreasing,
+i.e., that $\alpha_{M,i+1} \leq \alpha_{M,i}$.
+The sender gives $f_M$ to a mix along with $\alpha_{M,0}$. We would
+expect there to be a small number of easy-to-compute functions $f$
+from which senders can choose. The
+idea is that alphas decrease but only as a function of the
+current alpha level of the message and how many messages
+are in the pool below it. We have also limited the input of
+$f$ to messages that were given an $\alpha_0 > 0$, although this
+is not necessary. This effectively puts each message in a dynamic
+pool, which could also be timed.
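To make the update rule concrete, here is a minimal Python sketch of one firing of a dynamic-$\alpha$ mix. The particular choice of $f$ (drop probability growing with the size of the pool below the message) is purely illustrative and not specified by the text; any monotonically nonincreasing $f$ would serve, and the function and message names are hypothetical.

```python
import random

def pool_size(alphas, alpha_m):
    """Number of messages whose alpha lies in [1, alpha_m]: the pool below M."""
    return sum(1 for a in alphas if 1 <= a <= alpha_m)

def f(alpha, pool, rng=random):
    """Hypothetical sender-chosen update rule: the fuller the pool below a
    message, the more likely its alpha drops by one.  Never increases alpha,
    matching the expectation that f be monotonically nonincreasing."""
    if alpha == 0:
        return 0
    p = pool / (pool + 1)  # drop probability grows with the pool size
    return alpha - 1 if rng.random() < p else alpha

def fire(messages, rng=random):
    """One mix firing: send every message at alpha 0, update the rest.
    `messages` maps a message id to its current alpha level."""
    sent = [m for m, a in messages.items() if a == 0]
    prev = list(messages.values())  # alpha levels from before this firing
    remaining = {}
    for m, a in messages.items():
        if a == 0:
            continue
        remaining[m] = f(a, pool_size(prev, a), rng)
    return sent, remaining
```

Note that each message's next alpha depends only on its own current level and the pool below it, so the mix never needs to know the sender's sensitivity directly.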
 
-Here's a design. Assign alphas as we have been thinking, except instead
-of them deterministically decreasing one after each firing, there is
-a probabilistic function.\\
-$x_{\alpha,i+1} = f(x_{\alpha,i}, \mathit{Pool}(x_{\alpha,i}))$ where
-$\mathit{Pool}(x_{\alpha,i}) = | \{x : 1 \leq x_{\alpha ',i-1} \leq
-x_{\alpha,i-1} \} | $
+\subsection{$\tau$ mixing}
 
-My assumption is that f is monotonically nonincreasing, but maybe not.
-This should get us something like generalized mixes only a bit more
-general because, like the alphas, the $f$ is not set by the mix but by
-the individuals. (This is maybe where you were trying to go when
-observing that you couldn't add timestamps in the generalized mix
-paper?) Idea is that alphas decrease but only as a function of where
-they are and how many messages are in the pool below them (and were
-above 0 at some point, since I figured those messages shouldn't
-count). This gives us a dynamic alpha pool (either timed or threshold
-depending).
+We have been describing alpha all along as a level which determines a
+batch of messages that a given message will be sent with, after (or
+possibly also together with) the messages in the alpha levels that are
+below it in the stack. This lends itself naturally to the batching
+concept familiar in the mix literature. Threshold batching would seem
+to imply unpredictable delays since we don't know how long it will
+take for a threshold number of messages to accumulate at $\alpha = 0$.
+Timed mixing on the other hand will allow a predictable delay, but
+this is both a lower and an upper bound on the latency of delivery.
+Yet threshold batching potentially allows messages to be processed as
+quickly as they arrive, provided the batch size does not get in the
+way. 
+
+This is the idea behind tau mixing: a message $M$ arrives at a mix
+with an associated threshold $\tau_{M,0}$ of how many other messages
+must be sent by the mix between the arrival and sending of $M$.
+Multiple messages that have the same tau can be sent together after
+mixing, e.g., three messages that arrive with $\tau_0 = 2$ are sent
+together. Messages that are to be sent as quickly as possible are
+assigned $\tau_{M,0} = 0$.  This can provide realtime properties
+limited only by the processing speed of the network components. For
+example, if a message with $\tau_0 = 0$ arrives at a mix containing
+messages with current $\tau = 1$, $\tau = 2$, and $\tau = 3$, the
+latter three should be mixed and sent together after sending the
+former. (We assume messages with $\tau_0 = 0$ should always be sent as
+quickly as they arrive without the delay associated with mixing.)
+Messages that are more sensitive should be assigned a $\sum
+\tau^{(i)}_{M,0}$ from a private distribution on a range that
+increases with sensitivity. Many of the same features of alpha mixing
+apply, e.g., the dummy strategy, how to distribute $\sum \tau$ across
+the mixes in the path, etc. If taus are purely threshold values, then
+an adversary that is powerful enough to perform a sustained flush of
+the entire network will be able to conduct end-to-end timing
+correlations on more sensitive messages (assuming we stick to a purely
+internally routed dummy scheme). However, both a threshold and a
+random minimum delay at each mix can be given as security parameters.
+This will prevent effective flushing unless the adversary can also
+perform sustained blocking of all inputs to the mixnet. Even then
+the attack will be substantially slowed.
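The cascading behavior described above (a $\tau_0 = 0$ arrival flushing pending messages with $\tau = 1$, $2$, and $3$) can be sketched as follows. The class name and interface are hypothetical, and a real mix would batch and shuffle the cascading departures rather than emit them one at a time.

```python
class TauMix:
    """Sketch of a tau mix: message M may leave only after tau other
    messages have been sent by the mix since M arrived.  Departures
    cascade, since each one counts toward every remaining threshold."""

    def __init__(self):
        self.pending = []  # list of [message, remaining_tau]

    def receive(self, msg, tau):
        """Accept a message, then flush everything whose threshold is met."""
        self.pending.append([msg, tau])
        return self._flush()

    def _flush(self):
        sent = []
        progress = True
        while progress:
            progress = False
            for entry in list(self.pending):
                msg, remaining = entry
                if remaining == 0:
                    self.pending.remove(entry)
                    sent.append(msg)
                    # one departure counts toward all other pending messages
                    for other in self.pending:
                        other[1] = max(0, other[1] - 1)
                    progress = True
                    break
        return sent
```

For instance, with messages at $\tau = 1$, $2$, and $3$ already pending, a single $\tau_0 = 0$ arrival triggers a cascade that empties the mix, as in the example above.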
 
 \section{Conclusion}
 
