
[freehaven-dev] Re: Freehaven CVS Commit



On Sun, Mar 18, 2001 at 04:23:29PM -0500, dmolnar@belegost.mit.edu wrote:
> -----------------------------------------------------------------
> dmolnar   Sun Mar 18 16:23:29 EST 2001
> 
> Update of /home/freehaven/cvsroot/doc/mix-acc
> In directory belegost.mit.edu:/extra/home/dmolnar/doc/mix-acc
> 
> Modified Files:
> 	mix-acc.ps mix-acc.tex 
> Log Message:
> 
> *  Added "three approaches to MIX-net reliability"
> *  Finished 1st draft of the related work section
> *  Moved MIX reliability model to section 3, placed notes towards
>    general MIX model for section 3. 
> 
> Comments I want - how to say, exactly, what the difference is between the
> notion of "reliability" we have and the notion of "robustness" considered in
> previous work. 
 
"I don't know." But I'll talk about it for a while, to get other people
to think about it and disagree with me.

In the notion of robustness as I understand it from previous work,
people focus on a $k$-of-$n$ scheme: as long as $k$ of the $n$ mix nodes
behave correctly, your message will get delivered.
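To make that concrete, here's a quick back-of-the-envelope sketch (mine,
just for illustration -- the node count, threshold, and per-node
probability are made up): if each of the $n$ mixes behaves correctly
with independent probability p, the message gets delivered as long as at
least $k$ of them do.

# Illustrative only: delivery probability of a k-of-n mix scheme,
# assuming each node independently behaves correctly with probability p.
from math import comb

def k_of_n_delivery(n, k, p):
    """P(at least k of the n nodes behave correctly)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(k_of_n_delivery(5, 3, 0.9))  # ~0.991: survives up to two bad nodes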

There are two differences that I see between this context and ours:

1) Our mix relies on a sequential set of nodes for the 'path'. The mixes
you describe above probably use multiple paths and redundant processing
and stuff like that to handle the loss of any given mix. (Correct me if
I'm wrong, but that's generally considered infeasible by the community
who actually builds and uses these things, right?) Whereas in our case,
if your message hits "the right mix" -- that is, even one failed node on
the path -- the whole message will fail. (There's a rough sketch of the
contrast after point 2.)

2) We think about the time dimension too. This is a key point. We're
not concerned so much about correct behavior for a given message as we
are about overall percentage of successful messages[*], not just at a
given point in time, but "going forward" over the lifetime of the mix
as it tends (we hope) toward equilibrium.
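Here's a rough sketch of both points, with made-up numbers (this is just
illustration, not our protocol): for point 1, a sequential path fails if
any single hop fails, so delivery probability drops off as p^length
instead of the k-of-n figure above; for point 2, the number we care
about is the running fraction of successful messages over the lifetime
of the mix, not success at one instant.

# Illustrative only -- made-up numbers, not a spec.

def sequential_delivery(p, path_length):
    """P(delivery) when every one of path_length hops must behave."""
    return p ** path_length

print(sequential_delivery(0.9, 5))  # ~0.59, vs ~0.99 for 3-of-5 above

# Point 2: track the overall fraction of successful messages so far,
# over the lifetime of the mix, rather than looking at a single instant.
def overall_success_rate(outcomes):
    """outcomes: list of True/False, one per message, in delivery order."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0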

The 'robustness' which (e.g.) Castro and Liskov's Byzantine fault-tolerant
system gets is neat because it can handle some k failed (or malicious) nodes -- no
matter how they fail. But they skip over the problem of how to identify
nodes that are "subtly" failed. Their goal is to maintain service in the
face of a network which has temporary node outages or compromises. With
their proactive recovery mechanisms, nodes which were failed but now want
to 'rejoin the system' can be rebuilt and used again. They really look
at it as a distributed computation where you want to "get the answer"
even if people are trying to mess you up. So this notion of robustness
is very general.

Whereas our goal is to maintain service in the face of a network which
provides varying levels of performance. A key part of this is identifying
nodes with poor (or no) performance and de-emphasizing the use of those
nodes in our path choices. It's not enough to say "that node is down
right now, don't use it for this message" -- we need to add "and don't
use it down the road either". And we need a protocol and infrastructure
for making it work.
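To make the "don't use it down the road either" part concrete, here's
one hypothetical way the path-selection side could look -- the names and
the scoring rule are mine, a sketch under assumptions, not the protocol
and infrastructure we'd actually have to design: keep a persistent
per-node reliability score built from observed outcomes, and weight path
choices by it, so a node that dropped messages last week is still chosen
less often today.

# Hypothetical sketch of reliability-weighted path selection. The
# scoring rule (Laplace-smoothed success fraction) and names are mine.
import random

class NodeRecord:
    def __init__(self):
        self.successes = 0
        self.failures = 0

    def score(self):
        # Unknown nodes start at 0.5; repeated failures push this down.
        return (self.successes + 1) / (self.successes + self.failures + 2)

def report(records, node, delivered):
    """Feed back an observed outcome so bad nodes get de-emphasized later."""
    rec = records.setdefault(node, NodeRecord())
    if delivered:
        rec.successes += 1
    else:
        rec.failures += 1

def choose_path(records, length):
    """Pick `length` distinct nodes, weighted by observed reliability."""
    nodes = list(records)
    weights = [records[n].score() for n in nodes]
    path = []
    while len(path) < length and nodes:
        pick = random.choices(nodes, weights=weights, k=1)[0]
        i = nodes.index(pick)
        nodes.pop(i)
        weights.pop(i)
        path.append(pick)
    return path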

--Roger

[*] Yes, I'm being imprecise. Using this as a metric for how cool our
mixnet is doesn't work, because an adversary can drive our coolness
arbitrarily low by sending messages which he knows will fail. But you
get the point.