
Re: [freehaven-dev] Onion Routing / Freedom use dedicated servers: why?



On Thu, Apr 12, 2001 at 09:13:48AM -0400, Michael J Freedman wrote:
> Freedom network.  They also don't seem to explicitly mention two-way
> anonymous communications -- most requests are being sent to an explicit
> hostname/port or IPaddr/port, although there really isn't a reason why
> this endpoint can't just be another application proxy -- like our meeting
> place -- to do recipient-anonymous communication, like we suggest.

I seriously think that our meeting-place notion as we've described it is
novel. (Perhaps not novel as an idea, but I don't know of anybody else who
has actually pushed through the details.)

> (As a sidenote, I had been considering this model before I read this paper
> -- that "Bob" maybe shouldn't be named by {IP/hostname, PK}, with the
> meeting place matching the PK when a request is sent.  Instead, Bob can
> merely be named by the hostname/port (IP/port) of his meeting place
> servers, since the meeting place has, by definition, explicitly allocated
> one port that forms one endpoint of Bob's virtual circuit.  Basically, we
> can think of this endpoint as a NAT between the tarzan net (including Bob)
> and the "wider" internet (obviously, the tarzan net is just an overlay
> network (subset) of the latter).)

So in this case, from the Tarzan layer of abstraction, "Bob" really is
the port/host that the meeting place is listening on. On a higher level of
abstraction, "Bob" might have some external PK which he uses to establish
linkability between (some of) his transactions -- but that's not something
Tarzan cares about. Is that what you're saying here? (If so, I agree.)
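To make the NAT analogy concrete, here's a toy sketch of the naming idea -- Bob is named only by the (host, port) pair his meeting place allocates, and the meeting place maps that port onto the circuit terminating at Bob. All names here (MeetingPlace, circuit ids, the hostname) are invented for illustration, not anything from Tarzan's actual design:

```python
# Hypothetical sketch: the meeting place acts as a NAT between the tarzan
# overlay and the wider internet. Bob registers over a circuit and gets back
# a (host, port) pair -- that pair *is* his externally visible name.

class MeetingPlace:
    def __init__(self, host):
        self.host = host
        self._next_port = 9000          # ports handed out to hidden recipients
        self._port_to_circuit = {}      # external port -> Bob's circuit id

    def register(self, circuit_id):
        """Bob (reached over a Tarzan circuit) asks for an external name."""
        port = self._next_port
        self._next_port += 1
        self._port_to_circuit[port] = circuit_id
        return (self.host, port)        # no PK needed at this layer

    def forward(self, port, payload):
        """An outside client connects to (host, port); relay inward."""
        circuit_id = self._port_to_circuit[port]
        return (circuit_id, payload)    # hand off to the circuit layer

mp = MeetingPlace("mp.example.org")
bobs_name = mp.register(circuit_id=42)
print(bobs_name)                 # ('mp.example.org', 9000)
print(mp.forward(9000, b"hello"))
```

Any higher-level linkability (Bob signing with an external PK) would sit entirely above this mapping, which is the point: the overlay only ever sees the port.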

> But, the reason I write:  both Freedom and Onion Routing have built a
> static network with semi-permanent topology for all the mix-net type
> communication. There is an explicit separation between users (clients) and
> the (mix) servers.  Some reasons?

I think your answers are correct.

I'll explain them from my angle. The fundamental reason is that getting
high performance and high reliability out of this sort of system is hard.
It's so hard that taking some of the free variables out of the system
(reducing its complexity) is a really good idea.  One of the worst
(most free) variables is the "peers are servers" notion. It introduces
such tough questions as "so how will those dynamic nodes perform? How
can we ensure some lower bound on their reliability?  What about nodes
that appear, disappear, etc often? How do we keep a dynamic list of
nodes and make it so all the users have a recent enough list? Is there
a way to mark or tag nodes that consistently misbehave, or are even just
dead? What about ways of picking paths so that you end up with the amount
of bandwidth that you want?" Basically, these are the same reasons that
Publius went with a static list of servers too.
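The bookkeeping those questions imply can be sketched in a few lines. This is a toy illustration with invented names and thresholds (not from Freedom, Onion Routing, or Tarzan): track when each node was last seen and how often it failed, so stale or consistently misbehaving nodes can be filtered out of the list users draw from:

```python
# Toy sketch of dynamic node-list maintenance: freshness plus a crude
# misbehavior counter. All names/thresholds are made up for illustration.
import time

class NodeDirectory:
    MAX_AGE = 3600          # seconds before an entry counts as stale
    MAX_FAILURES = 3        # failures before a node is considered misbehaving

    def __init__(self):
        self._nodes = {}    # addr -> {"last_seen": t, "failures": n}

    def heard_from(self, addr, now=None):
        """Node checked in; refresh its entry and forgive past failures."""
        now = time.time() if now is None else now
        self._nodes[addr] = {"last_seen": now, "failures": 0}

    def report_failure(self, addr):
        """A user reports that a circuit through this node failed."""
        if addr in self._nodes:
            self._nodes[addr]["failures"] += 1

    def usable(self, now=None):
        """Nodes that are fresh and not flagged as misbehaving."""
        now = time.time() if now is None else now
        return [addr for addr, info in self._nodes.items()
                if now - info["last_seen"] < self.MAX_AGE
                and info["failures"] < self.MAX_FAILURES]

d = NodeDirectory()
d.heard_from("a", now=0)
d.heard_from("b", now=0)
for _ in range(3):
    d.report_failure("b")
print(d.usable(now=100))    # ['a']
```

Even this much raises the hard parts the questions above point at: who runs the directory, how users get a recent enough copy, and whether failure reports can themselves be forged -- which is exactly where a reputation system comes in.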

Clearly there is a lot of power to be gained by leveraging the resources
of the participating peers. But historically, it's been very difficult
to make the benefit outweigh the headaches raised by the above questions.

In a way, the Tarzan design was created with the intent of saying "look,
it won't work without accountability!" and then applying the various
ideas in chapter 16 of the O'Reilly book until it works. I really expect
that without some kind of (at least rudimentary) reputation system, the
problems caused by unreliable nodes, and the difficulty of picking a path
that yields 'good' performance, will keep the system from reaching a
critical mass of users.

> 5. traffic analysis?  (more traffic over few links, harder to look at
> timing attacks and whatnot;  still, fewer links to examine)

I would guess 'no' to this one. I think any sort of streaming system
has already lost to traffic analysis attacks if the adversary is strong
enough to make this distinction matter.
 
--Roger