[freehaven-dev] Onion Routing / Freedom use dedicated servers: why?
Last night, I read through the Onion Routing paper again (find publications
and information at http://www.onion-router.net/). They actually do many
things of a similar nature to our proposed Tarzan design, with some key
differences.
They use both application proxies and onion proxies that can handle quite a
number of protocols, as well as raw socket requests, by massaging the requests
and building XML. Basically, users connect to or run these proxies themselves;
the proxies are in charge of building onions and sending them through the
onion routers.
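(To make the layering concrete, here's a toy sketch in Python of the
onion-building step: one encryption layer per hop, wrapped innermost first.
The names, the JSON header, and the stand-in cipher are all mine, purely for
illustration; this is not the actual Onion Routing wire format or crypto.)

    import hashlib, json

    def toy_encrypt(key, data):
        # Stand-in stream cipher: SHA-256 keystream in counter mode.
        # Illustration only -- not reviewed, not what Onion Routing uses.
        stream = b""
        counter = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    def build_onion(route, payload):
        # route = [(next_hop, per_hop_key), ...], listed from the first hop
        # out to the exit.  Wrap the innermost layer first, so each hop can
        # peel exactly one layer and learn only its successor.
        onion = payload
        for next_hop, key in reversed(route):
            header = json.dumps({"next": next_hop}).encode() + b"\n"
            onion = toy_encrypt(key, header + onion)
        return onion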
The major difference, however, is that the onion routers -- which do the
actual mix-style anonymous transport -- are static and permanent, much like the
Freedom network. They also don't seem to explicitly mention two-way
anonymous communication -- most requests are sent to an explicit
hostname/port or IPaddr/port, although there really isn't any reason why this
endpoint couldn't just be another application proxy -- like our meeting place --
that does recipient-anonymous communication, as we suggest.
(As a sidenote, I had been considering this model before I read the paper:
maybe "Bob" shouldn't be named by {IP/hostname, PK}, with the meeting place
matching the PK when a request is sent. Instead, Bob can merely be named by
the hostname/port (IP/port) of his meeting-place server, where the meeting
place, by definition, explicitly allocates one port that forms one endpoint
of Bob's virtual circuit. Basically, we can think of this endpoint as a NAT
between the tarzan net (including Bob) and the "wider" Internet; obviously,
the tarzan net is just an overlay network (subset) of the latter.)
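(A rough sketch of that allocation, just to pin the idea down; the class and
method names below are hypothetical, not anything from the Tarzan or Onion
Routing code:)

    class MeetingPlace:
        # Toy model: the meeting place allocates one public port per
        # recipient and maps traffic on that port onto his virtual circuit.

        def __init__(self, public_host):
            self.public_host = public_host
            self.next_port = 20000    # arbitrary starting port for the sketch
            self.circuits = {}        # public port -> recipient's circuit id

        def register(self, circuit_id):
            # Bob registers a circuit; afterwards he is "named" only by the
            # meeting place's (host, port), never by his own IP, hostname, or PK.
            port = self.next_port
            self.next_port += 1
            self.circuits[port] = circuit_id
            return (self.public_host, port)

        def forward(self, port, data):
            # Traffic arriving on Bob's port is relayed down his circuit,
            # like a NAT translating between the overlay and the wider Internet.
            return (self.circuits[port], data)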
But, the reason I write: both Freedom and Onion Routing have built a
static network with a semi-permanent topology for all the mix-net-style
communication. There is an explicit separation between users (clients) and
the (mix) servers. Why?
Major reasons?
1. efficiency: high-bandwidth links (but more traffic over fewer links?)
2. reliability: better guarantee of uptime & service (?)
Others?
3. better abuse control (though I doubt Onion Routing worried much about this)
4. security? (if they choose semi-trusted operators, adversaries are less
likely to "run" nodes. Still, if a few are compromised, security is in a much
worse state. And, since all Freedom AIPs (servers) run the same O.S.
(Red Hat 6.x), one exploit could crack all the servers.)
5. traffic analysis? (more traffic over fewer links makes timing attacks and
the like harder to mount; still, there are fewer links to examine)
Others seem to "propose" a model similar to Tarzan's. Why don't the major
deployed systems follow it? P2P wasn't in vogue? They wanted better
efficiency and reliability?
But the crux of my question: can anyone think of any practical engineering
problems that make the Tarzan design (clients ~ servers) impractical or
insurmountable? Definitely something to muse about before we start protocol
and system building in earnest.
Thanks,
--mike
P.S. On the onion-routing page: "The Onion Router Prototype Network is
Off-Line as of January 28, 2000. Evaluation of the proof-of-concept
prototype has concluded. A wider test of the second generation system is
pending." Anybody know if they are actually building v2, or if this
project has effectively ended? Most of the people seem to be listed as gone...
-----
"Not all those who wander are lost." mfreed@mit.edu