
Re: Boulder Tech report on low-resource routing attacks on Tor

Hi all,
I've been thinking about how exit and entry nodes controlled by the same adversary can easily determine whether they are on the same circuit, due to the predictable timing of circuit set-up. Well, what about altering that? Perhaps Tor nodes should form long-lived "exploratory" circuits (a la I2P). Tor should slowly extend these, at unpredictable intervals, perhaps over tens of minutes or even hours. Most of these circuits would not be completely formed, and thus should not be used to route data. Since there are many of them, if one dies (they are long-lived, so the chance of early death is not negligible), so be it. This would make circuit-formation timing much less predictable. As an added benefit, a Tor node would have a number of 1-hop or 2-hop circuits available at any time, and by extending those, instead of forming new full-length circuits from scratch, we could make recovery from circuit failure faster.
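A rough sketch of what I have in mind, in Python (all names, hop counts, and timing constants here are illustrative, not actual Tor internals):

```python
import random
import time

FULL_LENGTH = 3                 # hops in a complete circuit
POOL_SIZE = 8                   # exploratory circuits to keep around
MIN_WAIT, MAX_WAIT = 60, 3600   # seconds between extensions (randomized)

class Circuit:
    """A partially built exploratory circuit with its own random timer."""
    def __init__(self):
        self.hops = 0
        self.next_extend_at = time.time() + random.uniform(MIN_WAIT, MAX_WAIT)

    def ready(self):
        return self.hops >= FULL_LENGTH

def tick(pool, now):
    """Extend any circuit whose randomized timer has expired, and keep
    the pool topped up with fresh 0-hop circuits."""
    for c in pool:
        if not c.ready() and now >= c.next_extend_at:
            c.hops += 1  # in Tor this would be an EXTEND to a new relay
            c.next_extend_at = now + random.uniform(MIN_WAIT, MAX_WAIT)
    while len(pool) < POOL_SIZE:
        pool.append(Circuit())

def get_circuit(pool):
    """Take the most-complete circuit available; finishing a 2-hop
    circuit is faster than building a 3-hop one from scratch."""
    best = max(pool, key=lambda c: c.hops)
    pool.remove(best)
    return best
```

The key property is only that extensions happen on independent, randomized timers, so an observer cannot correlate entry-side and exit-side set-up events by their timing.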

Thus spake Paul Syverson on Wed, 07 Mar 2007:

> From: Paul Syverson <syverson@xxxxxxxxxxxxxxxx>
> Subject: Boulder Tech report on low-resource routing attacks on Tor
> Date: Wed, 7 Mar 2007 17:16:43 -0500
> Cc: Kevin Bauer <kevin.bauer@xxxxxxxxxxxx>,
> 	Damon McCoy <Damon.Mccoy@xxxxxxxxxxxx>,
> 	Dirk Grunwald <grunwald@xxxxxxxxxxxxxxx>,
> 	"Douglas C. Sicker" <Douglas.Sicker@xxxxxxxxxxxx>
> To: or-talk@xxxxxxxxxxxxx
> Reply-To: or-talk@xxxxxxxxxxxxx
> The following are some comments on the Univ. Colorado at Boulder tech
> report "Low-Resource Routing Attacks Against Anonymous Systems" that
> has been getting lots of press and other web attention lately and been
> somewhat discussed on this list.  It is only today that I have managed
> to find time to sit down and read the paper.
> The nutshell for people that don't want to read the details below:
> A good paper. It does _not_ show Tor to be broken. (Nor did it ever
> claim to. I only state that because of some of what has appeared in
> the blagnpress, which to their credit, the authors tried to curtail.)
> It is a nice contribution, especially in showing the limitations of
> the current approach to entry guard selection. Overstates its novelty
> over prior work, which is really unnecessary because it makes valuable
> contributions of its own (and which is more or less my fault not theirs,
> cf. below).
> More details:
> This is a nice piece of work. Its greatest contribution is in
> directing attacks on entry guards. In the theory and simulation work
> in which such ideas were introduced by Wright et al. they were
> introduced (as "helper nodes") to reduce vulnerability. As a recent
> addition to Tor, both the nature of the defense they provide and the
> possible risks from how they are used in actual implementation and
> deployment needed to be explored. It was understood from the start
> that there is something of a tradeoff in introducing them. It was
> realized that profiling without entry guards was in practice trivial
> enough that the additional risk of adding entry guards and thus
> simplifying and enhancing profiling for anyone who unfortunately had
> an adversary guard node was clearly worth it. I don't think this paper
> changes that. However, by attacking the guard selection process
> itself, the research forces us to examine it more closely.
> What they did was apply techniques that Lasse and I developed in
> "Locating Hidden Services" to ordinary client circuits. Though we had
> said this would be straightforward to do, we didn't actually do it.
> Because we were focused on the deployed Tor network we could not
> pursue this sort of attack there. We were also focused primarily on
> what could be accomplished with a single hostile node. This limits
> the attack to cases of either a hostile website (as in Murdoch and
> Danezis, and as mentioned on p. 10 of this tech report) or a hostile
> client and a hidden service, which is what we reported on.
> Deploying a Tor network on PlanetLab and using synthetically generated
> data removes some of the "in the wild" reality from the results.  But,
> by accepting this limitation, it allowed them to obtain data at all
> about Tor circuits for ordinary use (not hidden services), much in
> the practical spirit of onion routing.
> The experimental networks were more than an order of magnitude
> smaller than the current deployed Tor network. One cannot be sure
> something will scale until actually trying it, but in this case there
> is no reason to doubt it does scale. Still, if we take the 9 percent
> figure given by the authors as an arbitrary line at which attacks
> become significant, that is still almost a hundred nodes in the
> current network. At about twice the entire size of the experimental
> networks that were set up, this starts to be a bit more than
> low-resource. Still, one could do quite a bit with less than 9
> percent. Also, as a counter to my own point, see "On the
> countermeasures" below.
> On prior work:
> Before I start noting all the things that the authors didn't properly
> cite, I should observe that they first contacted me about their work
> way back in last October and have cc'd me on correspondence with
> Roger, who did read their work and respond to them. If these are
> indeed omissions and oversights in their paper, then it is my fault
> because they gave me plenty of opportunity to comment before it hit
> the interblag.  I just didn't squeeze in the time before now.
> I already mentioned that the basic techniques are similar to what was
> in my paper with Lasse Øverlier, "Locating Hidden Servers".  The
> authors say that they think theirs is "the first approach capable of
> [compromising anonymity] before the client starts to transmit any
> payload data". I believe that the code we ran would be entirely able
> to do this. We mentioned it only briefly in passing in our paper, and
> we didn't actually do it. The authors of this report did.
> They also say they have introduced the idea of nodes falsely reporting
> bandwidth and uptime. As they note this is central to the way their
> basic attack works. As they do not note, this was explicitly used by
> Lasse and me in our attacks. I quote from our paper, "Alice's server
> node will report a false higher uptime and the maximum network
> bandwidth to the directory server in order for other nodes to trust it
> for their circuits. This is still possible as there is (yet) no method
> for keeping reliable track of uptime at the different servers."
> They also introduce the idea of selective path disruption to speed up
> attacks by dropping circuits, because that will cause more circuits to
> be built. This was also part of our attacks, although since we
> controlled the client connecting to the hidden service, it could be
> done there.
> The statement that the algorithm for choosing entry guards was
> "implemented to protect the first hop of a circuit by using what are
> believed to be more reliable and trustworthy nodes" is false.  Using
> more reliable nodes was seen as sensible because it should minimize
> the number of entry guards a client uses, which is the whole
> point. Nobody thought that this made them more trustworthy. In fact
> the general threat of adversaries running reliable nodes (in both
> onion routing and mixnets) to attract more traffic is well recognized.
> On the threat model. The design document does use the c^2/n^2 basic
> result from "Towards an Analysis of Onion Routing Security" as a
> starting point, but this was not thought to be accurate once we had
> substantial deployment, and was acknowledged as such.
> Cf., "Challenges in deploying low-latency anonymity"
> http://www.onion-router.net/Publications.html#challenges
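[An aside from me: the c^2/n^2 figure is just the probability that a uniformly, independently chosen entry and exit both fall among an adversary's c nodes out of n. A quick illustration in Python, with made-up numbers; the paper's point is precisely that lying about bandwidth and uptime pushes the real compromise rate far above this uniform baseline:]

```python
def compromise_probability(c: int, n: int) -> float:
    """Chance that both the entry and the exit of a circuit are
    adversary-controlled, assuming uniform independent selection."""
    return (c / n) ** 2

# e.g. the report's 9 percent line on a hypothetical 1000-relay network:
print(f"{compromise_probability(90, 1000):.4f}")  # roughly 0.0081, under 1%
```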
> On the countermeasures:
> Rather than discuss all of them in detail in an already overly long
> post, I'll just note that I think they may help against an adversary
> that has only several hundred to a few thousand dollars in resources.
> The authors note that they are considering just that case, and it is
> certainly worthwhile to see what it takes to mount an attack and what
> works against a low resource adversary. That was also part of our
> motivation to show what could be done with a single bad node.  But, an
> adversary that has a few tens of thousands of dollars can simply run
> many reliable, high-bandwidth nodes and thus mount the attack while
> remaining invulnerable to any countermeasure against lying. Michael Gersten
> noted the threat of this attack in a separate context in a post to
> this list yesterday (March 6). And it has long been recognized as a
> potential threat to Tor in general. I have begun to look at
> countermeasures that should work unless the adversary owns major hunks
> of the network, e.g., social network based, but will not get into that
> further here.
> More than you wanted to read. Hope it was useful anyway.
> aloha,
> Paul

Eugene Y. Vasserman