
Re: Router twins are obsolete?



On Sat, Aug 23, 2003 at 05:50:26AM -0400, Roger Dingledine wrote:
> "Router twins" are two or more onion routers that share the same public
> key (and thus private key). Thus if an extend cell asks for one and it's
> down, the router can just choose the other instead. This provides some
> level of redundancy/availability.
> 
> Indeed, Paul pointed out recently that the new ephemeral-DH session key
> means that router twins aren't as much of a security risk as before:
> sharing the RSA key just means that any of them can 'answer the phone',
> as it were, and then proceed to negotiate a session key that the other
> twins don't know. That is, if routers A1 and A2 have the same private
> key, and the adversary compromises A1, he still can't eavesdrop on the
> conversation Bob has with A2.
> 
> Here are some of our reasons for wanting router twins, with reasoning
> why they're no longer relevant:
> 
> A) Having long-term reliable nodes is critical for reply onions, since
> the nodes in a reply path have to be there weeks or months later. Having
> redundancy at the router level allows a reply path to survive despite
> node failure.
> 
> However, we're not planning to do reply onions anymore, because rendezvous
> points are more flexible and more robust.
> 

However, rendezvous points have not been completely spec'ed, implemented,
or analyzed, so we only have educated guesses so far about what is preferable.
Also, reply onions might be used in a complementary or orthogonal way;
we just haven't examined this area enough to say one way or the other.

> B) If Alice chooses a path and a node in it is down, she needs to choose
> a whole new path (that is, build a whole new onion). This endangers her
> anonymity, as per the recent papers by Wright et al.
> 
> However, now if an extend fails, it sends back a truncated cell, and Alice
> can try another extend from that point in the path. This seems more
> safe (well, not as unsafe). And besides, it's not like we were actually
> allowing Alice's circuit to survive node failure; we were just allowing
> circuit-building to tolerate node failure, sometimes.
> 

I don't want to be distracted too much by this. Since it is the entry
and exit points that matter, as far as the analysis we've done has shown
(and possibly the nodes adjacent to them, depending on the configuration),
the assumptions behind those attacks don't make them an imminent
practical threat to OR-type systems.
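
To make the truncate-and-retry behavior concrete, here is a minimal
sketch, assuming a generic directory of routers and with extend()
standing in for the real cell exchange (none of these names come from
the actual code). The point is just that a failure costs one retry at
the failed hop rather than a whole new onion:

    import random

    def build_circuit(routers, path_len, extend, max_tries=3):
        # Build a circuit hop by hop.  If an extend fails, the onion
        # router sends back a truncated, so the partial circuit is
        # still usable and only the failed hop is retried.
        circuit = []
        while len(circuit) < path_len:
            for _ in range(max_tries):
                candidate = random.choice([r for r in routers
                                           if r not in circuit])
                if extend(circuit, candidate):   # False means "truncated"
                    circuit.append(candidate)
                    break
            else:
                raise RuntimeError("could not extend past hop %d"
                                   % len(circuit))
        return circuit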

> C) We can do load balancing between twins, based on how full each
> one is at this moment.
> 
> However, load balancing seems like a whole lot of complexity if we want
> to keep much anonymity. We've been putting off spec'ing this, because
> it's really hard. It's not going to get any easier.
> 

On the other hand, we've also taken some recent looks at doing load
balancing for anonymity, not just performance. That work isn't based on
router twins, but we haven't even considered how router twins might fit
in there.
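
For what it's worth, the twin-selection step itself is small. A rough
sketch, assuming each twin advertises an up/down flag and a load figure
(both invented field names here, not anything in the directory format):

    def choose_twin(twins):
        # The previous hop picks among routers sharing the same key:
        # any live twin will do, but prefer the least-loaded one.
        live = [t for t in twins if t["up"]]
        if not live:
            return None                      # whole supernode is down
        return min(live, key=lambda t: t["load"])

The hard part Roger is pointing at is not this selection but deciding
what "load" can safely mean without leaking traffic information.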

> D) Users don't have to rely as much on up-to-date network snapshots
> (directories) to choose a working path, since most supernodes
> (conglomerates of twins) will have at least one representative present
> all the time.
> 
> This is still a nice idea. However, it doesn't seem tremendously hard
> to crank up the directory refresh rate for users so they have a pretty
> accurate picture of the network. The directory servers already keep
> track of who's up right now and publish this in the directory. And if
> most of the nodes aren't part of a supernode, then this reason doesn't
> apply as much. So if this is the only reason left to do router twins,
> maybe it's not good enough.
> 

But on the other hand, this cranked-up rate also gives more accurate
information to those who attack parts of the network to try to gain
traffic information. I'm not sure how much that matters in practice.
Say it again: we need more analysis.
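
As a concrete picture of the cranked-up refresh, here is a client-side
sketch, where fetch() stands in for the real directory download and the
"up" flag is whatever the directory servers publish (again, invented
names, not the actual interface):

    import time

    class DirectoryCache:
        # Re-fetch the directory when the snapshot is older than
        # max_age seconds, and only hand back routers marked as up.
        def __init__(self, fetch, max_age=600):
            self.fetch = fetch
            self.max_age = max_age
            self.snapshot = []
            self.stamp = 0.0

        def usable_routers(self):
            if time.time() - self.stamp > self.max_age:
                self.snapshot = self.fetch()
                self.stamp = time.time()
            return [r for r in self.snapshot if r.get("up")]

Shrinking max_age is exactly the knob that also hands an attacker a
fresher map of the network, which is the concern above.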

> Some other reasons why router twins are bad:
> 
> E) Surprising jurisdiction changes. Imagine Alice choosing an exit
> node in Bulgaria and then finding that it has a twin run by Alice's
> employer. Choosing between router twins is done entirely at the previous
> hop, after all.
> 

Yeah, but this is easily handled by listing the members of a twin in
the directory servers. I can see some reasons for not doing that, and
I don't know whether the pros or cons are preeminent, but this threat
exists anyway: e.g., the Bulgarian site is owned by Alice's employer
(or both by the same parent company), or her employer owns the hardware
that the Bulgarian site leases, etc.  Perhaps more importantly,
attempts to shut down a particular onion router with an established
reputation, whether by legislation in the appropriate jurisdiction or
by rubber hoses, get harder when that only manages to close one piece
of a virtual node rather than the whole thing.  In fact, a moderate
amount of twinning might make the need to check which nodes are up or
down almost nonexistent. Hmmm.
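
A sketch of that directory-listing fix, assuming entries carry an
invented "members" list with an "operator" field per member (nothing
like this exists in the directory format today):

    def acceptable_exits(directory, distrusted_operators):
        # If the directory lists every member of a twin (supernode),
        # Alice can skip any supernode that has a member run by an
        # operator she does not trust as her exit.
        ok = []
        for node in directory:
            members = node.get("members", [node])
            if any(m.get("operator") in distrusted_operators
                   for m in members):
                continue
            ok.append(node)
        return ok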

> F) Path selection is harder. We go through a complex dance right now
> to make sure twins aren't adjacent to each other in the path. To keep
> it sane we choose the entire path before we start building it. We're
> going to have to change to a more dynamic system as we start taking into
> account exit policies, having nodes down (or supernodes down, even if we
> do router twins), etc.  I bet a simpler path selection algorithm would
> be easier to analyze and easier to describe.
> 

I just don't buy this. Path selection shouldn't be affected at all. As
I just noted, you might almost never need to worry about a node being
down, but that's antecedent to path selection, not part of it. Determining
when a (virtual) link is down, or what the network topology is, might be
more complicated, but that too is antecedent to path selection.
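
To be concrete about what F is asking for: if the directory (or the
key itself) identifies which supernode a router belongs to, keeping
twins non-adjacent amounts to one extra filter during selection, which
is why I don't see it driving a redesign of the algorithm. A rough
sketch, with twin_id() as an invented helper mapping a router to its
supernode:

    import random

    def pick_path(routers, path_len, twin_id):
        # Choose a path hop by hop, skipping any candidate that belongs
        # to the same supernode (shares a key) as the previous hop.
        path = []
        while len(path) < path_len:
            candidates = [r for r in routers
                          if r not in path
                          and (not path
                               or twin_id(r) != twin_id(path[-1]))]
            if not candidates:
                raise RuntimeError("not enough distinct supernodes")
            path.append(random.choice(candidates))
        return path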

> G) Router twins threaten anonymity. Having multiple nodes around the
> world, any of which can leak the private key, is bad. Remember "There's no
> security without physical security" and "Two can keep a secret if one's
> dead". Either all the twins are run by just me, in which case it's hard
> for me to physically secure them all (or they're in the same room, and
> not improving availability much), or they're run by different people,
> in which case the private key isn't as private. If the adversary gets
> the supernode's private key, he may be able to redirect you to him by
> DoSing the remaining twins, etc. (or if we do load balancing, simply
> by advertising a low load). Or if he happens to own the previous hop,
> he can just simulate the supernode right there.
> 

We need more analysis, we need more analysis, we need more analysis.
Maybe we will find that restricted routes with more robust virtual
nodes are better against more significant and realistic threats than
cliques with dynamic nodes, for the same percentage of hostile nodes
with the same capabilities.  I just don't think we can say one way or
the other yet. There are all kinds of attacks that may become possible
or be reduced.

Bottom line: I see no reason to kill router twins based on the sort of
anecdotal and high-level arguments we're having here. That said,
I also don't see much reason to develop them further. If it costs little
or nothing to keep the possibility in the code, I say keep it because
there may be a good reason in the future. If we are at a point where
a choice must be made, because we are going to pursue a design path
that is incompatible with twins, then we should have better arguments
one way or the other about which way to go. If there is a maintenance
overhead cost issue, that's another matter. Again, we should then
be pursuing more immediately relevant paths and be prepared to bring
twins up to date should a compelling need arise.

aloha,
Paul