Re: [tor-dev] Improving Private Browsing Mode/Tor Browser



Thus spake Robert Ransom (rransom.8774@xxxxxxxxx):

> On Thu, 23 Jun 2011 10:10:35 -0700
> Mike Perry <mikeperry@xxxxxxxxxx> wrote:
> 
> > Thus spake Georg Koppen (g.koppen@xxxxxxxxx):
> > 
> > > > If you maintain two long sessions within the same Tor Browser Bundle
> > > > instance, you're screwed -- not because the exit nodes might be
> > > > watching you, but because the web sites' logs can be correlated, and
> > > > the *sequence* of exit nodes that your Tor client chose is very likely
> > > > to be unique.
> > 
> > I'm actually not sure I get what Robert meant by this statement. In
> > the absence of linked identifiers, the sequence of exit nodes should
> > not be visible to the adversary. It may be unique, but what allows the
> > adversary to link it to actually track the user? Reducing the
> > linkability that allows the adversary to track this sequence is what
> > the blog post is about...
> 
> By session, I meant a sequence of browsing actions that one web site
> can link.  (For example, a session in which the user is authenticated
> to a web application.)  If the user performs two or more distinct
> sessions within the same TBB instance, the browsing actions within
> those sessions will use very similar sequences of exit nodes.
> 
> The issue is that two different sites can use the sequences of exit
> nodes to link a session on one site with a concurrent session on
> another.

Woah, we're in the hinterlands, tread carefully :).

When performed by websites, this attack assumes a duration of
concurrent use long enough to disambiguate the entire user
population. It also assumes the sessions overlap exactly in time;
otherwise the error rate climbs at an unknown, population-size
dependent rate.
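
To make that caveat concrete, here is a toy sketch (Python, with an
invented per-session log format) of what the correlation could look
like from the websites' side: each site records which exit IP every
request in a session came from, and the colluding pair flags session
pairs whose exit sets overlap heavily inside a shared time window.

    # Toy illustration only: two colluding sites compare the exit IPs
    # seen for each session and flag pairs whose exit sequences overlap
    # heavily. Logs are assumed to look like
    # {session_id: [(timestamp, exit_ip), ...]}.
    from itertools import product

    def exit_set(log, t_start, t_end):
        """Exit IPs a session used inside the window [t_start, t_end]."""
        return {ip for ts, ip in log if t_start <= ts <= t_end}

    def link_sessions(site_a_logs, site_b_logs, t_start, t_end,
                      threshold=0.8):
        """Return session pairs whose exit sets look suspiciously alike."""
        matches = []
        for (sid_a, log_a), (sid_b, log_b) in product(site_a_logs.items(),
                                                      site_b_logs.items()):
            a = exit_set(log_a, t_start, t_end)
            b = exit_set(log_b, t_start, t_end)
            if not a or not b:
                continue
            jaccard = len(a & b) / len(a | b)
            if jaccard >= threshold:
                matches.append((sid_a, sid_b, jaccard))
        return matches

The false-match rate of something like this is exactly what depends
on how many other users happened to pick similar exit sequences in
the same window.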

However, when performed by the exits themselves, this linkability is
a real concern. Let's think about that: it sounds more like our
responsibility than the browser makers'. Now I think I see what Georg
was getting at. We didn't mention it because the blog post was
directed at the browser makers.

I've actually been pondering the exit side of this attack for years,
but for various reasons we've never come to a good conclusion about
which solution to deploy. There are impasses in every direction.

Observe:

Does this mean we want a more automatic version of Proposal 171,
something like what Robert Hogan proposed? Something per IP address
or per top-level domain? That is what I've historically argued for,
but I keep getting told it will consume too many circuits and help
BitTorrent users (though we have recently discovered how to throttle
those motherfuckers, so perhaps we should just do that).
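
For the sake of discussion, here is a minimal sketch of that kind of
automatic isolation. The keying by "base domain" and the circuit cap
are my assumptions for illustration, not anything Proposal 171 or
Robert Hogan's patch actually specifies (Python again):

    # Streams are grouped by an isolation key (here, a naive "base
    # domain" of the destination), and each key gets its own circuit.
    # A cap keeps a BitTorrent-style client from demanding hundreds of
    # circuits at once.
    class IsolationPool:
        def __init__(self, max_circuits=32):
            self.max_circuits = max_circuits
            self.circuit_for_key = {}     # isolation key -> circuit id
            self.next_circuit_id = 0

        def base_domain(self, host):
            # Crude heuristic for illustration; a real implementation
            # would need the public-suffix list.
            parts = host.rsplit('.', 2)
            return '.'.join(parts[-2:]) if len(parts) >= 2 else host

        def circuit_for(self, host):
            key = self.base_domain(host)
            if key not in self.circuit_for_key:
                if len(self.circuit_for_key) >= self.max_circuits:
                    # At the cap: reuse an existing circuit instead of
                    # building yet another one.
                    return min(self.circuit_for_key.values())
                self.circuit_for_key[key] = self.next_circuit_id
                self.next_circuit_id += 1
            return self.circuit_for_key[key]

The cap is the knob that answers the "too many circuits" objection:
past it, new destinations start sharing circuits again instead of
forcing fresh ones.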

Or does it mean that Torbutton should hand a different SOCKS
username+password down to the SOCKS proxy for each tab? This latter
piece turns out to be very hard. SOCKS usernames and passwords are
not supported by the Firefox APIs. But that is actually the easy
part, now that we have control over the source.
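
Assuming the proxy side treats each distinct username+password pair
as its own isolation class (which is the direction Proposal 171
points in), the client side is just plain SOCKS5 with RFC 1929
username/password authentication. Roughly something like the sketch
below; the per-tab credential scheme is purely illustrative:

    # Sketch: put each tab on its own isolation class by sending a
    # per-tab SOCKS5 username/password (RFC 1929) to the proxy.
    import socket

    def socks5_connect(proxy_host, proxy_port, dest_host, dest_port,
                       tab_id):
        username = b"tab-%d" % tab_id   # per-tab isolation credential
        password = b"x"                 # only uniqueness of the pair matters
        s = socket.create_connection((proxy_host, proxy_port))
        # Greeting: offer username/password auth (method 0x02).
        s.sendall(b"\x05\x01\x02")
        if s.recv(2) != b"\x05\x02":
            raise IOError("proxy refused username/password auth")
        # RFC 1929 request: version 1, length-prefixed user and pass.
        s.sendall(b"\x01" + bytes([len(username)]) + username
                          + bytes([len(password)]) + password)
        if s.recv(2)[1:] != b"\x00":
            raise IOError("auth rejected")
        # CONNECT to a hostname (ATYP 0x03) so DNS resolves at the exit.
        s.sendall(b"\x05\x01\x00\x03" + bytes([len(dest_host)])
                  + dest_host.encode() + dest_port.to_bytes(2, "big"))
        reply = s.recv(10)
        if len(reply) < 2 or reply[1] != 0:
            raise IOError("SOCKS connect failed")
        return s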

The harder problem is the FoxyProxy API problem. The APIs to do this
kind of per-tab proxy tracking don't exist, and they don't exist
because of Firefox architectural problems. But maybe there's a bloody
hack to the source that we can do, because we just don't give a damn
about massively violating their architecture to get exactly what we
want in the most expedient way. Maybe.

I still think Tor should just do this, though. Every app should be
made unlinkable by a simple default policy in Tor itself, and we
should just rate limit it if it gets too intense (similar to NEWNYM
rate limiting).
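
The rate limiting could be as dumb as a sliding-window counter on
circuit builds. A rough sketch, with made-up numbers:

    # If isolation would force a fresh circuit, allow it freely up to
    # a budget per interval, then make callers reuse an old circuit,
    # in the same spirit as the existing NEWNYM rate limit.
    import time

    class CircuitRateLimiter:
        def __init__(self, max_new_per_interval=10, interval=10.0):
            self.max_new = max_new_per_interval
            self.interval = interval
            self.recent = []   # timestamps of recent circuit builds

        def allow_new_circuit(self, now=None):
            now = time.time() if now is None else now
            self.recent = [t for t in self.recent
                           if now - t < self.interval]
            if len(self.recent) < self.max_new:
                self.recent.append(now)
                return True
            return False       # caller should reuse an existing circuit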


-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs

Attachment: pgpRicPNb8Kw8.pgp
Description: PGP signature

_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev