
Re: [tor-talk] Spoofing a browser profile to prevent fingerprinting



Mirimir writes:

> > For instance, suppose that you went to site A at 16:00 one day and to
> > site B at 20:00 the following day.  If site A and site B (or people
> > spying on them) can realize that you're actually the same person through
> > browser fingerprinting methods, then if someone has an approximate
> > observation that you were using Tor at both of those times, it becomes
> > much more likely that you are the person in question who was using the
> > two sites.  Whereas if the observations are taken separately (without
> > knowing whether the site A user and the site B user are the same person
> > or not), they could have less confirmatory power.
> 
> That's getting perilously close to traffic confirmation, isn't it?

Yes!  But other kinds of fingerprinting could drastically reduce how
fine-grained your observations need to be in order to do a traffic
confirmation-style attack.  Instead of sub-second packet timings or
complete circuit flow volumes or whatever, it might be enough to ask
"at what approximate times of day, on which days, was this person
using Tor at all?"
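
To make that concrete, here is a toy sketch in Python -- the user names
and counts are entirely invented, just to illustrate the intersection
effect.  If a limited adversary only knows which of its users were on
Tor at all during each window, then linking the two site visits through
a browser fingerprint lets it intersect the two candidate pools:

# Hypothetical coarse observations: which local users were connected
# to Tor at all during each window (all names here are made up).
tor_users_day1_1600 = {"user1", "user2", "user3", "user4", "user5"}
tor_users_day2_2000 = {"user1", "user4", "user6", "user7"}

# Unlinked: each observation on its own leaves a fairly large pool.
print(len(tor_users_day1_1600), "candidates for the site A visit")
print(len(tor_users_day2_2000), "candidates for the site B visit")

# Linked: if a browser fingerprint says the site A visitor and the
# site B visitor are the same person, only users seen in *both*
# windows remain plausible.
both = tor_users_day1_1600 & tor_users_day2_2000
print(len(both), "candidates once the visits are linked:", sorted(both))

Each additional linked session shrinks the intersection further, even
though no single observation comes anywhere near packet-timing
resolution.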

It might be interesting to think about this in terms of a paper like
"Users Get Routed" -- trying to extend that paper's analysis of how
"user behavior" (as its authors put it) affects the risk of attacks,
so that it also covers (1) browser fingerprinting risks in relation to
user behavior, and (2) relatively limited adversaries, including some
who don't have deanonymizing Tor users as a primary goal.

The Harvard bomb threat case, as I understand it, shows a specific
example of deanonymizing a Tor user by an adversary (Harvard's network
administrators) who did retain some data partly in order to reduce network
users' anonymity, but who didn't seem to have had a prior goal of breaking
Tor anonymity in particular.  And the data that they apparently retained
was more coarse-grained than what would be ideal for traffic confirmation
attacks in general.

I don't mean to suggest that the Harvard case involved browser
fingerprinting at all.  I guess I just mean that browser
fingerprinting's relevance
to Tor anonymity might include increasing the information available to
limited network adversaries.

-- 
Seth Schoen  <schoen@xxxxxxx>
Senior Staff Technologist                       https://www.eff.org/
Electronic Frontier Foundation                  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109       +1 415 436 9333 x107