Re: [tor-talk] Tor Weekly News — October 23rd, 2013
On 10/24/2013 12:26 PM, Joe Btfsplk wrote:
> On 10/23/2013 8:04 AM, Lunar wrote:
>> Tor Weekly News October 23rd, 2013
>>
>> “some circuits are going to be compromised, but it’s better to increase
>> your probability of having no compromised circuits at the expense of also
>> _INCREASING THE PROPORTION_ of your circuits that will be compromised if
>> any of them are.”
> I read the paper - slept since then.
>
> Would someone please clarify this general statement & that part of the
> design concept?
>
> The following statement is a bit confusing:
> /"But profiling is, for most users, as bad as being traced all the
> time: they want to do something often without an attacker noticing,
> and the attacker noticing once is as bad as the attacker noticing more
> often."/
>
> How is being "noticed" once, perhaps for 15 seconds while visiting one
> website that yields very little info, as bad as being noticed many
> times over a long period?
>
> Is it that once an adversary correlates your machine (fingerprint) w/
> an originating IP & a Tor entry / exit, they could theoretically ID you?
>
> If so, doesn't that raise the question of why TBB keeps the same
> browser fingerprint from entry to exit?
> Why have (or allow) TBB keep the same fingerprint over long periods,
> even if some of that data is spoofed, rather than have TBB randomly
> change (spoof) the fingerprint, either from end to end on one circuit
> or over time?
>
> One big problem, as I understand it, is that a Tor user (a specific
> browser on a specific machine) is potentially identifiable from entry
> to exit by having the same fingerprint.
> Why not change the fingerprint? Put on a "hat & glasses" or a
> "different colored coat" part way through the circuit? TBB already
> spoofs SOME browser data - it just remains constant. Maybe other
> tracking issues completely overshadow this.
>
> Even if having TBB change fingerprints along a circuit and / or at
> other times doesn't solve all problems, could it be a *part* of
> reducing fingerprinting and / or tracking?
>
It looks like you grossly misunderstand how Tor works. The only node
that can see your browser "fingerprint" is the exit node. The problem
that Entry Guards are meant to solve is laid out in the very first
paragraph of the FAQ you linked:
> Tor (like all current practical low-latency anonymity designs) fails
> when the attacker can see both ends of the communications channel. For
> example, suppose the attacker controls or watches the Tor relay you
> choose to enter the network, and also controls or watches the website
> you visit. In this case, the research community knows no practical
> low-latency design that can reliably stop the attacker from
> correlating volume and timing information on the two sides.
In other words, if I can observe the pattern of traffic coming from your
IP address at a particular time, and simultaneously observe that pattern
at an exit node or website, then I can assume the traffic at the exit
belongs to you. It doesn't matter that there are multiple layers of
encryption along the way -- the attack doesn't look at the contents of
the traffic, just the volume and timing of it. Having Entry Guards
helps, but does not completely solve this problem.
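
To make that concrete, here is a toy sketch in Python of what such a
correlation could look like. The per-second byte counts and the
threshold are entirely made up for illustration; this is not Tor code
or a real attack tool, just a way to see why the encryption along the
path doesn't matter when both ends are watched.

def normalize(series):
    """Zero-mean, unit-variance copy of a list of per-second byte counts."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    std = var ** 0.5 or 1.0
    return [(x - mean) / std for x in series]

def correlation(a, b):
    """Pearson correlation of two equally long traffic-volume series."""
    a, b = normalize(a), normalize(b)
    return sum(x * y for x, y in zip(a, b)) / len(a)

# Bytes per second seen leaving the client's ISP (entry side) ...
entry_side = [0, 4200, 180, 0, 9100, 330, 0, 0, 2700, 60]
# ... and bytes per second seen arriving at the watched website (exit side).
exit_side = [0, 4150, 190, 0, 9050, 310, 0, 0, 2680, 70]

score = correlation(entry_side, exit_side)
print(f"correlation = {score:.3f}")
if score > 0.9:  # arbitrary threshold for this toy example
    print("volume/timing patterns line up: probably the same flow")

An observer who can compute something like this for traffic leaving
your ISP and traffic arriving at a watched site never needs to decrypt
anything; the shape of the traffic is enough.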
As for being "noticed once" -- if the site you are visiting is being
watched by your government, then being noticed just once may be cause
for them to watch you more closely. If you're posting data to
WikiLeaks, having your government notice this could constitute a "very
bad thing". That is just one example.