On 10/23/2013 8:04 AM, Lunar wrote:
Tor Weekly News October 23rd, 2013
"some circuits are going to be compromised, but it's better to
increase your probability of having no compromised circuits at the
expense of also _INCREASING THE PROPORTION_ of your circuits that
will be compromised if any of them are."
I read the paper - slept since then.
Would someone please clarify this general statement & that part of
the design concept?
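Here is how I currently read that trade-off, as a rough sketch (my own
toy model with made-up numbers, not Tor's actual path selection; it
assumes only the entry node matters and that a fraction C of entry
nodes is compromised):

  # Toy model of the entry-guard trade-off quoted above -- not Tor's
  # actual path selection.  Assumptions: only the entry node matters,
  # a fraction C of entry nodes is compromised, each user builds N
  # circuits.
  import random

  C = 0.1        # assumed fraction of compromised entry nodes
  N = 1000       # circuits built per user
  TRIALS = 2000  # simulated users

  def no_guard():
      # A fresh entry for every circuit: nearly every user eventually
      # hits a bad entry, but only about C of their circuits do.
      return sum(random.random() < C for _ in range(N))

  def with_guard():
      # One long-lived entry guard: most users never touch a bad
      # entry, but an unlucky user has every circuit compromised.
      return N if random.random() < C else 0

  for label, build in (("no guard", no_guard), ("guard", with_guard)):
      results = [build() for _ in range(TRIALS)]
      hit = [r for r in results if r > 0]
      p_any = float(len(hit)) / TRIALS
      prop_if_any = sum(hit) / float(len(hit) * N) if hit else 0.0
      print("%-9s P(any circuit compromised)=%.2f  "
            "proportion compromised, given any=%.2f"
            % (label, p_any, prop_if_any))

If I have the model right, that last column is what the quoted
sentence means by increasing the proportion of your circuits that are
compromised if any of them are.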
The statement in https://www.torproject.org/docs/faq#EntryGuards is a
bit confusing.
/"But profiling is, for most users, as bad as being traced all the
time: they want to do something often without an attacker noticing,
and the attacker noticing once is as bad as the attacker noticing more
often."/
How is being "noticed" once, perhaps for 15 seconds while visiting one
website that yields very little info, better than being noticed many
times over a long period?
Is it that once an adversary correlates your machine (fingerprint) w/
an originating IP & a Tor entry / exit, they could theoretically ID
you?
If so, doesn't that raise the question of why TBB keeps the same
browser fingerprint from entry to exit?
Why have TBB keep (or allow it to keep) the same fingerprint over long
periods, even if some of that data is spoofed, rather than have TBB
randomly change (spoof) the fingerprint from end to end on one circuit
and / or over time?
One big problem, as I understand it, is that a Tor user (a specific
browser on a specific machine) is potentially identifiable from entry
to exit by having the same fingerprint.
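To make concrete what I mean by the fingerprint staying constant, a
toy illustration (made-up attribute names, not TBB's real fingerprint
surface or how sites actually fingerprint):

  # Toy illustration only -- made-up attributes, not TBB's real surface.
  # If the visible attributes stay constant, a stable hash of them links
  # visits made over different circuits / exits back to one browser.
  import hashlib

  def fingerprint(attrs):
      blob = "|".join("%s=%s" % (k, v) for k, v in sorted(attrs.items()))
      return hashlib.sha256(blob.encode()).hexdigest()[:16]

  visit_via_exit_1 = {"user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:17.0) ...",
                      "screen": "1000x700", "timezone": "UTC"}
  visit_via_exit_2 = dict(visit_via_exit_1)  # same browser, different circuit

  # Same hash -> the two visits are linkable even though the circuits differ.
  print(fingerprint(visit_via_exit_1) == fingerprint(visit_via_exit_2))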
Why not change the fingerprint? Put on a "hat & glasses" or a
"different colored coat" partway through the circuit? TBB already
spoofs SOME browser data - it just remains constant. Maybe other
tracking issues completely overshadow this.
Even if having TBB change fingerprints along a circuit and / or at
other times doesn't solve all problems, could it be a *part* of
reducing fingerprinting and / or tracking?