Re: [tor-bugs] #18361 [Tor Browser]: Issues with corporate censorship and mass surveillance
#18361: Issues with corporate censorship and mass surveillance
------------------------------------------+--------------------------
Reporter: ioerror | Owner: tbb-team
Type: enhancement | Status: new
Priority: High | Milestone:
Component: Tor Browser | Version:
Severity: Critical | Resolution:
Keywords: security, privacy, anonymity | Actual Points:
Parent ID: | Points:
Sponsor: |
------------------------------------------+--------------------------
Comment (by cypherpunks):
Replying to [comment:5 willscott]:
> Replying to [comment:1 marek]:
> > > There are CDN/DDoS companies on the internet that provide spam
protection for their customers. To do this they use captchas to prove that
the visitor is a human. Some companies provide protection to many
websites, so a visitor from an abusive IP address will need to solve a
captcha on each and every protected domain. Let's assume the CDN/DDoS
company doesn't want to be able to correlate users visiting multiple
domains. Is it possible to prove that a visitor is indeed human, once,
without allowing the CDN/DDoS company to deanonymize / correlate the
traffic across many domains?
> >
> > In other words: is it possible to provide a bit of data (i'm-a-human)
tied to the browsing session while not violating anonymity.
> >
> >
> This sounds very much like something that could be provided through the
use of zero-knowledge proofs. It doesn't seem clear to me that being able
to say "this is an instance of Tor which has already answered a bunch of
captchas" is actually useful. I think the main problem with captchas at
this point is that robots are just about as good at answering them as
humans. Apparently robots are worse than humans at building up tracked
browser histories. That seems like a harder property for a Tor user to
prove.
>
> What sort of data would qualify as an 'i'm a human' bit?

Let's be clear on one point: humans do not request web pages. User-Agents
request web pages. When people talk about "prove you're a human", what
they really mean is "prove that your User-Agent behaves the way we expect
it to".

CloudFlare expects that "good" User-Agents should leave a permanent trail
of history across all sites on the web. Humans who decide they don't want
this property, and who use a User-Agent such as Tor Browser, fall outside
CloudFlare's conception of how User-Agents should behave (a conception
that includes neither privacy nor anonymity), and are punished by
CloudFlare accordingly.

It might be true that there is some elaborate ZKP protocol that would
allow a user to prove to CloudFlare that their User-Agent behaves the way
CloudFlare demands, without revealing the user's browsing history to
CloudFlare and Google. Among other things, this would require CloudFlare
to explicitly and precisely describe both their threat model and their
definition of "good behaviour", which as far as I know they have never
done.
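
For what it's worth, the narrower "one anonymous i'm-a-human bit" asked
about in comment:1 does not even need a full ZKP: a blind signature is
enough. The sketch below (toy RSA parameters, not any deployed protocol;
all names are illustrative) shows the idea: the CDN signs a client-chosen
token without ever seeing it, and the client can later redeem the
unblinded token on any protected domain without the signer being able to
link redemption back to issuance.

```python
# Blind-signature sketch (RSA-based, toy parameters for illustration only).
# The signer (the CDN) issues a signature on a token it never sees; the
# client unblinds it and redeems it later, unlinkably.
import random
from math import gcd

# Toy RSA key for the signer. Real deployments need >= 2048-bit moduli.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def blind(token: int):
    """Client: hide the token under a random blinding factor r."""
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (token * pow(r, e, n)) % n, r

def sign(blinded: int) -> int:
    """Signer: signs the blinded value without learning the token."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Client: strip the blinding factor, leaving a valid signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    """Any party: check the signature against the public key alone."""
    return pow(sig, e, n) == token % n

token = 42                           # the client's "i'm-a-human" token
blinded, r = blind(token)
sig = unblind(sign(blinded), r)
assert verify(token, sig)            # valid, yet the signer never saw `token`
```

Solving one captcha earns one signed token; spending it on another domain
reveals nothing the signer can correlate, because the blinding factor r is
random and never disclosed. This is essentially the direction later
standardized as "Privacy Pass", but nothing here depends on that work.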

However, it is not the Tor Project's job to perform free labour for a
censor. If CloudFlare is actually interested in solving the problem, then
perhaps the work should be paid for by the $100MM company that created the
problem, not done for free by the nonprofit and community trying to help
the people who suffer from it.
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/18361#comment:29>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
_______________________________________________
tor-bugs mailing list
tor-bugs@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-bugs