
Re: [tor-bugs] #18361 [Tor Browser]: Issues with corporate censorship and mass surveillance



#18361: Issues with corporate censorship and mass surveillance
------------------------------------------+--------------------------
 Reporter:  ioerror                       |          Owner:  tbb-team
     Type:  enhancement                   |         Status:  new
 Priority:  High                          |      Milestone:
Component:  Tor Browser                   |        Version:
 Severity:  Critical                      |     Resolution:
 Keywords:  security, privacy, anonymity  |  Actual Points:
Parent ID:                                |         Points:
  Sponsor:                                |
------------------------------------------+--------------------------

Comment (by mmarco):

 Hello everybody.

 DISCLAIMER: I am by no means an expert in networks, computer science, or
 any other technical aspect related to the subject here, so it is likely
 that what I am going to propose makes no sense at all (if that is the
 case, just ignore it). I am just a Tor user who is particularly annoyed
 by CF captchas, since I do a big part of my browsing through Orfox, and
 captchas are broken there. The only reason I have decided to share my
 thoughts here is that ioerror publicly encouraged us to do so on
 Twitter.

 My (probably naive) proposal is the following:

 From my perspective, the problem here is that Tor, by design, makes it
 hard to distinguish a legitimate user from an abusive one (be it human
 or robot). CF's job is precisely to distinguish between those two kinds
 of users, so we have an incompatibility problem here. More precisely,
 the problem is a lack of granularity: CF just sees one IP (the exit
 node) used by many users, both legitimate and abusive.
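 For illustration, the first step of any such scheme, telling that a
 request arrives via Tor at all, is already easy: the site checks the
 client IP against the published exit list. A minimal Python sketch (the
 addresses below are placeholders; a real site would periodically fetch
 the list from check.torproject.org):

```python
# Minimal sketch of Tor exit-node detection. The addresses are
# placeholders for illustration; a real deployment would periodically
# fetch https://check.torproject.org/torbulkexitlist and refresh the set.
EXIT_NODES = {"203.0.113.5", "198.51.100.7"}  # hypothetical exit IPs

def is_tor_exit(client_ip: str) -> bool:
    """Return True if the request appears to arrive through a Tor exit."""
    return client_ip in EXIT_NODES
```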

 So my proposal goes in the direction of adding more granularity, that
 is, distinguishing between those different users. It would be something
 like this:

 - When the website receives a request from a Tor exit node, it creates
 an ephemeral .onion service (or gets one from a pool of pre-created
 ones) and answers with a 301 redirect to that .onion service (maybe
 with a delay to give time for the corresponding circuits to be
 established).

 - Those ephemeral .onion services are killed when no session is running
 on them anymore, or when abusive behaviour is detected through them.

 - The connections through those .onion services can now be treated
 separately, allowing legitimate users to be told apart from abusive
 ones.
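
 The three bullets above could be sketched as server-side logic roughly
 like this (the exit list, onion pool, and addresses are all
 hypothetical; actually creating the ephemeral services would go through
 the Tor control port's ADD_ONION command, e.g. via stem's
 create_ephemeral_hidden_service):

```python
from collections import deque
from typing import Dict, Tuple

# Hypothetical data: known Tor exit IPs and a pool of pre-created
# ephemeral .onion services. In a real deployment the services would be
# created through the Tor control port (ADD_ONION) and destroyed once
# their sessions end or abuse is detected through them.
EXIT_NODES = {"203.0.113.5"}                # placeholder exit IP
ONION_POOL = deque(["aaaa0example.onion",   # placeholder .onion
                    "bbbb1example.onion"])  # addresses

def handle_request(client_ip: str, path: str) -> Tuple[int, Dict[str, str]]:
    """Return (status, headers) for an incoming HTTP request.

    A request arriving through a known Tor exit gets a 301 redirect to a
    dedicated ephemeral .onion service; other clients are served
    normally (sketched here as a plain 200).
    """
    if client_ip in EXIT_NODES and ONION_POOL:
        onion = ONION_POOL.popleft()  # dedicate one service to this visitor
        return 301, {"Location": "http://%s%s" % (onion, path)}
    return 200, {}
```

 Because each redirected visitor ends up on their own .onion address,
 abusive traffic can later be isolated by killing just that one service,
 without touching anyone else.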


 I don't know if this solution is viable (maybe the overhead of creating
 the ephemeral .onion services is too high), but if it is, I think it
 would be an improvement over the current situation.

 From the user's viewpoint, there is a delay when accessing the website
 for the first time, but that sounds better than the captcha hell. After
 that, browsing is slower, but that is the usual price you pay for using
 Tor .onion services.

 From the website's viewpoint, each connection has an initial delay,
 which might already discourage abusers. If an abuser wants to reuse the
 same .onion connection, you have a direct handle on it (you can then
 push the captcha hell, or even directly kill the connection).

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/18361#comment:60>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
_______________________________________________
tor-bugs mailing list
tor-bugs@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-bugs