On 03/03/15 16:54, Tariq Elahi wrote:
> What I am getting at here is that we ought to figure out properties of
> CRSs that all CRSs should have based on some fundamentals/theories
> rather than what happens to be the censorship landscape today. The
> future holds many challenges and changes and getting ahead of the game
> will come from CRS designs that are resilient to change and do not
> make strong assumptions about the operating environment.

Responding to just one of many good points: I think your insight is the
same one that motivated the creation of pluggable transports. That is,
we need censorship resistance systems that are resilient to changes in
the operating environment, and one way to achieve that is to separate
the core of the CRS from the parts that are exposed to the environment.
Then we can replace the outer parts quickly in response to new
censorship tactics, without replacing the core. (There's a rough sketch
of what I mean in the postscript below.)

In my view this is a reasonable strategy because there's very little we
can say about censorship tactics in general, as those tactics are
devised by intelligent people observing and responding to our own
tactics. If we draw a line around certain tactics and say, "This is
what censors do", the censor is free to move outside that line. We've
seen that happen time and time again with filtering, throttling, denial
of service attacks, active probing, internet blackouts, and the
promotion of domestic alternatives to blocked services. Censors are too
clever to be captured by a fixed definition. The best we can do is to
make strategic choices, such as protocol agility, that enable us to
respond quickly and flexibly to the censor's moves.

Is it alright to use a tactic that may fail, perhaps suddenly, perhaps
silently, perhaps for some users but not others? I think it depends on
the censor's goals and the nature of the failure.

If the censor just wants to deny access to the CRS and the failure
results in some users losing access, then yes, it's alright - nobody's
worse off than they would've been without the tactic, and some people
are better off for a while.

If the censor wants to identify users of the CRS, perhaps to monitor or
persecute them, and the failure exposes the identities of some users,
it's harder to say whether using the tactic is alright. Who's
responsible for weighing the potential benefit of access against the
potential cost of exposure? It's tempting to say that developers have a
responsibility to protect users from any risk - but I've been told that
activists don't want developers to manage risks on their behalf; they
want developers to give them enough information to manage their own
risks. Is that true of all users? If not, perhaps the only responsible
course of action is to disable risky features by default and give any
users who want to manage their own risks enough information to decide
whether to override the defaults.

Cheers,
Michael
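
P.S. To make the core/transport separation concrete, here's a rough
sketch in Go of the kind of interface boundary I have in mind. The
names (Transport, core, plainTCP) are made up for illustration - this
is not Tor's actual pluggable transport API, just the general shape of
the idea.

    // Hypothetical sketch: separating a CRS core from replaceable
    // transports. Names are illustrative, not a real API.
    package main

    import (
        "fmt"
        "net"
    )

    // Transport is the only surface the censor ever observes.
    // Implementations can be swapped without touching the core.
    type Transport interface {
        Name() string
        // Dial establishes a connection wrapped in whatever
        // obfuscation layer the transport provides.
        Dial(addr string) (net.Conn, error)
    }

    // plainTCP is a stand-in transport; a real one (obfs4, say)
    // would obfuscate traffic to resist fingerprinting.
    type plainTCP struct{}

    func (plainTCP) Name() string { return "plain-tcp" }

    func (plainTCP) Dial(addr string) (net.Conn, error) {
        return net.Dial("tcp", addr)
    }

    // core speaks the CRS protocol over whatever transport it is
    // handed; it never needs to change when censorship tactics do.
    func core(t Transport, addr string) error {
        conn, err := t.Dial(addr)
        if err != nil {
            return fmt.Errorf("transport %s failed: %w", t.Name(), err)
        }
        defer conn.Close()
        // ... run the censorship-resistant protocol over conn ...
        return nil
    }

    func main() {
        // Responding to a new censorship tactic is a one-line
        // change here: hand core a different Transport.
        if err := core(plainTCP{}, "example.com:443"); err != nil {
            fmt.Println(err)
        }
    }

The point is that the censor only ever interacts with the Transport
implementation, so when a tactic stops working we can ship a new
transport quickly without revisiting the core protocol.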