A simple approach is this: suppose the adversary captures all BitTorrent traffic crossing national borders over an 8-hour window. It then performs TCP stream reconstruction, reassembles the BitTorrent message exchange for each capture, fetches the corresponding torrent file, and verifies the piece hashes. A large number of hash failures means the stream is bit-smuggler, so every PT server and client active during that window would be caught (with a delay). By looking at the IPs of those broken BitTorrent streams, the adversary can then identify the IP of the bridge, since many client IPs connect to one particular IP, which acts like a sink. Having identified the bridge, it can either watch it passively to see who connects to it, or simply block it.
If anything above is inaccurate, please let me know; that is my current understanding of the discussion.
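To make the detection step above concrete, here is a hedged sketch in Python of the per-stream hash verification. It assumes the censor has already reconstructed the TCP streams, reassembled the BitTorrent piece messages, and fetched the matching .torrent file; the function names and the 10% threshold are illustrative assumptions, not measured values.

```python
import hashlib

def hash_failure_rate(pieces_field: bytes, observed_pieces: dict) -> float:
    """pieces_field: concatenated 20-byte SHA-1 digests from the torrent's
    info dict; observed_pieces: {piece_index: reassembled piece bytes}."""
    expected = [pieces_field[i:i + 20] for i in range(0, len(pieces_field), 20)]
    failures = 0
    for index, data in observed_pieces.items():
        if hashlib.sha1(data).digest() != expected[index]:
            failures += 1
    return failures / max(len(observed_pieces), 1)

def looks_like_bitsmuggler(pieces_field, observed_pieces, threshold=0.1):
    # A genuine BitTorrent transfer verifies almost every piece; a stream
    # whose payload has been substituted with ciphertext fails most of them.
    return hash_failure_rate(pieces_field, observed_pieces) > threshold
```

The point is that the check is cheap and entirely offline: nothing here has to keep up with line rate, which is why a delay of hours does not help the transport.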
## Trade-offs and use cases
At this point I believe that bit-smuggler can be made to work in situations where the user needs to penetrate a censorship firewall without being cut off in real time, get good upstream and downstream throughput, and have data confidentiality. In support of this come properties such as its high traffic volume, which makes it harder to monitor.
However, it is very likely that, with enough investment of resources, a censor can build a system for delayed, non-real-time analysis that determines which connections were bit-smuggler and which were not. There are also strong reasons to believe that, even though the payload is encrypted and looks like random data, a high occurrence of detected hash failures is enough to break plausible deniability (i.e., to argue in court that the user used bit-smuggler).
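The follow-up "sink" step from the scenario above can be sketched just as simply: once individual streams are flagged, grouping them by destination IP exposes the bridge, because many distinct clients converge on it. The flow representation and the client-count cutoff below are illustrative assumptions.

```python
from collections import defaultdict

def candidate_bridges(flagged_flows, min_clients=5):
    """flagged_flows: iterable of (client_ip, server_ip) pairs for streams
    whose piece hashes failed verification."""
    clients_per_server = defaultdict(set)
    for client_ip, server_ip in flagged_flows:
        clients_per_server[server_ip].add(client_ip)
    # A bridge shows up as one server IP that many distinct clients
    # converge on; the censor can then watch or block that address.
    return {server: clients
            for server, clients in clients_per_server.items()
            if len(clients) >= min_clients}
```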
On 03/03/15 16:54, Tariq Elahi wrote:
> What I am getting at here is that we ought to figure out properties of
> CRSs that all CRSs should have based on some fundamentals/theories
> rather than what happens to be the censorship landscape today. The
> future holds many challenges and changes and getting ahead of the game
> will come from CRS designs that are resilient to change and do not
> make strong assumptions about the operating environment.
Responding to just one of many good points: I think your insight is the
same one that motivated the creation of pluggable transports. That is,
we need censorship resistance systems that are resilient to changes in
the operating environment, and one way to achieve that is to separate
the core of the CRS from the parts that are exposed to the environment.
Then we can replace the outer parts quickly in response to new
censorship tactics, without replacing the core.
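To make the "replaceable outer layer" concrete: in Tor's case, swapping the exposed transport is a client configuration change, while the core stays untouched. A minimal torrc sketch follows, using obfs4 purely as an example transport; the address, fingerprint, and cert value are placeholders, not real bridge data.

```
# Only the "outer" transport layer is named here; the Tor core is unchanged.
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 192.0.2.10:443 <FINGERPRINT> cert=<CERT> iat-mode=0
```

Replacing obfs4 with another transport means changing these lines and shipping a different plugin binary, not redesigning the system.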
In my view this is a reasonable strategy because there's very little we
can say about censorship tactics in general, as those tactics are
devised by intelligent people observing and responding to our own
tactics. If we draw a line around certain tactics and say, "This is what
censors do", the censor is free to move outside that line. We've seen
that happen time and time again with filtering, throttling, denial of
service attacks, active probing, internet blackouts, and the promotion
of domestic alternatives to blocked services. Censors are too clever to
be captured by a fixed definition. The best we can do is to make
strategic choices, such as protocol agility, that enable us to respond
quickly and flexibly to the censor's moves.
Is it alright to use a tactic that may fail, perhaps suddenly, perhaps
silently, perhaps for some users but not others? I think it depends on
the censor's goals and the nature of the failure. If the censor just
wants to deny access to the CRS and the failure results in some users
losing access, then yes, it's alright - nobody's worse off than they
would've been without the tactic, and some people are better off for a
while.
If the censor wants to identify users of the CRS, perhaps to monitor or
persecute them, and the failure exposes the identities of some users,
it's harder to say whether using the tactic is alright. Who's
responsible for weighing the potential benefit of access against the
potential cost of exposure? It's tempting to say that developers have a
responsibility to protect users from any risk - but I've been told that
activists don't want developers to manage risks on their behalf; they
want developers to give them enough information to manage their own
risks. Is that true of all users? If not, perhaps the only responsible
course of action is to disable risky features by default and give any
users who want to manage their own risks enough information to decide
whether to override the defaults.
Cheers,
Michael
_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev