
Re: [tor-dev] bittorrent based pluggable transport



I agree with Michael's idea of core parts vs. replaceable parts (such as the type of cover traffic), since I feel much of censorship circumvention still depends on what the landscape looks like, and there isn't a clear-cut, theory-based solution to the problem (in the way you can argue, for example, that a certain end-to-end encryption protocol is correct: you can do proper formal reasoning about that).

What I feel is that at this point we lack a solid way of evaluating how good a pluggable transport is.

I would like to thank everyone for the feedback and summarize the ideas I've gathered.

## Goals

My goal at this point with bit-smuggler is to figure out the next steps for making it valuable.

* Does it have the potential to be used as a Tor PT, by incorporating ideas to make it better? If so, I would gladly continue work on it.

* Or are there intrinsic limitations of bittorrent as cover traffic that make it unsuitable for the security standards of a Tor PT? In that case, maybe it can have a different use case (penetrating a censorship firewall without getting caught in real time, but with an acceptable risk of being detected later through delayed analysis).


In the latter case, it would still be useful to document my work for future reference, since I use some techniques that may be reused or avoided when working on other PTs, depending on whether they prove to be good or bad (e.g. attempting to tamper with traffic generated by a real-world implementation of the protocol through proxying).

## Discussion summary

David thinks it is reasonable to assume bittorrent won't be blocked by the censor, and raises some important questions about how bit-smuggler may create network traffic patterns that are unusual and therefore fingerprintable. I made a list of the ones I can think of in a previous message; it's up for discussion which of them may compromise a bit-smuggler connection in real time and therefore need to be mitigated, and which won't and are acceptable.

Michael supports an approach where we adapt to the censor landscape: some core concepts/designs stay the same across all PTs, while the replaceable parts adapt to circumstances. His argument builds on Tariq's message, which states the need for PTs that don't just work "sometimes"; Michael argues that Tariq's points are the ideas that got the PT project started in the first place.

Leeroy stresses that the following aspects are problematic:

* bit-smuggler breaks the bittorrent spec: in the message exchange between the PT server and client, the data being exchanged doesn't match the checksums stated in the .torrent file

* because bittorrent has no extra layer of encryption, bit-smuggler has to rely on steganography, which is harder to get right (as opposed to meek, where everything happens under the cover of an HTTPS connection)

* plausible deniability is compromised: if a user's bittorrent traffic is captured, reconstructed, and found to have many checksum failures, it can be argued that the user was running bit-smuggler
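
To make the checksum problem concrete, here is a minimal sketch of how a BitTorrent implementation verifies a received piece against the `pieces` field of the .torrent metainfo (a flat string of 20-byte SHA-1 digests, one per piece). Any piece whose payload has been replaced by covert data will fail this check. The function name and the one-piece example are my own for illustration, not code from bit-smuggler.

```python
import hashlib

def verify_piece(piece_data: bytes, index: int, pieces_field: bytes) -> bool:
    """Check a received piece against the SHA-1 digests in the .torrent
    metainfo. 'pieces_field' is the raw 'pieces' value: a concatenation
    of 20-byte SHA-1 hashes, one per piece, in piece order."""
    expected = pieces_field[index * 20:(index + 1) * 20]
    return hashlib.sha1(piece_data).digest() == expected

# A genuine piece verifies; a piece whose payload was replaced with
# covert ciphertext (simulated here) does not.
piece = b"real torrent payload"
pieces_field = hashlib.sha1(piece).digest()  # metainfo for a 1-piece torrent
print(verify_piece(piece, 0, pieces_field))                    # True
print(verify_piece(b"covert ciphertext...", 0, pieces_field))  # False
```

This is exactly the check a censor (or any honest peer) can rerun offline, which is why the mismatches are observable in a capture.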

I am not sure I completely understand Leeroy's strategy for breaking undetectability, but here's a non-real-time one that can work.

A simple approach is this: suppose the adversary does a packet capture of all bittorrent traffic crossing national borders over an interval of 8 hours. It then performs TCP reconstruction, rebuilds the BitTorrent message exchange for all those captures, fetches the corresponding .torrent files, computes the hashes, and sees a large number of hash failures: that's bit-smuggler. So all PT servers and clients active during that interval would be caught (with a delay). By looking at the IPs of those broken bittorrent streams, the adversary can then find the IP of the bridge (many IPs connect to one particular IP, so it acts like a sink). It can then either passively watch the bridge's activity, now that it has been identified, and see who connects to it, or just go ahead and block it.

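The delayed analysis above can be sketched as a simple post-processing step over reconstructed streams. This is my own toy illustration of the idea, not a real censor's system: the failure-ratio threshold, the minimum piece count, and the data shapes are all invented parameters.

```python
from collections import Counter

def flag_suspect_streams(streams, fail_ratio_threshold=0.5, min_pieces=20):
    """streams: mapping of (src_ip, dst_ip) -> (pieces_ok, pieces_failed),
    obtained from TCP-reconstructed bittorrent sessions.

    Flag flows whose hash-failure ratio is implausibly high for an honest
    client, then count how many distinct sources converge on each flagged
    destination: a destination with many flagged sources is a likely
    bridge (the "sink" in the paragraph above)."""
    suspects = []
    for (src, dst), (ok, failed) in streams.items():
        total = ok + failed
        if total >= min_pieces and failed / total >= fail_ratio_threshold:
            suspects.append((src, dst))
    sinks = Counter(dst for _, dst in suspects)
    return suspects, sinks

# Toy data: two clients talking to one bridge, one honest torrent flow.
streams = {
    ("1.1.1.1", "9.9.9.9"): (2, 48),    # almost everything fails: suspect
    ("2.2.2.2", "9.9.9.9"): (5, 45),    # suspect, same destination
    ("3.3.3.3", "8.8.8.8"): (100, 0),   # honest swarm traffic
}
suspects, sinks = flag_suspect_streams(streams)
print(sinks.most_common(1))  # the likely bridge IP and its client count
```

The point of the sketch is that none of this needs to run in real time; an 8-hour capture analyzed the next day is enough to enumerate bridge and client IPs retroactively.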

If anything above is inaccurate, please let me know, that is my current understanding of the discussion.

## Trade-offs and use cases

At this point I believe bit-smuggler can be made to work in situations where the user needs to penetrate a censorship firewall without being cut off in real time, get good upstream and downstream throughput, and have data confidentiality. In support of this come the properties of high-volume traffic (harder to monitor).

However, it's very likely that, given enough investment of resources, a censor can devise a system for delayed, non-real-time analysis that detects which connections were bit-smuggler and which were not. There are strong reasons to believe that, even though the data is encrypted and looks random, a high occurrence of detected hash failures is enough to break plausible deniability (i.e. to argue in court that the user was running bit-smuggler).

I believe there are situations where this is an acceptable trade-off, e.g. an adversary that stops at cutting VPN connections but doesn't pursue VPN users any further. If other PTs with better properties are unusable in some situation (e.g. their cover protocol is blocked, or look-like-nothing protocols fail because of protocol whitelisting), bit-smuggler can be a fallback solution with this trade-off.

I would like to hear your thoughts on the potential use cases and further steps, and please let me know what is unclear so I can explain.

Thank you!
Dan

On Sat, Mar 7, 2015 at 3:56 AM, Michael Rogers <michael@xxxxxxxxxxxxxxxx> wrote:
On 03/03/15 16:54, Tariq Elahi wrote:
> What I am getting at here is that we ought to figure out properties of
> CRSs that all CRSs should have based on some fundamentals/theories
> rather than what happens to be the censorship landscape today. The
> future holds many challenges and changes and getting ahead of the game
> will come from CRS designs that are resilient to change and do not
> make strong assumptions about the operating environment.

Responding to just one of many good points: I think your insight is the
same one that motivated the creation of pluggable transports. That is,
we need censorship resistance systems that are resilient to changes in
the operating environment, and one way to achieve that is to separate
the core of the CRS from the parts that are exposed to the environment.
Then we can replace the outer parts quickly in response to new
censorship tactics, without replacing the core.

In my view this is a reasonable strategy because there's very little we
can say about censorship tactics in general, as those tactics are
devised by intelligent people observing and responding to our own
tactics. If we draw a line around certain tactics and say, "This is what
censors do", the censor is free to move outside that line. We've seen
that happen time and time again with filtering, throttling, denial of
service attacks, active probing, internet blackouts, and the promotion
of domestic alternatives to blocked services. Censors are too clever to
be captured by a fixed definition. The best we can do is to make
strategic choices, such as protocol agility, that enable us to respond
quickly and flexibly to the censor's moves.

Is it alright to use a tactic that may fail, perhaps suddenly, perhaps
silently, perhaps for some users but not others? I think it depends on
the censor's goals and the nature of the failure. If the censor just
wants to deny access to the CRS and the failure results in some users
losing access, then yes, it's alright - nobody's worse off than they
would've been without the tactic, and some people are better off for a
while.

If the censor wants to identify users of the CRS, perhaps to monitor or
persecute them, and the failure exposes the identities of some users,
it's harder to say whether using the tactic is alright. Who's
responsible for weighing the potential benefit of access against the
potential cost of exposure? It's tempting to say that developers have a
responsibility to protect users from any risk - but I've been told that
activists don't want developers to manage risks on their behalf; they
want developers to give them enough information to manage their own
risks. Is that true of all users? If not, perhaps the only responsible
course of action is to disable risky features by default and give any
users who want to manage their own risks enough information to decide
whether to override the defaults.

Cheers,
Michael


_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev

