Thus spake Gregory Maxwell (gmaxwell@xxxxxxxxx):

> On Sun, Aug 29, 2010 at 3:54 AM, Mike Perry <mikeperry@xxxxxxxxxx> wrote:
> [snip]
> > Any classifier needs enough bits to differentiate between two
> > potentially coincident events. This is also why Tor's fixed packet
> > size performs better against known fingerprinting attacks. Because
> > we've truncated the lower 8 bits off of all signatures that use size
> > as a feature in their fingerprint classifiers. They need to work to
> > find other sources of bits.
>
> If this is so -- that people are trying to attack tor with size
> fingerprinting but failing because of the size quantization, and then
> failing to publish because they got a non-result -- then it is
> something which very much needs to be made public.

According to the research groups Roger has talked to, yes, this is the
case.

> Not only might future versions of tor make different design decisions
> with respect to cell size, other privacy applications would benefit
> from even a no-result in this area.

The problem, though, is that it's hard to publish a no-result unless it
is a pretty surprising no-result, or at least a quantifiable no-result.

It's not terribly surprising that existing fingerprinting techniques do
not work well "out of the box" against Tor, because a lot less
information is available during a Tor session, and there is a lot more
noise (due to more than just the 512-byte cell size).

If someone actually worked hard, took all these things into account,
and still had a result that said "fingerprinting on Tor does not
usually work unless you have fewer than X targets and/or event rates
below Y", it would still probably belong more in a tech report than a
full academic paper, unless it also came with information-theoretic
proofs that showed exactly why their implementation got the results it
did.

-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs
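
[A minimal illustrative sketch, not from the original thread: it shows how padding traffic to a fixed cell size collapses the size feature a fingerprinting classifier would otherwise use. Only the 512-byte cell size is taken from the discussion above; the example sizes and the entropy comparison are assumptions made up for illustration.]

import math
from collections import Counter

CELL_SIZE = 512  # Tor's fixed relay cell size, as discussed in the thread

def pad_to_cells(nbytes: int) -> int:
    """Round a payload size up to a whole number of fixed-size cells."""
    return math.ceil(nbytes / CELL_SIZE) * CELL_SIZE

def entropy_bits(values) -> float:
    """Shannon entropy of the empirical size distribution, in bits."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical observed transfer sizes for a handful of pages.
raw_sizes = [1337, 1400, 1490, 2048, 2100, 3333, 4096, 4100]
padded = [pad_to_cells(s) for s in raw_sizes]

print(padded)                   # several distinct sizes collapse onto the same cell count
print(entropy_bits(raw_sizes))  # bits available to a size-based classifier before padding
print(entropy_bits(padded))     # fewer bits survive after quantization

[With these made-up numbers, three of the eight sizes land in the same 3-cell bucket, so the classifier loses distinguishing bits; the quoted point is that it must then find other sources of bits.]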