On Sun, May 16, 2010 at 9:26 PM, John M. Schanck
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160
On Sat, May 15, 2010 at 11:58:44PM -0400, Roger Dingledine wrote:
> On Sat, May 15, 2010 at 06:37:54PM -0700, Damian Johnson wrote:
> > Hmmm... so we aren't interested in having a clearer definition of what makes
> > up a bad exit? From the following I thought this is something we were
> > interested in John looking into:
> >
> > "On the bright side though, it's looking good that we'll be able to get a
> > google summer of code student to revive Mike Perry's "Snakes on a Tor"
> > project, and hopefully that means we will a) have some automated scans
> > looking for really obviously broken relays, and *b) build a clearer policy
> > about what counts as badexit and what doesn't, so we can react faster
> > next time.*" [0]
>
> Good point. I didn't mean to discourage him from working on more than
> one direction at once. I suspect that working on good clean tests
> that don't produce false positives, and setting up the infrastructure
> to automatically launch them and gather results, is something that can
> actually be clearly accomplished and finished; whereas trying to sort out
> the right balance between "subtly not working right" and "still worth
> letting ordinary users exit from" is a rat-hole that may well lead to
> madness plus no useful results. In short, it sounds like both are worth
> pursuing in parallel. :)
I definitely think both can be pursued in parallel. I've set up a blog
for documenting my progress at http://anomos.info/~john/gsoc; the most
recent post (5/17/2010) covers the goals of SoaT and the definition
of BadExit. One thing I would really like help with is compiling a
list of the reasons for which nodes have been given the BadExit flag.
I've collected information on seven cases where the BadExit flag was
given, or suggested, but I'm sure there are others.
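As a starting point for that list, here's a minimal sketch of pulling
the currently flagged relays out of a network-status consensus. Per
dir-spec, each relay's "r" line carries its nickname and the "s" line
that follows lists its flags; the filename "cached-consensus" is just
the usual name in a client's data directory, adjust as needed.

```python
# Sketch: list relays carrying the BadExit flag in a consensus document.
# In the consensus, a relay's "r" line gives its nickname (second field)
# and the subsequent "s" line lists its flags.
def badexit_relays(consensus_text):
    bad = []
    current = None
    for line in consensus_text.splitlines():
        if line.startswith("r "):
            current = line.split()[1]  # nickname field of the "r" line
        elif line.startswith("s ") and current is not None:
            if "BadExit" in line.split()[1:]:
                bad.append(current)
            current = None
    return bad

# e.g. badexit_relays(open("cached-consensus").read())
```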
> > This strikes me as something very easy he could do to both:
> > 1. start integrating with the community more (all the gsoc students have
> > been very quiet so far, hence I'm trying to encourage him to spark a
> > discussion on or-talk)
Had to finish up finals ;-)
I'm starting to build a list of the attacks SoaT should defend against;
I'm confident that we can be completely public about what those attacks
are, as well as what our defenses are - although I'd like Mike's input
on that. The secret config file you mention is less about obscuring which
tests are being run, and more about not publishing the fingerprintable
characteristics of the scanner (rates at which certain operations occur, etc).
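To make the timing concern concrete, here's a hypothetical sketch of
drawing scan delays from private config values rather than hardcoding
them; the names (ScanConfig, mean_delay, jitter) are illustrative,
not SoaT's actual config keys.

```python
import random

class ScanConfig:
    """Timing parameters kept in the private config, not the public repo."""
    def __init__(self, mean_delay=30.0, jitter=10.0):
        self.mean_delay = mean_delay  # mean seconds between operations
        self.jitter = jitter          # spread around the mean

def next_delay(cfg, rng=random.random):
    # Uniform jitter around the mean; a fixed interval would be an easy
    # fingerprint for a malicious exit to match requests against.
    return cfg.mean_delay + (rng() * 2 - 1) * cfg.jitter
```

The point is only that an exit watching traffic can't recover the
scanner's schedule from the published code alone.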
> Once we have that list, it would be a good time to solicit opinions on
> what's missing from it or whether it's doing tests in a suboptimal way.
Agreed. There's a partial list on the blog now; I'll take comments there
for a few days and then bring the discussion back here to get an idea of
the relative importance of each attack and strategies for conducting the
tests.
Cheers!
John
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
iEYEAREDAAYFAkvwxWkACgkQke2DTaHTnQkZ0gCeOTmal1sGHpnA/oYZBRF3kVUo
ghQAniwE/y5O1WeA01Uk54Nkkjj99ZOE
=mjBs
-----END PGP SIGNATURE-----