Re: following on from today's discussion
Thus spake Matej Kovacic (matej.kovacic@xxxxxxxxx):
> I was thinking about a solution to prevent traffic injection on
> non-encrypted public websites. What about having TWO connections open and
> doing some kind of check that the content is the same (maybe access the
> content from two different locations and do an MD5 check)? I know the
> idea is hard to implement, since a website can serve different content for
> each location or change every second, and this would also mean double load
> on the Tor network. But maybe someone will develop my idea into a usable
> form... If not, feel free to drop it.
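The quoted idea boils down to fetching the same URL over two independent paths and comparing digests. A minimal sketch of just that comparison step, in Python; the byte strings here are hypothetical stand-ins for real downloads made through two separate Tor circuits:

```python
import hashlib

def digests_match(body_a: bytes, body_b: bytes) -> bool:
    """Compare two fetches of the same URL by MD5 digest.

    body_a and body_b would come from two separate connections
    (e.g. two Tor circuits exiting in different locations).
    """
    return hashlib.md5(body_a).hexdigest() == hashlib.md5(body_b).hexdigest()

# Hypothetical example: identical content passes, injected content fails.
clean = b"<html>hello</html>"
tampered = b"<html>hello<script>evil()</script></html>"
print(digests_match(clean, clean))     # True
print(digests_match(clean, tampered))  # False
```

As the quote notes, this breaks down for pages that legitimately vary by location or time, which is why it only makes sense for static content.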
So what about a stochastic solution instead:
1. Create some listing of exe files, commonly vulnerable doc formats,
and SSL sites that changes periodically, possibly scraped off Google.
2. Use some perl glue to go through the Tor node list and try each exit
to make sure it isn't modifying this data.
a. Certs can be checked byte by byte to make sure they don't differ
across exit nodes.
b. Images, doc files, ppt files, and exes can be verified by fetching
them through multiple exits and comparing checksums.
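Step 2a above is just a byte-for-byte comparison of the cert blob each exit hands back; since the same site should serve the same cert regardless of which exit fetched it, any minority copy marks a suspect exit. A hedged sketch of that bookkeeping (the exit fingerprints and cert bytes are made-up placeholders; a real version would retrieve them via tsocks and 'openssl s_client' through each exit):

```python
import hashlib
from collections import Counter

def suspect_exits(certs_by_exit: dict) -> list:
    """Return exits whose cert differs from the majority copy.

    certs_by_exit maps an exit fingerprint to the raw cert bytes
    retrieved through that exit. A minority digest flags an exit
    that may be rewriting SSL.
    """
    digests = {fp: hashlib.md5(cert).hexdigest()
               for fp, cert in certs_by_exit.items()}
    majority, _count = Counter(digests.values()).most_common(1)[0]
    return sorted(fp for fp, d in digests.items() if d != majority)

# Hypothetical data: three exits agree, one serves a different cert.
certs = {
    "EXIT-A": b"-----CERT-----real",
    "EXIT-B": b"-----CERT-----real",
    "EXIT-C": b"-----CERT-----fake",
    "EXIT-D": b"-----CERT-----real",
}
print(suspect_exits(certs))  # ['EXIT-C']
```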
A handful of hosts could run this thing and publish their results,
perhaps along with some other manually created list of undesirable
exits.
I think this is doable with perl, the Tor control port, wget, md5sum,
tsocks and 'openssl s_client', and is a lot more efficient than having
everyone verify everything all the time. The testing can be periodic,
the control port can be used to manually associate streams with
circuits so the exit in use is known, etc.
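The scan loop that glues those tools together might look like the following sketch. Here fetch_via_exit is a hypothetical callback standing in for the real wget-through-tsocks plumbing, with the control port pinning each circuit to a known exit; the fake fetcher below simulates one exit injecting content:

```python
import hashlib

def scan_url(url, exits, fetch_via_exit):
    """Fetch url once per exit and bucket exits by content digest.

    Returns a mapping of digest -> list of exits that served it;
    this is the kind of result a scanning host could publish. A
    lone digest bucket points at an exit injecting content.
    """
    buckets = {}
    for exit_fp in exits:
        body = fetch_via_exit(url, exit_fp)  # real code: wget via tsocks
        digest = hashlib.md5(body).hexdigest()
        buckets.setdefault(digest, []).append(exit_fp)
    return buckets

# Hypothetical stand-in for real Tor fetches: one exit tampers.
def fake_fetch(url, exit_fp):
    page = b"<html>static page</html>"
    if exit_fp == "EXIT-EVIL":
        return page + b"<script>bad</script>"
    return page

result = scan_url("http://example.com/", ["EXIT-1", "EXIT-2", "EXIT-EVIL"],
                  fake_fetch)
for digest, fps in sorted(result.items(), key=lambda kv: -len(kv[1])):
    print(digest[:8], fps)
```

Keeping the fetcher pluggable also makes the periodic-testing part cheap: the same bucketing code runs unchanged whether the fetch goes through a live circuit or a cached snapshot.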
If I'm not distracted by something shiny in the next couple days I'll
give it a shot. I mean, we've got to get these motherfuckin snakes off
this motherfuckin plane.
Mad Computer Scientist
fscked.org evil labs