
Re: First results of analysis

On 10/5/06, Alexander W. Janssen <yalla@xxxxxxxxxxxx> wrote:
> i checked 1161 nodes in total.

> 269 of them were responsive exit-nodes, all behaving correctly.

or just behaving when you were looking? ;)

> 9 exit-nodes were responsive, but they had some proxy installed which didn't
> behave quite correctly when you accessed a webpage with the notation
> original.url.$nodename.exit; the error messages ranged from
> "could not resolve" (looks like a DNS leak to me) through "502 Bad Gateway"
> to "502 Proxy Error".

if i were a rogue exit node i'd make sure that i leave all explicitly routed *.exit requests alone and only MITM the rest :)

most of the exits running proxies are probably doing so with honest
intent, a caching proxy for example, and not actively logging /
attacking traffic.

[ commodore64, which is running a squid proxy on all port 80 traffic it
exits, is probably doing so for caching purposes:
http://serifos.eecs.harvard.edu/cgi-bin/desc.pl?q=commodore64 ]

> However, in my list of exit-nodes i couldn't find any host which showed the
> described behaviour. My test-URL was http://www.linux-magazine.com/.

the malicious exits seem to be rare (that is, the explicit attempts to capture login / pass or route to phishing / spoofed sites).

a banking URL or web mail service would be a better test case for
these types of attacks.

> So there is still some space left for discussion: Did i miss the "bad" or
> "banned" exit-node?

it's been many weeks since i've seen an exit explicitly attacking traffic to get account info. so not a big problem to date... [and that exit is long gone - someone said they were doing it as a warning / proof-of-concept? i don't remember.]

> However, we should probably think about whether to install some kind of
> early warning system.

what you _really_ want is a reliable reputation metric for Tor nodes, which is actually a very hard problem to do well without opening up other attacks / vulnerabilities for the network as a whole.

> I could imagine something like this: Every client checks once
> per day some random website on the internet via, let's say, 10 random
> exit-nodes and compares the results. If something is wrong, the exit-node
> could be flagged to a real human who could verify the claim.

> What do you think about that?

- random pages will miss the targeted attacks on things like webmail or banking sites.
- you may want to avoid checking with .exit so you don't tip off the rogue nodes that you are making a test request.
- comparing "wrong" results without lots of false positives is hard, as most site changes are legitimate (rotating ad content, news or other notices, etc).
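to make the false-positive point concrete, here's a minimal sketch (not anyone's actual tool; names are illustrative) of a fuzzy page comparison that tolerates small legitimate changes like a rotated ad slot but flags a page that has been substantially rewritten:

```python
import difflib


def looks_tampered(baseline: str, observed: str, threshold: float = 0.8) -> bool:
    """Flag a fetched page as suspicious only when it differs
    substantially from the baseline copy.

    Small diffs (rotating ads, timestamps, counters) keep the
    similarity ratio above the threshold and are ignored; a
    wholesale rewrite or injected phishing page drops below it.
    """
    similarity = difflib.SequenceMatcher(None, baseline, observed).ratio()
    return similarity < threshold
```

even this is crude: a rogue exit could inject a single small login-form rewrite that stays above any plausible similarity threshold, so structural checks (comparing forms, script tags, and link targets) would catch more than raw text similarity does.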

some kind of automated testing for rogue exits would be useful though.
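as a rough sketch of what such automated testing might look like: the scanner loop itself is simple if the Tor plumbing is kept separate. here `fetch_via_exit` is a hypothetical stand-in for however you actually route a request through a given exit (e.g. Tor's SOCKS port plus per-circuit control) -- it is not a real API:

```python
import hashlib
from typing import Callable, List


def scan_exits(url: str,
               exits: List[str],
               fetch_via_exit: Callable[[str, str], str],
               fetch_direct: Callable[[str], str]) -> List[str]:
    """Fetch `url` directly and through each exit in `exits`;
    return the fingerprints of exits whose response differs
    from the direct fetch.

    Hash comparison keeps no page content around, but it will
    false-positive on any dynamic page -- combine it with a
    fuzzier similarity check before alerting a human.
    """
    baseline = hashlib.sha256(fetch_direct(url).encode()).hexdigest()
    suspects = []
    for fingerprint in exits:
        body = fetch_via_exit(url, fingerprint)
        if hashlib.sha256(body.encode()).hexdigest() != baseline:
            suspects.append(fingerprint)
    return suspects
```

taking the fetch functions as parameters also means the scanner never has to use the .exit notation itself, which (per the earlier point) would tip off a rogue node that it is being tested.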