Re: [tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks



Hi all,

On Mon, Jul 06, 2020 at 07:07:19AM -0400, Roger Dingledine wrote:
> 
> Three highlights from that ticket that tie into this thread:
> 
> (A) Limiting each "unverified" relay family to 0.5% doesn't by itself
> limit the total fraction of the network that's unverified. I see a lot of
> merit in another option, where the total (global, network-wide) influence
> from relays we don't "know" is limited to some fraction, like 50% or 25%.

That's a great idea, but how do you decide that you "know" a relay?
And in that case, I guess this fraction should stay low. Here we have
a case of 20% exit probability held by a single group of nodes, which
is already huge. And we don't necessarily know whether they also run
a share of the entry nodes.
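
To make the capping idea concrete, here is a minimal sketch of how
relays flagged as unverified could be scaled down so that they never
exceed a global fraction of the consensus weight. The data model and
the 25% figure are mine, just for illustration:

    # Sketch: scale unverified relays so their total consensus weight
    # stays under a global cap. "Verified" is whatever trust
    # mechanism we end up picking.
    def cap_unverified(relays, cap=0.25):
        """relays: list of (fingerprint, weight, verified) tuples."""
        total = sum(w for _, w, _ in relays)
        unverified = sum(w for _, w, v in relays if not v)
        if unverified <= cap * total:
            return {fp: w for fp, w, _ in relays}  # already under the cap
        verified_w = total - unverified
        # Solve u*s / (verified_w + u*s) = cap for the scale factor s:
        s = cap * verified_w / (unverified * (1 - cap))
        return {fp: w * s if not v else w for fp, w, v in relays}

    weights = cap_unverified(
        [("A", 50.0, True), ("B", 30.0, False), ("C", 20.0, False)])
    # B and C held 50% together; after scaling they hold 25% of the
    # new total, whatever bandwidth they bring.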

> 
> (B) I don't know what you have in mind with verifying a physical address
> (somebody goes there in person? somebody sends a postal letter and waits
> for a response?), but I think it's trying to be a proxy for verifying
> that we trust the relay operator, and I think we should brainstorm more
> options for achieving this trust. In particular, I think "humans knowing
> humans" could provide a stronger foundation.
> 
> More generally, I think we need to very carefully consider the extra
> steps we require from relay operators (plus the work they imply for
> ourselves), and what security we get from them. Is verifying that each
> relay corresponds to some email address worth the higher barrier in
> being a relay operator? Are there other approaches that achieve a better
> balance? The internet has a lot of experience now on sybil-resistance
> ideas, especially on ones that center around proving online resources
> (and it's mostly not good news).

Two points here:
 * We should read through the sybil-resistance literature to see
   what exists and how it could be adapted to Tor. I know that a lot
   of work has already been done, but maybe some extended defenses
   are required at this point.
 * A suggestion would be to build a web of trust between relay
   operators, using PGP or something similar, and to organize
   key-signing parties at hacker/Tor Project/dev events (see the
   sketch below). For example, I'm at FOSDEM in Brussels every year,
   and I attend the Tor birds-of-a-feather session when there is
   one. However, it would prevent people who want to keep complete
   anonymity from contributing to Tor, which is a drawback. Maybe
   this sparks some ideas?
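
To sketch what the verification side could look like: pull relay
contact info from Onionoo and match it against the set of PGP
fingerprints cross-signed at such parties. The trusted set, the
regex and the whole flow are my own assumptions, not an existing
Tor mechanism:

    # Sketch: flag relays whose ContactInfo carries a PGP fingerprint
    # our (hypothetical) web of trust has cross-signed.
    import re
    import requests

    # OpenPGP fingerprints verified at signing parties (made-up data).
    TRUSTED_PGP = {"0123456789ABCDEF0123456789ABCDEF01234567"}
    PGP_RE = re.compile(r"\b[0-9A-Fa-f]{40}\b")

    relays = requests.get(
        "https://onionoo.torproject.org/details",
        params={"fields": "fingerprint,nickname,contact"},
        timeout=60,
    ).json()["relays"]

    for relay in relays:
        found = {m.upper() for m in PGP_RE.findall(relay.get("contact") or "")}
        status = "known" if found & TRUSTED_PGP else "unknown"
        print(relay["fingerprint"], relay.get("nickname", ""), status)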

> 
> (C) Whichever mechanism(s) we pick for assigning trust to relays,
> one gap that's been bothering me lately is that we lack the tools for
> tracking and visualizing which relays we trust, especially over time,
> and especially with the amount of network churn that the Tor network
> sees. It would be great to have an easier tool where each of us could
> assess the overall network by whichever "trust" mechanisms we pick --
> and then armed with that better intuition, we could pick the ones that
> are most ready for use now and use them to influence network weights.
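
About (C): a rough first version of such a tool does not seem far
away, since Onionoo already exposes per-relay weight fractions. Here
is a sketch of the "how much of the network do we trust" number
(trusted set hypothetical, field names from the Onionoo details
documents); run it daily and log the output to get the over-time
view:

    # Sketch: fraction of consensus weight / exit / guard probability
    # held by relays in a (hypothetical) trusted set.
    import requests

    TRUSTED_RELAYS = {"0000000000000000000000000000000000000000"}

    relays = requests.get(
        "https://onionoo.torproject.org/details",
        params={"fields": "fingerprint,consensus_weight_fraction,"
                          "exit_probability,guard_probability"},
        timeout=60,
    ).json()["relays"]

    def trusted_share(field):
        return sum(r.get(field) or 0.0
                   for r in relays if r["fingerprint"] in TRUSTED_RELAYS)

    for field in ("consensus_weight_fraction", "exit_probability",
                  "guard_probability"):
        print(f"trusted {field}: {trusted_share(field):.2%}")
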
> 
> ------------------------------------------------------------------------
> 
> At the same time, we need to take other approaches to reduce the impact
> and incentives for having evil relays in the network. For examples:
> 
> (1) We need to finish getting rid of v2 onion services, so we stop the
> stupid arms race with threat intelligence companies who run relays in
> order to get the HSDir flag in order to scrape legacy onion addresses.

Good point. Do you think speeding up the process is possible? The
deadline is more than one year from now, which seems like a pretty
long time. Or maybe it is to synchronize with new releases of the
Linux/BSD distributions?
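
For reference, once v2 is gone the scraped legacy addresses simply
stop resolving, and v2/v3 addresses are trivial to tell apart by
length. A quick sketch:

    # Sketch: classify an onion address as legacy v2 or current v3.
    import re

    def onion_version(addr):
        host = addr.lower()
        if host.endswith(".onion"):
            host = host[:-len(".onion")]
        if re.fullmatch(r"[a-z2-7]{16}", host):
            return 2  # legacy v2: 16 base32 chars
        if re.fullmatch(r"[a-z2-7]{56}", host):
            return 3  # current v3: 56 base32 chars
        raise ValueError(f"not an onion address: {addr}")

    # The old torproject.org v2 address:
    assert onion_version("expyuzz4wqqyqhjn.onion") == 2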

> 
> (2) We need to get rid of http and other unauthenticated internet protocols:
> I've rebooted this ticket:
> https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/19850
> with a suggestion of essentially disabling http connections when the
> security slider is set to 'safer' or 'safest', to see if that's usable
> enough to eventually make it the default in Tor Browser.

+1, nothing more to add on the principle.
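
Just to picture the proposed behaviour, a rough sketch of the
decision logic (level and function names are mine, not the
ticket's):

    # Sketch: gate plain-http connections on the hardened security
    # levels, keeping onion addresses exempt since they are
    # self-authenticating.
    from urllib.parse import urlsplit

    SAFER, SAFEST = "safer", "safest"

    def allow_connection(url, security_level):
        parts = urlsplit(url)
        if parts.scheme != "http":
            return True  # https and friends pass
        if parts.hostname and parts.hostname.endswith(".onion"):
            return True  # onion addresses are self-authenticating
        return security_level not in (SAFER, SAFEST)

    assert allow_connection("https://torproject.org/", SAFEST)
    assert not allow_connection("http://example.com/", SAFER)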

> 
> (3) We need bandwidth measuring techniques that are more robust and
> harder to game, e.g. the design outlined in FlashFlow:
> https://arxiv.org/abs/2004.09583

I have seen that there is a proposal, and a thread on tor-dev that
died in April (lockdown maybe?). Maybe we should restart the
discussion around this technique?
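
For those who have not read the paper: as I understand it, several
trusted measurers send traffic to a relay simultaneously, and the
capacity estimate is the aggregate measured throughput plus the
relay's self-reported background traffic, with that self-report
capped, which is what bounds inflation to about 1.33x. A
back-of-the-envelope sketch, the 25% cap being my reading of the
paper:

    # Sketch of FlashFlow's aggregation idea (my reading of
    # https://arxiv.org/abs/2004.09583, not the actual implementation).
    def capacity_estimate(measured_mbps, reported_background_mbps,
                          background_cap=0.25):
        """measured_mbps: throughput each measurer saw during the
        window; the relay self-reports concurrent background traffic,
        but the claim is capped at a fraction of the total, so a lying
        relay gains at most 1/(1 - cap) = 1.33x."""
        measured = sum(measured_mbps)
        max_background = measured * background_cap / (1 - background_cap)
        return measured + min(reported_background_mbps, max_background)

    # Honest relay: three measurers saw ~30 Mbps each, 10 Mbps background.
    print(capacity_estimate([30, 31, 29], 10))   # 100
    # A relay claiming huge background traffic gets clipped:
    print(capacity_estimate([30, 31, 29], 500))  # 120, i.e. 1.33 * 90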


-- 
Guinness