
Re: [tor-relays] >23% Tor exit relay capacity found to be malicious - call for support for proposal to limit large scale attacks



On Sun, Jul 05, 2020 at 06:35:32PM +0200, nusenu wrote:
> To prevent this from happening over and over again
> I'm proposing two simple but to some extent effective relay requirements 
> to make malicious relay operations more expensive, time consuming,
> less sustainable and more risky for such actors:
> 
> a) require a verified email address for the exit or guard relay flag.
> (automated verification, many relays)
> 
> b) require a verified physical address for large operators (>=0.5% exit or guard probability)
> (manual verification, low number of operators). 

Thanks Nusenu!

I like the general goals here.

I've written up what I think would be a useful building block:
https://gitlab.torproject.org/tpo/metrics/relay-search/-/issues/40001

------------------------------------------------------------------------

Three highlights from that ticket that tie into this thread:

(A) Limiting each "unverified" relay family to 0.5% doesn't by itself
limit the total fraction of the network that's unverified. I see a lot of
merit in another option, where the total (global, network-wide) influence
from relays we don't "know" is limited to some fraction, like 50% or 25%.
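
As a strawman, here's roughly what a global cap could look like, in
Python. The relay list, its 'weight'/'verified' schema, and the 25%
limit are all hypothetical placeholders, not anything the directory
authorities do today:

    # Sketch: cap the total consensus weight held by unverified
    # relays at some fraction (e.g. 25%) by scaling them down.
    def cap_unverified(relays, limit=0.25):
        # relays: list of dicts with 'weight' (consensus weight)
        # and 'verified' (bool) -- placeholder schema.
        total = sum(r['weight'] for r in relays)
        unverified = sum(r['weight'] for r in relays if not r['verified'])
        if total == 0 or unverified / total <= limit:
            return relays  # already under the cap
        # Scale unverified weights so they end up at exactly the
        # limit: solve s*U / (V + s*U) = limit for the scale s.
        verified = total - unverified
        scale = (limit * verified) / ((1 - limit) * unverified)
        for r in relays:
            if not r['verified']:
                r['weight'] *= scale
        return relays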

(B) I don't know what you have in mind with verifying a physical address
(somebody goes there in person? somebody sends a postal letter and waits
for a response?), but I think it's trying to be a proxy for verifying
that we trust the relay operator, and I think we should brainstorm more
options for achieving this trust. In particular, I think "humans knowing
humans" could provide a stronger foundation.

More generally, I think we need to very carefully consider the extra
steps we require from relay operators (plus the work they imply for
ourselves), and what security we get from them. Is verifying that each
relay corresponds to some email address worth the higher barrier to
becoming a relay operator? Are there other approaches that achieve a
better balance? The internet has a lot of experience now with
sybil-resistance ideas, especially ones that center on proving control
of online resources (and it's mostly not good news).

(C) Whichever mechanism(s) we pick for assigning trust to relays,
one gap that's been bothering me lately is that we lack the tools for
tracking and visualizing which relays we trust, especially over time,
and especially with the amount of network churn that the Tor network
sees. It would be great to have an easy-to-use tool where each of us could
assess the overall network by whichever "trust" mechanisms we pick --
and then armed with that better intuition, we could pick the ones that
are most ready for use now and use them to influence network weights.
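
As a starting point for such a tool, something like this could compute
what fraction of today's exit probability comes from relays whose
operators we've marked as trusted, using the public Onionoo API. The
trusted-fingerprint set is a placeholder you'd maintain by hand (or by
whichever trust mechanism wins out):

    # Sketch: fraction of exit probability held by a hand-curated
    # set of "trusted" relays, via Tor's Onionoo service.
    import json
    import urllib.request

    TRUSTED = {
        # hypothetical fingerprints of relays whose operators we know
        '0000000000000000000000000000000000000000',
    }

    url = ('https://onionoo.torproject.org/details'
           '?running=true&fields=fingerprint,exit_probability')
    with urllib.request.urlopen(url) as resp:
        relays = json.load(resp)['relays']

    trusted = sum(r.get('exit_probability', 0)
                  for r in relays if r['fingerprint'] in TRUSTED)
    total = sum(r.get('exit_probability', 0) for r in relays)
    print('trusted fraction of exit probability: %.1f%%'
          % (100 * trusted / total))

Run the same computation against archived consensuses over time and
you'd get the churn picture too.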

------------------------------------------------------------------------

At the same time, we need to take other approaches to reduce the impact
of, and the incentives for, running evil relays in the network. For
example:

(1) We need to finish getting rid of v2 onion services, so we stop the
stupid arms race with threat intelligence companies who run relays in
order to get the HSDir flag so they can scrape legacy onion addresses.

(2) We need to get rid of http and other unauthenticated internet protocols:
I've rebooted this ticket:
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/19850
with a suggestion of essentially disabling http connections when the
security slider is set to 'safer' or 'safest', to see if that's usable
enough to eventually make it the default in Tor Browser.

(3) We need bandwidth measuring techniques that are more robust and
harder to game, e.g. the design outlined in FlashFlow:
https://arxiv.org/abs/2004.09583
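
FlashFlow itself is more involved, but as a minimal illustration of why
multi-party measurement is harder to game than self-reported bandwidth:
take the median across independent measurement teams, so no single
colluding measurer (and no relay's own claim) can inflate the result.
The numbers below are made up:

    # Sketch (not FlashFlow itself): combine bandwidth observations
    # from independent measurement teams with a median, so one
    # inflated report can't move a relay's weight much.
    import statistics

    # Hypothetical per-relay measurements, in MB/s.
    measurements = {
        'relayA': [9.8, 10.1, 10.0, 55.0],  # one team reports 55
        'relayB': [3.1, 2.9, 3.0, 3.2],
    }

    for relay, obs in measurements.items():
        print(relay, 'measured bandwidth:', statistics.median(obs))
    # relayA comes out near 10, not 55: the outlier is ignored.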

--Roger

_______________________________________________
tor-relays mailing list
tor-relays@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays