Hello friends,
The question of what the cheapest attack is can indeed be estimated by looking at market prices for the required resources. Your cost estimate of 3.72 USD/Gbps/month for bandwidth seems off by two orders of magnitude. The numbers I gave ($2/IP/month and $500/Gbps/month) are the amounts currently charged by my US hosting provider. At the time that I shopped around (which was in 2015), it was by far the best bandwidth cost that I was able to find, and those costs haven't changed much since then.

Currently on OVH the best I could find just now was $93.02/month for 250 Mbps unlimited (https://www.ovh.co.uk/dedicated_servers/hosting/1801host01.xml). This yields $372.08/Gbps/month. I am far from certain that this is the best price one could find - please do point me to better pricing if you have it!

I also just looked at Hetzner - another major Tor-friendly hosting provider. The best I could find was a 1 Gbps link capped at 100 TB/month for $310.49 (https://wiki.hetzner.de/index.php/Traffic/en). 1 Gbps sustained upload is 334.8 terabytes (1 TB = 1e12 bytes) over a 31-day month. If you exceed that limit, you can arrange to pay $1.24/TB. Therefore I would estimate the cost to be $601.64/Gbps/month. Again, I may be missing an option more tailored to a high-traffic server, and I would be happy to be pointed to it :-) Moreover, European bandwidth costs are among the lowest in the world. Other locations are likely to have even higher bandwidth costs (Australia, for example, has notoriously high bandwidth costs).
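For reference, here is the arithmetic behind those two per-Gbps estimates as a quick Python sketch; the only inputs are the prices quoted above:

# Back-of-the-envelope check of the bandwidth prices quoted above.
SECONDS_PER_MONTH = 31 * 24 * 3600          # 31-day month
TB = 1e12                                   # terabyte, as used above

# OVH: $93.02/month for an unmetered 250 Mbps link, scaled to 1 Gbps.
ovh_per_gbps = 93.02 * (1000 / 250)
print(f"OVH: ${ovh_per_gbps:.2f}/Gbps/month")           # ~$372.08

# Hetzner: 1 Gbps link, 100 TB/month included for $310.49,
# overage billed at $1.24/TB.
sustained_tb = (1e9 / 8) * SECONDS_PER_MONTH / TB        # ~334.8 TB
hetzner_total = 310.49 + (sustained_tb - 100) * 1.24
print(f"Hetzner sustained 1 Gbps: {sustained_tb:.1f} TB, "
      f"${hetzner_total:.2f}/month")                     # ~$601.64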
I do agree that the market changes, and in fact I expect the cost of IPs to plummet as the shift to IPv6 becomes pervasive. The current high cost of IPv4 addresses is due to their recent scarcity. In any case, a good question to ask would be how Tor should adjust to changes in market pricing over time.
I agree that the cost of compromising machines is unclear. However, we should still make an estimate, and the market for 0-days provides some signal of their value in the form of prices. 0-days for the Tor software stack are expensive, as, for security reasons, (well-run) Tor relays run few services other than the tor process. I haven't seen great data on Linux zero-days, but recently a Windows zero-day (Windows being the second most common OS among Tor relays) appeared to cost $90K (https://www.csoonline.com/article/3077447/security/cost-of-a-windows-zero-day-exploit-this-one-goes-for-90000.html). Deploying a zero-day does impose a cost, as it increases the chance of that exploit being discovered and its value lost. Therefore, such exploits are likely to be deployed only on high-value targets. I would argue that Tor relays are unlikely to be such a target because it is so much cheaper to simply run your own relays. An exception could be a specific targeted investigation in which some suspect is behind a known relay (say, a hidden service behind a guard), because running other relays doesn't help dislodge the target from behind its existing guard.
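To put "so much cheaper" in rough numbers, here is an illustrative comparison using only the prices quoted earlier ($500/Gbps/month for bandwidth, $2/IP/month) and the $90K zero-day figure; the relay count and per-relay bandwidth are invented for the example:

# Rough comparison: how long could an attacker run relays for the
# price of one Windows zero-day?  Prices as quoted above; the relay
# scenario below is purely illustrative.
ZERO_DAY = 90_000          # USD, price from the linked article
BW_COST = 500              # USD per Gbps per month
IP_COST = 2                # USD per IP per month

relays, gbps_each = 100, 0.1                 # hypothetical: 100 relays at 100 Mbps each
monthly = relays * (gbps_each * BW_COST + IP_COST)
print(f"{relays} relays, {relays * gbps_each:.0f} Gbps total: "
      f"${monthly:,.0f}/month, or {ZERO_DAY / monthly:.1f} months "
      f"for the price of one zero-day")

Under these made-up numbers the attacker could run the relays for well over a year before matching the cost of a single exploit, and that is before counting the risk of burning the exploit by deploying it.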
This doesn't seem like a good argument to me: "bots that become guards must have high availability, and thus they likely have high bandwidth". How many bots would become guards in the first place? And why would availability (by which I understand you to mean uptime) imply bandwidth? The economics matter here, and I don't know too much about botnet economics, but my impression is that botnets generally include many thousands of machines and that each bot is generally quickly shut down by its service provider once it starts spewing traffic (i.e. acting as a high-bandwidth Tor relay). Thus waterfilling could benefit botnets by giving them more clients to attack while each bot provides only a small amount of bandwidth that falls below the radar of its ISP. This is a speculative argument, I admit, but it seems to me somewhat more logical than the argument you outlined.
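To make that intuition concrete, here is a toy calculation based on my simplified reading of waterfilling (a single water level caps each guard's weight, and relays below the level keep their full weight); the network composition is invented:

# Toy illustration of why many small relays could gain guard probability
# under waterfilling.  This is a simplified model, not the exact algorithm
# from the proposal: each guard's weight is min(bw, L), with the water
# level L chosen so the capped weights sum to a target guard bandwidth.

def water_level(bws, target):
    lo, hi = 0.0, max(bws)
    for _ in range(100):                      # bisection on L
        mid = (lo + hi) / 2
        if sum(min(b, mid) for b in bws) < target:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical network: 10 big relays at 500 Mbps, 1000 bots at 1 Mbps.
bws = [500.0] * 10 + [1.0] * 1000
target = 0.5 * sum(bws)                       # half the bandwidth allocated to the guard position
L = water_level(bws, target)
weights = [min(b, L) for b in bws]

bot_share_vanilla = 1000 * 1.0 / sum(bws)
bot_share_wf = sum(weights[10:]) / sum(weights)
print(f"water level: {L:.1f} Mbps")
print(f"bots' guard share, bandwidth-proportional: {bot_share_vanilla:.1%}")
print(f"bots' guard share, waterfilling:           {bot_share_wf:.1%}")

In this toy network the bots' share of guard selection roughly doubles (from about 17% to about 33%), even though each bot offers only 1 Mbps.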
Why is running a large number of relays more noticeable than running a high-bandwidth relay? Actually, it seems, if anything, *less* noticeable. An attacker could even indicate that all the relays are in the same family, and there is no Tor policy that would kick them out of the network for being "too large" a family. If Tor wants to limit the size of single entities, then it would have to kick out some large existing families (Team Cymru, torservers.net, and the Chaos Communication Congress come to mind), and moreover such a policy could apply equally well to total amount of bandwidth as to total number of relays.
This suggestion of applying waterfilling to individual ASes is intriguing, but it would require a more developed design and argument. Would the attacker model be one in which there is a fixed cost to compromise or observe a given AS?
I disagree that uniform relay selection is a sound design principle. Instead, one should consider various likely attackers and consider what design maximizes the attack cost (or maybe maximizes the minimum attack cost among likely attackers). In the absence of detailed attacker information, a good design principle might be for clients to choose "diverse" relays, where diversity should take into account country, operator, operating system, AS, and IXP connectivity, among other things.

Best,
Aaron