> On 23 Sep 2016, at 13:02, René Mayrhofer <rm@xxxxxxxxxx> wrote:
>
> Hi everybody,
>
> Unfortunately, it took a bit longer than expected, but here goes...
> FWIW, after the recent update (with subsequent downtime), our exit node
> is fully up and running again (including this patch) and relaying over
> 1TB a day at the moment.

Thanks for running a fast Tor Exit!

> On 2016-09-19 at 23:36, René Mayrhofer wrote:
>> On 2016-09-19 at 20:24, grarpamp wrote:
>>> On Mon, Sep 19, 2016 at 9:14 AM, René Mayrhofer <rm@xxxxxxxxxx> wrote:
>>>> Setup: Please note that our setup is a bit particular for reasons that
>>>> we will explain in more detail in a later message (including a proposed
>>>> patch to the current source which has been pending also because of the
>>>> holiday situation...). Briefly summarizing, we use a different network
>>>> interface for "incoming" (Tor encrypted traffic) than for "outgoing"
>>>> (mostly clearnet traffic from the exit node, but currently still
>>>> includes outgoing Tor relay traffic to other nodes). The outgoing
>>>> interface has the default route associated, while the incoming interface
>>>> will only originate traffic in response to those incoming connections.
>>>> Consequently, we let our Tor node only bind to the IP address assigned
>>>> to the incoming interface 193.171.202.146, while it will initiate new
>>>> outgoing connections with IP 193.171.202.150.
>>> There could be further benefit / flexibility in a 'proposed patch' that
>>> would allow taking the incoming ORPort traffic and further splitting it
>>> outbound by a) OutboundBindAddressInt, for traffic going back internal
>>> to tor, and b) OutboundBindAddressExt, for traffic going out external
>>> to clearnet. Those two would include port specification for optional
>>> use on the same IP.
Binding to a particular source port is a bad idea: the 4-tuple of
(source IP, source port, destination IP, destination port) must be
unique, so this would mean that the Exit could only make one connection
per destination IP and port, which would prevent multiple clients from
querying the same website at the same time.

>>> I do not recall if this splitting is currently possible.

No, it's not.

>> That is exactly what we have patched our local Tor node to do, although
>> with a different (slightly hacky, so the patch will be an RFC type)
>> approach: marking real exit traffic with a ToS flag to leave the
>> decision of what to do with it to the next layer (in our setup, Linux
>> kernel based policy routing on the same host). There may be a much
>> better approach to achieve this goal. I plan on writing up our setup
>> (and the rationale behind it) along with the "works for me but is not
>> ready for upstream inclusion" patch tomorrow.

I'm not sure if we want to tag Tor traffic with QoS values at Exits.
Any tagging carries some degree of risk, because it makes traffic look
more unique. I'm not sure how much of a risk QoS tagging represents.

I would prefer to add config options OutboundBindAddressOR and
OutboundBindAddressExit, which would default to OutboundBindAddress
when not set. (And could be specified twice, once for IPv4, and once
for IPv6.)

The one concern I have about this is that Tor-over-Tor would stick out
more, as it would look like Tor coming out of the OutboundBindAddressExit
IP. But we don't encourage Tor-over-Tor anyway.

I'd recommend a patch that modifies this section in connection_connect
to use OutboundBindAddressOR and OutboundBindAddressExit, preferably
with the Exit/OR/(all) and IPv4/IPv6 logic refactored into its own
function.
  if (!tor_addr_is_loopback(addr)) {
    const tor_addr_t *ext_addr = NULL;
    if (protocol_family == AF_INET &&
        !tor_addr_is_null(&options->OutboundBindAddressIPv4_))
      ext_addr = &options->OutboundBindAddressIPv4_;
    else if (protocol_family == AF_INET6 &&
             !tor_addr_is_null(&options->OutboundBindAddressIPv6_))
      ext_addr = &options->OutboundBindAddressIPv6_;
    if (ext_addr) {
      memset(&bind_addr_ss, 0, sizeof(bind_addr_ss));
      bind_addr_len = tor_addr_to_sockaddr(ext_addr, 0,
                                           (struct sockaddr *) &bind_addr_ss,
                                           sizeof(bind_addr_ss));
      if (bind_addr_len == 0) {
        log_warn(LD_NET,
                 "Error converting OutboundBindAddress %s into sockaddr. "
                 "Ignoring.", fmt_and_decorate_addr(ext_addr));
      } else {
        bind_addr = (struct sockaddr *)&bind_addr_ss;
      }
    }
  }

> Ideally, we would use 2 different providers to even further
> compartmentalize "incoming" (i.e. encrypted Tor network) from "outgoing"
> (for our exit node, mostly clearnet) traffic and make traffic
> correlation harder (this doesn't help against a global adversary, as we
> know, but at least a single ISP would not be able to directly correlate
> both sides of the relay). Although we don't have two different providers
> at this point, we still use two different network interfaces with
> associated IP addresses (one advertised as the Tor node for incoming
> traffic, and the other one with the default route assigned for outgoing
> traffic).

This sounds like an interesting setup. I'd be keen to see how it works
out.

Some Exit providers (typically with their own AS) peer with multiple
other providers, because this makes it harder for a single network tap
to capture all their traffic. Not quite the same as your setup, because
OR and Exit traffic goes over all the links, rather than each going over
a separate link.

> ...
> [The patch]
> Currently, both (clearnet) exit traffic and encrypted Tor traffic (to
> other nodes and hidden services) use the outgoing interface, as the Tor
> daemon simply creates TCP sockets and uses the default route (which
> points at the outgoing interface). A patch as suggested by grarpamp
> above could solve that issue. In the meantime, we have created a
> slightly hacky patch, as attached. The simplest way to record only exit
> traffic and separate it from outgoing Tor traffic seemed to be marking
> those packets with a ToS value - which, as far as we can see, can be
> done with a minimally invasive patch adding that option at a single
> point in connection.c. At the moment, we use this ToS value in a filter
> expression at the monitoring server to make sure that we do not analyze
> outgoing Tor traffic. We also plan to use it for policy routing rules
> at the Linux kernel level to send outgoing Tor traffic back out the
> "incoming" interface (to distinguish between Tor traffic and clear
> traffic). When that works, the ToS flag can actually be removed again
> before the packets leave the Tor node.

Binding to different IP addresses can also be used for filtering and
traffic redirection. Does having separate bind addresses for OR and
Exit traffic work for your use case?

> What do you think of that approach? Does that seem reasonable or would
> there be a cleaner approach to achieve that kind of separation of exit
> traffic from other traffic for analysis purposes? If this patch seems
> useful, we can extend it to make this marking configurable for
> potential upstream inclusion.
>
> Rene
> (Head of the Institute for Networks and Security at JKU)

T

--
Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org
_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev