Hi Kristian,

Thanks for the screenshot. Nice machine! Not everyone is as fortunate as you when it comes to resources for their Tor deployments. While a CPU affinity option isn't high on the priority list, and, as you point out, many operating systems do a decent job of load management and third-party options for CPU affinity exist, it might still be helpful for some operators to have an application-layer option to tune their deployments natively.

As an aside... Are you presently using a single public address with many ports, or many public addresses with a single port, for your Tor deployments? Have you ever considered putting all those Tor instances behind a single public address:port (fingerprint) to create one super bridge/relay? I'm just wondering whether it makes sense to conserve public address space and rotate through it to stay ahead of the blacklisting curve.

Also... Do you mind disclosing what all your screen instances are for? Are you running your Tor instances manually rather than in daemon mode? "Inquiring minds want to know." 😁

As always... It is great to engage in dialogue with you.

Respectfully,

Gary

On Tuesday, December 28, 2021, 1:39:31 PM MST, abuse@xxxxxxxxxxxxx <abuse@xxxxxxxxxxxxx> wrote:

Hi Gary,

Why would that be needed? Linux has a pretty good thread scheduler, IMO, and will shuffle loads around as needed. Even Windows' thread scheduler is quite decent these days, and tools like "Process Lasso" exist if additional fine-tuning is needed.

Attached is one of my servers running multiple Tor instances on a 12/24C platform. The load is spread quite evenly across all cores.

Best Regards,

Kristian

Dec 27, 2021, 22:08 by tor-relays@xxxxxxxxxxxxxxxxxxxx:

BTW... I just fact-checked my postscript, and the CPU affinity configuration I was thinking of is for Nginx (not Tor). Tor should consider adding a CPU affinity configuration option. What happens if you configure additional Tor instances on the same machine (my Tor instances are on different machines) and start them up?
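For reference, the kind of third-party, OS-level affinity tuning mentioned above can be sketched on Linux with taskset from util-linux. This is a minimal illustration, not a Tor feature; the shell's own PID ($$) stands in for a tor process PID here.

```shell
# Pin a process (here the current shell, standing in for a tor
# instance) to CPU core 0, then read the affinity mask back.
# In a real deployment you would use the tor process's PID instead.
taskset -cp 0 $$

# Show the resulting affinity list for the process.
taskset -cp $$
```

The same effect at startup would be `taskset -c 0 tor -f /etc/tor/torrc`, which launches the process already confined to the given core.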
Do they bind to a different CPU core or to the same one?

Respectfully,

Gary

On Monday, December 27, 2021, 2:44:59 PM MST, Gary C. New via tor-relays <tor-relays@xxxxxxxxxxxxxxxxxxxx> wrote:

David/Roger:

Search the tor-relays mail archive for my previous responses on load balancing Tor relays, which I've been doing successfully for the past six months with Nginx (it's possible with HAProxy as well). I haven't had time to implement it with a Tor bridge, but I assume it will be very similar. Keep in mind it's critical to configure each Tor instance to use the same DirectoryAuthority and to disable the upstream timeouts on Nginx/HAProxy.

Happy Tor load balancing!

Respectfully,

Gary

P.S. I believe there's a torrc config option to specify which CPU core a given Tor instance should use, too.

On Monday, December 27, 2021, 2:00:50 PM MST, Roger Dingledine <arma@xxxxxxxxxxxxxx> wrote:

On Mon, Dec 27, 2021 at 12:05:26PM -0700, David Fifield wrote:
> I have the impression that tor cannot use more than one CPU core -- is that
> correct? If so, what can be done to permit a bridge to scale beyond
> 1×100% CPU? We can fairly easily scale the Snowflake-specific components
> around the tor process, but ultimately, a tor client process expects to
> connect to a bridge having a certain fingerprint, and that is the part I
> don't know how to easily scale.
>
> * Surely it's not possible to run multiple instances of tor with the
> same fingerprint? Or is it? Does the answer change if all instances
> are on the same IP address? If the OR ports are never used?

Good timing -- Cecylia pointed out the higher load on Flakey a few days ago, and I've been meaning to post a suggestion somewhere. You actually *can* run more than one bridge with the same fingerprint.
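A hypothetical nginx `stream` fragment for the load-balancing setup described above might look like the following. The addresses, ports, and timeout values are illustrative placeholders, not taken from the thread; the long `proxy_timeout` is one way to effectively disable the upstream idle timeout for long-lived OR connections.

```nginx
stream {
    # Backend tor instances (placeholder addresses).
    upstream tor_or {
        server 10.0.0.11:9001;
        server 10.0.0.12:9001;
        server 10.0.0.13:9001;
    }

    server {
        listen 443;                 # single public ORPort
        proxy_pass tor_or;
        proxy_timeout 7d;           # effectively no idle timeout
        proxy_connect_timeout 10s;
    }
}
```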
Just set it up in two places, with the same identity key, and then whichever one the client connects to, the client will be satisfied that it's reaching the right bridge.

There are two catches to the idea:

(A) Even though the bridges will have the same identity key, they won't have the same circuit-level onion key, so it will be smart to "pin" each client to a single bridge instance -- so when they fetch the bridge descriptor, which specifies the onion key, they will continue to use that bridge instance with that onion key. Snowflake in particular might also want to pin clients to specific bridges because of the KCP state.

(Another option, instead of pinning clients to specific instances, would be to try to share state among all the bridges on the backend, e.g. so they use the same onion key, can resume the same KCP sessions, etc. This option seems hard.)

(B) It's been a long time since anybody tried this, so there might be surprises. :) But it *should* work, so if there are surprises, we should try to fix them.

This overall idea is similar to the "router twins" idea from the distant distant past:

> * Removing the fingerprint from the snowflake Bridge line in Tor Browser
> would permit the Snowflake proxies to round-robin clients over several
> bridges, but then the first hop would be unauthenticated (at the Tor
> layer). It would be nice if it were possible to specify a small set of
> permitted bridge fingerprints.

This approach would also require clients to pin to a particular bridge, right? Because of the different state that each bridge will have?

--Roger
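One way the same-fingerprint setup above could look in practice is a hypothetical torrc for the second of two bridge instances: the identity keys from the first instance's `keys/` directory would be copied into this instance's DataDirectory before starting it, so both publish the same fingerprint. All paths, ports, and names here are illustrative assumptions.

```
# torrc sketch for a second bridge instance sharing one fingerprint.
# Before starting: copy keys/ (identity keys) from instance 1 into
# this DataDirectory so both instances present the same fingerprint.
DataDirectory /var/lib/tor-instance2
ORPort 9002
BridgeRelay 1
Nickname MyBridge
```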
_______________________________________________
tor-relays mailing list
tor-relays@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays