
Re: [tor-talk] Massive Bandwidth Onion Services



Hi George!

On 20 December 2016 at 14:03, George Kadianakis <desnacked@xxxxxxxxxx>
wrote:

> BTW and to slightly diverge the topic, I really like this experiment and
> its blazing fast results, but I still get a weird feeling when I see
> that to start functioning it requires 432 descriptors uploaded to the
> HSDir system (72 daemons * 6 descriptors).


There is a hypothetical way around that, but Donncha will need to comment,
too.

I don't _feel_ that this is a problem which needs to be solved in the next
2 years.

What _could_ happen in the future is this:

1) the 72 workers could each set up an introduction point (IP) but *not*
publish it in a descriptor, and then

2) the master daemon could poll the 72 workers for their list of current
IPs via a backchannel, and then...

3) construct the "master descriptor" from that information.
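
Hand-waving in Python, that poll-and-assemble step might look roughly like
this - a sketch only, assuming each worker exposes its current introduction
points over some private backchannel (here plain HTTP+JSON on a LAN, which
is invented purely for illustration), and with make_master_descriptor() and
publish_descriptor() standing in as hypothetical helpers for the real
descriptor assembly, signing, and HSPOST upload:

    import json
    import urllib.request

    # Invented example: 72 workers on a private LAN, each answering
    # GET /intro-points with a JSON list of its established intro points.
    WORKERS = ["http://10.0.0.%d:8080/intro-points" % i for i in range(1, 73)]

    def poll_worker(url):
        """Ask one worker (over the backchannel) for its current intro points."""
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)

    def rebuild_master_descriptor():
        intro_points = []
        for url in WORKERS:
            try:
                intro_points.extend(poll_worker(url))
            except OSError:
                continue  # a dead worker simply contributes nothing
        # A v2 descriptor only carries ~10 intro points, so choose a subset
        # (or spread them across the descriptor replicas).
        chosen = intro_points[:10]
        # Hypothetical helpers: assemble + sign with the master key, then
        # publish via the Tor control port (HSPOST).
        descriptor = make_master_descriptor(chosen, master_key="/path/to/master.key")
        publish_descriptor(descriptor)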

This has pros and cons.

It makes the architecture more "active" - using backchannels and lots of
chattiness - which is great for physically co-located clusters (i.e.
everything in one rack) but makes matters *really* complicated for the kind
of physically distributed clustering that Tor would be awesome and
cutting-edge for.

For one thing, in such a scenario, the slave onion addresses could not be
used to coordinate with each other, which would make it really hard to
build a high-availability solution like a Horcrux* (see below).

It's probably easier to think of this not as "432 descriptors" but instead
as "73 Hidden Services" - comprising 72x "physical" onion addresses, and 1x
"virtual" onion address using OnionBalance. This is much the same as the
Physical-versus-VIP split which one sees in other load-balancing
architectures.
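
To put the VIP analogy in concrete terms, the mapping is just one public
"virtual" address fronting many worker addresses - something like the
following (the addresses are placeholders, and the real OnionBalance keeps
the equivalent mapping in its own config file rather than in Python):

    # Placeholder addresses: conceptually one "VIP" onion fronting 72 "physical" ones.
    VIRTUAL_ONION = "mastervirtualxxx.onion"      # the one address users ever see
    PHYSICAL_ONIONS = [                           # the 72 worker onions behind it
        "workeraaxxxxxxxx.onion",
        "workerbbxxxxxxxx.onion",
        # ... and 70 more ...
    ]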

It also resonates with the slides I posted in the thread, here:

https://twitter.com/AlecMuffett/status/802161730591793152

...arguing that Onion addresses are the Layer-2 addresses of a new network
space.

With such an approach, rather than seeing Onion addresses / HSDirs as
scarce resources, we would do better to engineer them to become abundant
and cheap, for they will become as popular and as ephemeral as any other
Layer-2 address.


tl;dr - I am doing the bonkers stuff so that nobody else has to. 72 is
above-and-beyond, especially since Facebook does it with two; but if
streaming HD Video over Tor eventually becomes a thing, something like this
will need to happen. :-)



> To be clear, I think this is
> fine for an innovative experiment like this, but it's not a setup that
> everyone on the network should replicate.


Concur. Only semi-retired enterprise architects with spare time need apply.
:-)

If you would like to talk to one of the 72 daemons, check out:

    http://jmlwiy2xu3lmrh66.onion/

...which is probably okay for the next 24h or so.


> I guess to improve the
> situation here, we would need some sort of "onionbalance complex mode"
> where instead of uploading the intermediary descriptors to the HSDir
> system, we upload them to an internal master onionbalance node which
> does the intro point multiplexing.
>

Agreed, we can do that, and that's very efficient for localised clusters.

However, I had this idea the other evening*, which smells very "Tor" and
has some interesting properties.

1) Say that, instead of 72, we chose a more sensible number like "6" onion
addresses

2) We configure 6 cheap devices (Raspberry Pi?) each to have a single
"worker" onion address

3) We also configure OnionBalance on all 6 computers, so that they all know
about each other's onion addresses, plus the *same* master key; so we have
an n-squared mesh.

4) They get booted; each launches its own Worker onion, and each scrapes
the descriptors of all the other workers, synthesising a "master"
descriptor and publishing it once a day to the HSDirs.

5) This means that, for workers A B C D E F, occasionally the master
descriptor which B's onionbalance uploads to the HSDirs will get stomped-on
a few minutes later by the <same> from F, and then the <same> from D will
overwrite them, etc.

6) There is some (small?) extra load on the HSDirs this way - BUT the big
win is that taking this onion site "offline" would require killing all 6
daemons on all 6 machines - hence the "Horcrux" reference from Harry Potter.

7) This works because the 6 daemons use the HSDir as a source of truth
about the descriptors, which is an idea Donncha had for OnionBalance, and
is awesome, because it enables this kind of trusted distributed directory.

8) To make it forgery-proof as well, you'd want to use certificates, or
signing, or something; but this would be an intensely robust
High-Availability architecture.
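
For what it's worth, the per-node loop might look something like this in
Python with stem - just a sketch of the idea, where PEER_ONIONS,
merge_intro_points(), sign_master_descriptor() and hspost() are all
placeholders I have invented, not OnionBalance code; the only real point is
that every node fetches its peers' descriptors back out of the HSDir system
and republishes the shared master descriptor itself:

    from stem.control import Controller

    # Placeholder worker addresses: every node holds the same list, plus a
    # copy of the same master private key.
    PEER_ONIONS = ["workeraaxxxxxxxx.onion", "workerbbxxxxxxxx.onion"]  # ...6 total
    MASTER_KEY_PATH = "/etc/horcrux/master.key"

    def republish_master():
        with Controller.from_port(port=9051) as controller:
            controller.authenticate()
            peer_descriptors = []
            for onion in PEER_ONIONS:
                try:
                    # HSFETCH via stem: the HSDirs themselves are the source
                    # of truth about each worker's current intro points.
                    peer_descriptors.append(
                        controller.get_hidden_service_descriptor(onion))
                except Exception:
                    continue  # a dead Horcrux simply drops out of the set
            # Hypothetical helpers: merge the intro points, sign with the
            # shared master key, then publish via the control port (HSPOST).
            intro_points = merge_intro_points(peer_descriptors)
            master = sign_master_descriptor(intro_points, MASTER_KEY_PATH)
            hspost(controller, master)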

I want to build one, for test/fun, but not until the bandwidth testing is
done.

    - alec

* Horcrux thread: https://twitter.com/AlecMuffett/status/810219913314992128

-- 
http://dropsafe.crypticide.com/aboutalecm