
Re: [tor-dev] Hidden Service Scaling



On 08/10/13 23:41, Nick Mathewson wrote:
> Here are some possible desirable things. I don't know if they're all
> important, or all worth it. Let's discuss!

So, I think it makes more sense to cover the goals first.

>   Goal 1) Obscure number of hidden service instances.

Good to have, as it probably helps with the anonymity of a hidden
service. This is a guess, based on the assumption that traffic-analysis
attacks are harder if you don't know the number of servers you are
looking for.

>   Goal 2) No "master" hidden service instance.
>   Goal 3) If there is a "master" hidden service instance, clean
> fail-over from one master to the next, undetectable by the network.

That sounds reasonable.

>   Goal 4) Obscure which instances are up and which are down.

I think it would be good to make the failure of a hidden service server
(perhaps alone, or one of many for that service) indistinguishable from
a breakage in any of the relays. Without this property, distributing the
service does little to help against attacks that correlate server
downtime with public events (power outages, network outages, ...). This
is a specific form of this goal, which applies when you are in
communication with an instance that goes down.

> What other goals should we have in this kind of design?

   Goal 5) The design should cope (i.e. all the goals should still hold)
with instances being taken down (planned downtime) and brought up.

   Goal 6) Adding instances should not reduce performance.

I can see a problem here: if you have one large, powerful server, adding
a smaller server could actually reduce performance if the load is
distributed equally, because the smaller instance becomes the bottleneck.
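
A rough Python sketch of the arithmetic, with made-up capacities, to
show why an equal split can hurt:

    # Hypothetical capacities in requests/second, purely illustrative.
    big, small = 900, 100

    # Equal split: each instance receives half the load, so the pair
    # saturates when the small instance does, at 2 * min(big, small)
    # = 200 req/s -- worse than the big server alone (900 req/s).
    equal_split_max = 2 * min(big, small)

    # Capacity-weighted split: the pair saturates at big + small.
    weighted_max = big + small

    print(equal_split_max, weighted_max)  # 200 1000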

   Goal 7) It should be easy to move between a single-instance and a
multiple-instance service. (This might be a specific case of goal 5, or
just need consolidating with it.)

> Alternative 1:  Multiple hidden service descriptors.
> 
> Each instance of a hidden service picks its own introduction points,
> and uploads a separate hidden service descriptor to a subset of the
> HSDir nodes handling that service.

This is close to breaking goal 1: each instance would have to have >= 1
introduction point, so the number of introduction points puts an upper
bound on the number of instances. For example, a descriptor listing
three introduction points reveals that there are at most three
instances. The way the OP picks the number of introduction points to
create would have to be thought about with this in mind.

Also, goal 4 could be broken: if the service becomes unreachable through
a subset of the introduction points, that probably means that one or
more of the instances have gone down (assuming that an attacker can
discover all the introduction points?).

> Alternative 2: Combined hidden service descriptors in the network.
> 
> Each instance of a hidden service picks its own introduction points,
> and uploads something to every appropriate HSDir node.  The HSDir
> nodes combine those somethings, somehow, into a hidden service
> descriptor.

Same problem with goal 4 as alternative 1. It probably also has problems
obscuring the number of instances from the HSDirs.

> Alternative 3: Single hidden service descriptor, one service instance
> per intro point.
> 
> Each instance of a hidden service picks its introduction points, and
> somehow they coordinate so that they, together, get a single unified
> list of all their introduction points. They use this list to make a
> single signed hidden service descriptor, and upload that to the
> appropriate HSDirs.

Same problem with goal 4 as alternative 1. I don't believe this has the
same problem with the number of instances as alternative 1, though.

> Alternative 4: Single hidden service descriptor, multiple service
> instances per intro point.
> 
> This is your design above, where there's one descriptor chosen by a
> single hidden service instance (or possibly made collaboratively?),
> and the rest of the service instances fetch it, learn which intro
> points they're supposed to be at, and parasitically establish fallback
> introduction circuits there.

I don't really see how choosing introduction points collaboratively
would work, as it could lead to a separation between single-instance
services and multiple-instance services, which could break goal 7. It
would also require the instances to interact, which adds some complexity.

As for the fallback circuits, they are probably better off being just
ordinary circuits; these are what would provide the scaling. The way you
do this would have to be thought out, though, to avoid breaking goal 6.

A simple algorithm would be for the introduction point to round-robin
over all the circuits to that service, but allow an instance to reject a
connection (if it is under too much load); the introduction point would
then continue to the next circuit.
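
A minimal sketch of that round robin, assuming hypothetical circuit
objects with a try_introduce() method that returns False on rejection
(none of these names come from the Tor codebase):

    from collections import deque

    class IntroductionPoint:
        def __init__(self, circuits):
            # All circuits that service instances have opened to us.
            self.circuits = deque(circuits)

        def handle_introduction(self, request):
            # Try each circuit at most once, in round-robin order.
            for _ in range(len(self.circuits)):
                circuit = self.circuits[0]
                self.circuits.rotate(-1)  # next request starts one on
                if circuit.try_introduce(request):
                    return True   # an instance accepted the introduction
            return False          # every instance rejected or failed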

The introduction point would also know the number of instances, if each
instance only connected once. This could be masked by having instances
make multiple connections to each introduction point (in both
single-instance and multiple-instance services).
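
One way to do that masking is to fix the total number of circuits per
introduction point, service-wide, and deal them out across whatever
instances exist. A sketch under that assumption (open_circuit() is
hypothetical, and a real design would have to avoid the central
coordinator implied here):

    CIRCUITS_PER_INTRO_POINT = 6  # fixed regardless of instance count

    def open_masking_circuits(intro_point, instances):
        # One instance opens six circuits; six instances open one each.
        # Either way the introduction point sees six circuits.
        for i in range(CIRCUITS_PER_INTRO_POINT):
            instance = instances[i % len(instances)]
            instance.open_circuit(intro_point)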

While an external attacker might not be able to detect individual
instance failure by trying to continuously connect through all the
introduction points, the introduction points themselves would probably
be able to work out if one or more instances just failed. To combat
this, the service could inject random failures (some kind of
non-response which would be given if the service had actually failed)
into some of the circuits, to keep the introduction point guessing. This
hopefully would not be too detrimental, as the introduction point would
just try the next circuit.
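
A sketch of that failure injection, extending the try_introduce() idea
from above; FAILURE_RATE is a made-up tuning knob, and the injected
non-response would have to be byte-for-byte identical to a real failure:

    import random

    FAILURE_RATE = 0.05  # fraction of introductions to fail on purpose

    class ServiceInstance:
        def try_introduce(self, request):
            # Occasionally pretend to be down, so the introduction
            # point cannot tell a real instance failure from cover
            # noise.
            if random.random() < FAILURE_RATE:
                return False  # same non-response as a genuine failure
            return self.handle(request)  # handle() is hypothetical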

> There are probably other alternatives too; let's see if we can think
> of some more.

I believe that to mask the state, and possibly the number, of instances,
you would need at least some of the introduction points to be connected
to multiple instances.

You could also have some random or coordinated shuffling of the
connections, such that the instance(s) behind each introduction point
keep changing (this might address the above concerns).
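
A sketch of a coordinated shuffle, run periodically; replace_circuits()
and the choice of two instances per introduction point are assumptions:

    import random

    INSTANCES_PER_INTRO_POINT = 2

    def reshuffle(intro_points, instances):
        # Re-deal instances across introduction points so that no
        # introduction point sees a stable set of instances behind
        # its circuits.
        for ip in intro_points:
            k = min(INSTANCES_PER_INTRO_POINT, len(instances))
            chosen = random.sample(instances, k)
            ip.replace_circuits(chosen)  # tear down old, build new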

Combining this with multiple circuits to each introduction point (from
the same instance), and with random failures (to hide real failures),
might give the required level of security.

I will try and develop some full alternatives, once I have had some
sleep... I realise that I have only commented negatively regarding the
alternatives that you gave, but thanks enormously for talking to me
about this, as it has really helped me.

