[Author Prev][Author Next][Thread Prev][Thread Next][Author Index][Thread Index]

Re: [tor-dev] Hidden Service Scaling



El 02/05/14 02:34, Christopher Baines escribió:
On 02/05/14 00:45, waldo wrote:
El 30/04/14 17:06, Christopher Baines escribió:
On 08/10/13 06:52, Christopher Baines wrote:
I have been looking at doing some work on Tor as part of my degree, and
more specifically, looking at Hidden Services. One of the issues where I
believe I might be able to make some progress, is the Hidden Service
Scaling issue as described here [1].

So, before I start trying to implement a prototype, I thought I would
set out my ideas here to check they are reasonable (I have also been
discussing this a bit on #tor-dev). The goal of this is two fold,  to
reduce the probability of failure of a hidden service and to increase
hidden service scalability.
Previous threads on this subject:
  https://lists.torproject.org/pipermail/tor-dev/2013-October/005556.html
  https://lists.torproject.org/pipermail/tor-dev/2013-October/005674.html

I have now implemented a prototype, for one possible design of how to
allow distribution in hidden services. While developing this, I also
made some modifications to chutney to allow for the tests I wanted to
write.

In short, I modified tor such that:
  - The services public key is used in the connection to introduction
points (a return to the state as of the v0 descriptor)
  - multiple connections for one service to an introduction point is
allowed (previously, existing were closed)
  - tor will check for a descriptor when it needs to establish all of its
introduction points, and connect to the ones in the descriptor (if it is
available)
  - Use a approach similar to the selection of the HSDir's for the
selection of new introduction points (instead of a random selection)an
taack involvin

        
  - Attempt to reconnect to an introduction point, if the connection
is lost
I appreciate your work since Hidden services are really bad. Hard to
reach ATM sometimes. But ... how you do this in details? Sorry but
walking over your sources could be challenging if I don't know the
original codebase you used and is gonna take more time than if I just
ask you. I also can't test as I don't have enough resources/know how/time.
In terms of the code, just when the circuit to an introduction point has
failed, try to establish another one. I am unsure if I have taken the
best approach in terms of code, but it does seem to work.

 I am worried about an attack coming from evil IP based on forced
disconnection of the HS from the IP. I don't know if this is possible
but I am worried that if you pick a new circuit randomly could be highly
problematic. Lets say I am NSA and I own 10% of the routers and
disconnecting your HS from an IP I control, if you select a new circuit
randomly, even if the probabilities are low, eventually is a matters of
time until I force you to use an specific circuit from those convenient
to me in order to have a possible circuit(out of many) that transfers
your original IP as metadata through cooperative routers I own and then
do away with the anonymity of the hidden service.
Yeah, that does appear plausible. Are there guard nodes used for hidden
service circuits (I forget)?
No idea, according to this docs https://www.torproject.org/docs/hidden-services.html.en there aren't guards in the circuits to the IP in step one(not mentioned). They are definitely used on step five to protect against a timing attack with a corrupt entry node.

 Even if they are used, I still see some problems. I mean it looks convenient to try to reconnect to the same IP but in real life you are going to find nodes that fail a lot so if you picked an IP that has bad connectivity reconnecting to it is not gonna contribute at all with the HS scalability or availability of your HS, on the contrary.

 Maybe a good idea would be to try to reconnect and if it is failing too much select another IP.

If the IP is doing it on purpose the HS Is going to go away so the control the IP has disconnecting your HS is capped for any attack known or unknown. If is not on purpose the HS goes throwing away failing nodes until it picks a good node as IP. I think it would cause over time, the tor network re-balance/readapt to new conditions itself. For instance in the case some IP is overloaded (maybe by DoS) causes the HS to go away from the IP.

I would also rotate the IPs after using them some time. I don't think is good to have one IP for too long. Doesn't sounds good to me. If for instance I am big daddy and know your IPs I could go there seize the computers and start gathering funny statistics about your HS. Or simply censor your HS by dropping messages from clients trying to send you the rendezvous point (is this possible? looks like it is if I drop introduce messages and generate fake ones). You wouldn't even know cause I can keep your connected and receiving fake connections. Only maybe if you try to check the IP by trying to send a rendezvous point from your HS to your HS (this IP quality test would be great if tor would do it periodically). I somehow do it myself manually  when I notice the HS is superhard to reach. Sometimes it works great, sometimes even being turned on the server and online, is not visible. So you have to take down tor and restart it and wait again for a while.

I was thinking maybe you could select new ones and inform HSDirs about the change and after the new ones are known end circuits to the previous IPs and with that avoid the overhead of the rotation.

 I would rebuild circuits to the IP from time to time (originating from the HS). Multiple connections to the same IP would permit to do this better since I can make a new one and afterwards kill a previous circuit remaining connected all the time.

In some previous messages about the subject I saw that HSDirs provide all the HS IPs. I don't like this way of doing things since let's say I have 6 IPs to my HS available to everyone. To cause a DoS to your HS seems to me all I have to do is cause a DoS to the IPs. And there is no need for everyone to know all the IPs of one HS all the time. All one user needs to connect is just some maybe for redundancy but not all.

Is there some way to only provide part of the IPs of one HS to one user? Avoid enumeration? Maybe distribute partial information to HSDirs? Don't know, just thinking. Maybe "abuse" some caching effect on HSDirs and publish partial IP information on one end and partial in another end that only reaches all users in entirety over time.


 The big question I have is what is the probability with current Tor
network size of this happening? If things are like I describe, is a
matter of seconds or thousand of years?
I am unsure. I implemented this, as it was quite probable when testing
with a small network using chutney. When testing the behaviour of the
network when an introduction point fails, you need to have reconnection,
otherwise instances which connect to other introduction points through
that failed introduction point, will also see those working introduction
points as failing. Leading to the instances using different introduction
points (what I was trying to avoid).




_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev

_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev