
Re: Encrypting content of hidden service descriptors



Hi Karsten,

Great to see your project starting up!

First, I think you have the first paragraph the wrong way around. The
key is simply a hash of the .onion address (combined with cookies,
time stamps and more [1]). The .onion address is the only parameter
known to the hidden service's users, so it can be verified later when
the public key is to be confirmed. The .onion address is not even
known to the directory server.
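
To make the direction of the derivation concrete, here is a minimal
sketch in Python. It is not the exact construction from [1]: the inputs
(client cookie, time period) and the choice of SHA-1 are assumptions of
mine. The point is only that someone who already knows the .onion
address can recompute the key, while the directory server, which never
learns the address, cannot.

import hashlib

def descriptor_key(onion_address, client_cookie, time_period):
    # Illustrative only: hash the .onion address together with a
    # per-client cookie and the current time period. Anyone holding
    # these values can recompute the key; the directory server cannot.
    material = (onion_address.encode("ascii")
                + client_cookie
                + time_period.to_bytes(4, "big"))
    return hashlib.sha1(material).digest()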

The problem you are describing is mentioned in the paper and is related
to "the storage problem": there is no easy way to let anyone store an
encrypted (secret) string without also letting others store nonsense
(or whatever they want). Our proposed update scheme uses a reverse hash
chain, and as you point out, neglected updates are a problem if the
directory server lets non-updated entries time out after a short
period, e.g. during a longer interruption of service availability. This
vulnerability can be reduced by allowing longer storage of descriptors,
with the added storage problems that brings...
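
For concreteness, here is a minimal sketch of how such a reverse hash
chain could authorize updates; the function names and the choice of
SHA-1 are mine and not taken from the paper or from the Tor code:

import hashlib, os

def H(x):
    return hashlib.sha1(x).digest()

def make_chain(length):
    # Service side: start from a random seed and hash repeatedly.
    # The last element is the anchor stored with the first descriptor;
    # later updates reveal the chain in reverse order.
    chain = [os.urandom(20)]
    for _ in range(length):
        chain.append(H(chain[-1]))
    return chain

def directory_accepts(stored_value, revealed_value):
    # Directory side: an update is valid if hashing the newly revealed
    # value yields the value stored with the previous descriptor.
    return H(revealed_value) == stored_value

If the service skips a few periods while the stale entry is still
stored, the directory could hash a newly revealed value forward several
steps until it matches the stored one; once the entry has timed out the
anchor is lost, which is exactly the timeout problem above.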

The problem you describe has its own limitations, though. If the
service really is to be hidden from everyone except its users, as
described in the paper, then you have already lost if someone attacks
the descriptor, because the .onion address is supposed to be known only
to the (honest) users of the service. If some users turn bad, we
propose a way to distribute new .onion addresses, which could be a way
to handle the timeout problem as well.

But the storage problem is still real, and might be addressed by using
puzzles or other schemes to allow for long-term storage in the DHT/DS?
I have no answer to this (for now :), but maybe someone else does?
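
Just to illustrate what I mean by puzzles, here is a hypothetical
hashcash-style sketch; neither the paper nor Tor specifies anything
like this, and the difficulty value is arbitrary. It authenticates
nothing, it only makes flooding the directory with bogus descriptors
expensive:

import hashlib
from itertools import count

def leading_zero_bits(digest):
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def solve_puzzle(descriptor, difficulty=20):
    # Uploader: find a nonce so that SHA-1(descriptor || nonce) starts
    # with 'difficulty' zero bits (~2^difficulty hash operations).
    for nonce in count():
        digest = hashlib.sha1(descriptor + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify_puzzle(descriptor, nonce, difficulty=20):
    # Directory: one hash operation to check the submitted nonce.
    digest = hashlib.sha1(descriptor + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty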

Hope this answers some of your initial concerns even if the most
important one is still open.

Good luck on the summer of coding!

 - Lasse


[1] L. Øverlier and P. Syverson. Valet services: Improving hidden
servers with a personal touch. In Proceedings of the Sixth Workshop on
Privacy Enhancing Technologies (PET 2006), Cambridge, UK, June 2006.
Springer.


Karsten Loesing wrote:
> There have been proposals around to encrypt the content of hidden
> service descriptors, so that only authorized users can decrypt them.
> The decryption key would be given to authorized clients only. The
> onion address could then be calculated as a secure hash of this key.
> 
> Two reasons were given to do this: (1) An adversary (including an
> untrustworthy directory server) who does not know the decryption key
> cannot determine the introduction points included in the descriptor
> and thus cannot attack them. (2) An adversary (other than a directory
> server) cannot derive the onion address and therefore cannot infer a
> hidden service's online activity.
> 
> I think that this was originally proposed by Øverlier and Syverson in
> their Valet node paper and was listed on the Tor homepage as a
> possible extension.
> 
> I am not sure, but perhaps I have found a flaw in this scheme:
> 
> The encrypted descriptor looks like random data to anyone without the
> decryption key and thus cannot be validated by a directory server when
> it is stored. What if a hidden service is not available for some time
> and does not renew a previously stored descriptor? Someone else could
> store nonsense data as the descriptor for a hidden service whose onion
> address she knows. How can the directory deny storage of this
> descriptor without validating its origin? If the hidden service then
> wants to store the real descriptor, how can the directory decide
> whether to overwrite the old descriptor or not? And if the directory
> simply stores all descriptors, what prevents someone from storing a
> million nonsense descriptors before the real hidden service stores the
> real one?
> 
> On the other hand, if encryption is combined with a signature that
> everyone can verify, this discloses the hidden service's online activity
> to the public, doesn't it? This would break reason (2) mentioned above.
> 
> What do you think? Did I miss some important point?