[tor-dev] Key revocation in Next Generation Hidden Services



To achieve offline key storage in the new HS design, hidden services
use three layers of keys:

(Skip the next three paragraphs and their sketches if you know this stuff)

Each hidden service has a "long-term master identity key". This is the
key that is encoded in its onion address.
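
For concreteness, here is a tiny Python sketch of this kind of
encoding. The checksum construction, hash function, and version byte
are assumptions for illustration, not the draft spec's exact format:

    import base64
    import hashlib

    VERSION = b"\x03"  # assumed version byte, not from the spec

    def onion_address(identity_pubkey: bytes) -> str:
        # Pack the 32-byte master public key with a short checksum so
        # clients can catch typos in the address.
        checksum = hashlib.sha3_256(
            b".onion checksum" + identity_pubkey + VERSION).digest()[:2]
        packed = identity_pubkey + checksum + VERSION
        return base64.b32encode(packed).decode("ascii").lower() + ".onion"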

Using the long-term identity key, the hidden service generates
"ephemeral blinded signing" keys according to #8106. Each blinded key
is valid for a short period of time (probably a day or so; not
decided yet).
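
To show the shape of the #8106 construction, here is a rough sketch
of deriving a per-period blinding factor. The string constants and
hash inputs are placeholders I made up, and the actual blinded key
computation (multiplying the master key by this factor on the Ed25519
curve) is omitted:

    import hashlib

    def blinding_factor(master_pubkey: bytes,
                        period_number: int,
                        period_length: int) -> bytes:
        # Hash the master public key together with the current time
        # period to get a deterministic per-period value h; the real
        # scheme then computes the blinded key as h*A on Ed25519.
        nonce = (b"key-blind"
                 + period_number.to_bytes(8, "big")
                 + period_length.to_bytes(8, "big"))
        return hashlib.sha3_256(
            b"Derive temporary signing key" + master_pubkey + nonce).digest()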

When the hidden service needs to publish a new HS descriptor, it
generates a "descriptor signing keypair" and certifies its public key
with the blinded signing key. It then includes in the descriptor the
public part of the descriptor signing key as well as its signature by
the blinded signing key. Finally, it signs the descriptor with the
private part of the descriptor signing key.
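
Here is a minimal sketch of that certification chain using the
pyca/cryptography Ed25519 API. The blinded key is modeled as an
ordinary keypair (the real #8106 blinding is not shown), and the
descriptor layout is a made-up stand-in:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey)
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    def raw_pub(key):
        return key.public_key().public_bytes(Encoding.Raw,
                                             PublicFormat.Raw)

    blinded_signing_key = Ed25519PrivateKey.generate()  # stand-in
    desc_signing_key = Ed25519PrivateKey.generate()

    # The blinded key certifies the descriptor signing key...
    desc_key_cert = blinded_signing_key.sign(raw_pub(desc_signing_key))

    # ...and the descriptor signing key signs the descriptor itself.
    descriptor_body = b"intro points, encryption keys, etc."
    descriptor = {
        "signing-key": raw_pub(desc_signing_key),
        "signing-key-cert": desc_key_cert,
        "body": descriptor_body,
        "signature": desc_signing_key.sign(descriptor_body),
    }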

As described in the spec [0], this allows the HS to generate a batch
of blinded signing keys and descriptor signing keys offline, copy
them to the online HS host, and let that host use them for the next
few days. As a result, an attacker who compromises the online HS host
only gets access to the keys for the next few days; the long-term
identity key stays safely offline.
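
A sketch of that provisioning step, continuing the simplified model
above (fresh keypairs certified by the master key stand in for real
#8106 blinding):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey)
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    def raw_pub(key):
        return key.public_key().public_bytes(Encoding.Raw,
                                             PublicFormat.Raw)

    def provision(master_key, periods=3):
        # One blinded signing key (plus certificate) per upcoming time
        # period; only this bundle goes to the online host, while the
        # master key never leaves the offline machine.
        bundle = []
        for _ in range(periods):
            blinded = Ed25519PrivateKey.generate()  # stand-in
            bundle.append({
                "blinded-key": blinded,
                "blinded-cert": master_key.sign(raw_pub(blinded)),
            })
        return bundle

    offline_master = Ed25519PrivateKey.generate()
    online_bundle = provision(offline_master)  # copy this, not master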

Of course, this doesn't solve the problem of how hidden services can
revoke compromised keys. Having the attacker impersonate the HS even
for a few days is not acceptable. Unfortunately, PKI revocation
solutions are always messy and don't work very well in practice (look
at SSL's OCSP and CRLs).

The question becomes how the legitimate Hidden Service can inform
clients that its keys have been compromised. A client that connects
to a Hidden Service first fetches the consensus from the directory
system, then fetches the HS descriptor from the HSDirs, and finally
connects to the HS. This probably means that the client should be
informed of the compromise either by the directory system or by the
HSDirs.

If we wanted to use the directory system, we could imagine some sort
of CRL scheme where hidden services notify the authorities about
compromised keys, the authorities pass the CRL to the directory
servers, and the directory servers finally give the CRL to HS clients
(a rough sketch follows the list below). This system might be
theoretically possible, but it poses many engineering and security
questions:
* Are users supposed to fetch the CRL every time they connect to a HS?
  This might be dangerously identifying behavior.
* What happens if the CRL gets too big because the adversary fills it
  with fake revocations? The directory system can't handle big
  documents, so there will need to be some sort of size limit (and old
  entries will need to be expired frequently).
* The list of compromised keys suddenly becomes public
  information. This might not be a good idea.
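
To make the size-limit and expiry concerns concrete, here is a
hypothetical shape for such a CRL on the client side; every field
name and limit below is made up for illustration:

    import time

    MAX_CRL_ENTRIES = 1000      # directory documents must stay small
    ENTRY_LIFETIME = 7 * 86400  # drop old entries instead of keeping
                                # them forever

    def prune_crl(entries):
        # Expire stale revocations, then enforce a hard size cap so an
        # adversary can't bloat the consensus with fake entries.
        now = time.time()
        live = [e for e in entries
                if now - e["revoked-at"] < ENTRY_LIFETIME]
        return live[-MAX_CRL_ENTRIES:]

    def is_revoked(entries, blinded_pubkey):
        return any(e["blinded-key"] == blinded_pubkey for e in entries)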

On the other hand, if we wanted to use the HSDirs, we could imagine
the HS sending some sort of revocation message to the responsible
HSDirs so that they stop serving descriptors with compromised
keys. Unfortunately, this scheme treats HSDirs as trusted parties,
since they can simply ignore the revocation and continue passing the
evil descriptor to clients. We could decrease the chance of this
happening by implementing #8244 and also having clients fetch
descriptors from multiple HSDirs. This forces the attacker to corrupt
multiple HSDirs for the attack to succeed. Still, the solution is not
very elegant.
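
A sketch of the client-side cross-check, assuming the client has
already fetched (possibly differing) descriptor blobs from several
responsible HSDirs; the quorum rule is my own strawman, and a real
client would of course also verify the signatures:

    from collections import Counter

    def pick_descriptor(fetched, quorum=2):
        # `fetched` maps each HSDir to the raw descriptor bytes it
        # returned, or None on failure. Requiring `quorum` HSDirs to
        # agree forces an attacker to corrupt several of them.
        counts = Counter(d for d in fetched.values() if d is not None)
        if not counts:
            return None
        descriptor, votes = counts.most_common(1)[0]
        return descriptor if votes >= quorum else None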

What else should we consider?

For example (crazy ideas ahead), could we solve this problem more
elegantly by adding more layers of keys? Or maybe by adding yet
another network entity that handles revocations [like in the Dan
Boneh et al. paper "A Method for Fast Revocation of Public Key
Certificates and Security Capabilities" (be aware of crazy crypto)]?
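
For flavor, here is a toy of the mediated-signature idea from that
paper: the RSA private exponent is split between the service and a
mediator, so neither can sign alone and revocation amounts to the
mediator refusing to contribute its half. Textbook RSA without
padding, purely illustrative:

    import secrets
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    nums = key.private_numbers()
    n, d = nums.public_numbers.n, nums.d
    lam = (nums.p - 1) * (nums.q - 1)   # a multiple of the group order

    d_service = secrets.randbelow(lam)
    d_mediator = (d - d_service) % lam  # shares sum to d mod lam

    msg = 12345                         # toy message, no hashing/padding
    part_service = pow(msg, d_service, n)
    part_mediator = pow(msg, d_mediator, n)  # withheld once revoked
    signature = (part_service * part_mediator) % n

    assert pow(signature, 65537, n) == msg  # verifies under public key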

Or maybe we could look into certificate transparency; although
transparency for centralized PKIs (like the SSL CA system) sounds
like a good idea, transparency for decentralized PKI systems with
complex privacy threat models might not be.

[0]: https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/224-rend-spec-ng.txt#l456