
Re: [tor-talk] [monkeysphere] [Fwd: Why the Web of Trust Sucks]



Daniel Kahn Gillmor:
> [i'm not subscribed to tor-talk and can't afford the bandwidth to
> participate in another list, but i'd be happy to be Cc'ed on any
> responses to this thread]

Likewise, I am not on monkeysphere. Monkeysphere users: Please keep me
on Cc.

People on-list: Forgive the extensive quoting. I'm leaving as much as I
can in place to avoid removing dkg's comments from the record.
 
> I agree with the author that the phrase "the web of trust" is terrible,
> but maybe we differ on other specific topics.  in the original message
> (and elsewhere on the internet), the term "WoT" is used to cover a lot
> of ground, and it might be worth teasing apart the different pieces.  In
> particular, the term appears to cover at least:
> 
>  * OpenPGP keyserver networks in general
>  * the SKS keyserver network
>  * any multi-issuer certification mechanism
>  * the "strong set" of keys on the public keyservers
>  * GnuPG's default mechanism for calculating key+userID validity given a
> set of OpenPGP certifications
>
> I don't like the term WoT because i don't like the term "trust", which i
> think is abused in almost any place it is used in technology.  Setting
> aside "trust", the public or semi-public data we're working with in each
> of the above specifics is a network of identity assertions.  networks of
> identity assertions have their drawbacks, but they also have features
> that i haven't seen offered by any other major cryptographic
> infrastructure.  I don't think we can afford to throw out the baby with
> the bathwater, as it were.

Ok, let's forget "trust". Let's use "key authentication" instead from
now on.
 
> The original message appears to have a few (commonly-held) mistaken
> ideas about how the network of OpenPGP identity assertions is used in
> practice.  I think these mistakes cause the author to overstate some of
> his claims.

My objections are limited to the fully decentralized "Everyone is a
certifier" model, when used in *any* form.

For purposes of simplifying our discussion, let's pretend for the
moment I am only concerned about authenticating the mapping of keys to
email addresses. This is based on the assumption that most users are
interested in authenticating the keys for people they wish to
communicate with, but that they have never met in person.

> > 1. It leaks information.
> 
> this is true.  this information leakage also offers a counterpoint,
> which is global auditability.  There are some identity assertions that i
> would not want to publish globally, some that are useless without global
> publication, and some that i don't really care one way or another.
> Using the OpenPGP keyservers to publish any of the first class would be
> a real mistake.

It is possible to have global auditability without a social graph, and
without everyone being a certifier. Notary-based certification systems are
quite capable of this. The append-only log utilized by Certificate
Transparency is the hot new example of such a system, but there are
other ways of accomplishing the same basic idea.
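To make that concrete, here is a minimal sketch (toy Python, hypothetical names) of the append-only idea: each entry's hash covers the previous head, so publishing the head commits the certifier to its entire history, and any retroactive edit is detectable by an auditor who replays the log.

```python
import hashlib

def _h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class AppendOnlyLog:
    """Toy hash-chained log in the spirit of Certificate Transparency.

    Appending folds the payload into the running head, so the head
    commits to every prior entry; rewriting history changes the head.
    """
    def __init__(self):
        self.entries = []          # list of (payload, head_after_append)
        self.head = _h(b"empty-log")

    def append(self, payload: bytes) -> str:
        self.head = _h(self.head.encode() + payload)
        self.entries.append((payload, self.head))
        return self.head

    def audit(self) -> bool:
        """Recompute the whole chain; detects any in-place edit."""
        head = _h(b"empty-log")
        for payload, recorded in self.entries:
            head = _h(head.encode() + payload)
            if head != recorded:
                return False
        return head == self.head
```

Mirrors then only need to compare heads with each other to catch a certifier presenting different histories to different users. (A real design would use a Merkle tree for efficient inclusion proofs; a hash chain is the simplest thing that shows the property.)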

> > 2. It has many single points of failure.
> > 
> > Because by default GPG uses shortest-weighted paths to establish trust
> > in a key,
> 
> Unless the author is using the terms differently than i expect them to
> be used, this is not the case.  GnuPG does not "establish trust in a
> key" using shortest-weighted paths.  Rather, GnuPG considers any key
> that you have generated yourself on that instance "ultimately trusted",
> and it considers no other keys trusted unless you explicitly declare
> them trusted.  ("gpg --edit-key $KEYID trust")

If we can't automate the key authentication system, it will fail due to
user error in the overwhelming majority of cases.

> > Edward's GPG client has trust in a couple keys. It turns out that one of
> > his trusted keys, Bruce, has full trust in Roger's key (the compromised
> > key).
> 
> These trust statements are not typically published.  If they're
> published in machine-readable form, they're "trust signatures", which
> are *not* made by default by any tool i'm aware of, and i don't know of
> anyone who recommends making them.  While they're not as vanishingly
> rare in the wild as i would have hoped (more details to come about that
> from me eventually), they are really unlikely in general.  If Bruce
> isn't making public trust assertions about Roger's key, then Edward
> won't be compromised by it even if he trusts Bruce about Roger's identity.

My point was that hypothetical Bruce was publishing his authentication
signature on Roger's key, specifically so that people could use his
signature to verify Roger's key, and other keys that Roger certifies.

This transitive model is where the system breaks down. And yet without a
replacement or auxiliary mechanism, it does not scale beyond people who
have already met in person.
 
> > This scenario is possible against arbitrary keys using any of the high
> > degree keys in the Strong Set. They effectively function as single point
> > of failure CAs for the Web of Trust, which destroy its utility as an
> > independent key authentication mechanism.
> 
> This is only true if you're willing to fully trust arbitrary keys in the
> strong set.  By default, GnuPG does not do that; you have to explicitly
> make this bad decision.  OpenPGP itself (including the implementation
> present in GnuPG) provides a mechanism for people to assign levels of
> trust less than "full", which effectively means "i am willing to rely on
> corroborated certifications, but not single certifications".  This
> approach is designed specifically to counteract the possibility of a
> "single point of failure", when used sensibly.

My interpretation of you here is that "If the user lowers the trust level
of keys they import and/or tweaks other settings, then the adversary
would have to compromise more than just one key before enough paths
could be generated to meet the user's desired level of authentication."

This is not very comforting. It would still seem to be the case that the
adversary gets their choice of compromising the weakest keys out of a
pool of thousands (or hundreds of millions, if the system were capable
of supporting Internet-scale key authentication).

> By contrast, the X.509 infrastructure everyone uses today:
> 
>  * by default, "trusts" dozens of root authorities
>  * "trusts" those authorities to delegate further authorities, to some
> crazily arbitrary level of depth
>  * provides no mechanism for authorities to directly (or even
> indirectly) corroborate others' certifications, or for users to demand
> such corroboration

I am not arguing in favor of the CA model. Everyone knows that it sucks
too, I hope.

Here, I'm arguing against any certification system that requires a
social and/or meatspace meetup graph.

> > 3. It doesn't scale very well to the global population.
> > 
> > The amount of storage to maintain the Web of Trust for the whole world
> > would be immense. For the level of authentication it provides, it just
> > doesn't make sense to have this much storage involved.
> 
> I'm not convinced by this argument without seeing concrete figures for
> what the storage costs are.  The Certificate Transparency proposal
> ("CT") suggests that such a thing is doable at global scale for all
> X.509 certificates issued, which is pretty large scale.  the SKS
> keyserver network seems to be coping fine with the recent post-Snowden
> uptick in key creation/activity.

Ok, I will grant you that we can store the data. In fact, if it is
possible, there are way more interesting things we can do with that
storage than merely record the meatspace interactions of the users of
the system (which I'll get to later).
 
However, as the system scales, the percentage of the keyspace the
adversary has to compromise necessarily goes down, unless the software
increases the number of signature paths required for authentication in
proportion to the number of total users.

In other words, as the system scales, the adversary still only has to
compromise enough keys to make a few signature paths, whereas the
number of keys available to compromise has been growing considerably.
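A back-of-envelope calculation (illustrative parameters, not a threat model) makes the asymmetry explicit: the adversary's workload is fixed by the client's path requirement, while the pool of candidate keys grows with the network.

```python
def attacker_fraction(paths_required, path_length, total_keys):
    """Fraction of all keys an adversary must control to forge
    `paths_required` disjoint certification paths of `path_length`
    keys each, assuming the adversary freely picks the weakest
    keys in the pool. Toy model for illustration only."""
    return (paths_required * path_length) / total_keys

# Requiring 3 corroborating paths of 2 keys each:
for n in (100_000, 100_000_000):
    print(n, attacker_fraction(3, 2, n))
```

At 100k users the adversary needs 0.006% of the keyspace; at 100M users, 0.000006%. Unless the required path count grows with the network, scaling strictly helps the attacker.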

> > 1. Every time GPG downloads a new key, re-download it several times via
> > multiple Tor circuits to ensure you always get the same key.
> 
> That seems plausible.  It also leaks your key search activity multiple
> times, if you care about that sort of leakage.
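For what it's worth, the consistency check in (1) is simple to sketch. Here `fetch_via_new_circuit` is a placeholder for whatever actually retrieves the armored key over a fresh Tor circuit -- not a real API:

```python
def multipath_consistent(fetch_via_new_circuit, keyid, attempts=3):
    """Fetch the same key several times over (assumed) independent
    network paths and accept it only if every copy is byte-identical.
    `fetch_via_new_circuit` is a hypothetical callable taking a key ID
    and returning the key material as bytes."""
    copies = [fetch_via_new_circuit(keyid) for _ in range(attempts)]
    return all(c == copies[0] for c in copies)
```

This only defends against an attacker who controls some, but not all, of the paths (or the keyserver's view as seen from some exits); a compromised keyserver itself still answers consistently.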
> 
> > 2. Every time I verify a signature from a key sent to an email address
> > that is not mine (like a mailinglist), my mail client adds a tiny amount
> > of trust to that key (since each new public email+signature downloaded
> > represents an observation of the key via a potentially distinct network
> > path that should also be observed by multiple people, including the
> > sender).
> 
> i don't think "trust" is the term you're looking for here, at least not
> in the GnuPG context.  i think you're talking about a historical record
> that can be used to assess the validity of the user ID of each key.
> 
> I think this would be a really useful project to work on, though the
> nuances are subtle and not everyone would make the same tradeoffs.  I
> think it would be
> 
> > 3. Every time I am about to encrypt mail to a key, check the key servers
> > for that email address, download the key, and make sure it is still the
> > same (SSH/TOFU-style).
> 
> This is sort of the opposite of TOFU -- Where TOFU would trust the first
> key you've ever seen for an address, you're asking your MUA to fetch
> new/updated keys each time, and maybe prefer the most recent keys or
> something.
> 
> Regular keyring refreshes are a critical part of any use of OpenPGP,
> since the keyservers provide standard revocation and update
> infrastructure (among other useful features).
> 
> Also, note that real-time key refreshes upon every use leak a not
> insignificant amount of activity metadata to the keyservers and to
> anyone capable of monitoring the network path between the OpenPGP client
> and the keyservers.  This might not be
> 
> > 4. When downloading a key, GPG could verify that the same email to key
> > mapping exists on multiple key servers, with each key server
> > authenticated by an independent TLS key that is stored in the GPG source
> > code or packaging itself. (Perspectives/notary-style cryptographic
> > multipath authentication).
> 
> i think using the keyservers themselves to verify user IDs on keys is a
> bad idea in general.
> 
> Keyservers are not identity authorities, they are identity assertion
> publication spaces.  If you're looking for cryptographic verification of
> a user ID on a key, you should be looking at the OpenPGP certifications
> on that key, *not* to any particular keyservers.  By all means, use HKPS
> to verify and integrity-check the channels to your public keyservers
> (and/or use tor hidden services for keyservers, like
> qdigse2yzvuglcix.onion).

I am interested in alternate OpenPGP certification mechanisms. I admit
I'm not familiar with them, and I'm glad Isis chimed in with several
esoteric options. It is possible that the protocols to support what I
want already exist.

Here's another related option that may already be possible:

5. A "Certifying Keyserver" verifies that someone has control of both a
key and an email address. Such a key server could verify this fact by
requiring the user to respond with a signed email to an emailed nonce
encrypted to their submitted key.
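A toy model of that challenge-response state machine (Python; the OpenPGP encryption, signature verification, and email delivery are stubbed out, and all names are hypothetical):

```python
import os

class CertifyingKeyserver:
    """Sketch of option 5. The real system would encrypt the nonce to
    the submitted key, email it, and verify an OpenPGP signature on
    the response; here those steps are stubs to show the protocol."""
    def __init__(self):
        self.pending = {}    # email -> (keyid, nonce)
        self.certified = {}  # email -> keyid

    def begin(self, email, keyid):
        nonce = os.urandom(16)
        self.pending[email] = (keyid, nonce)
        # Real system: encrypt `nonce` to `keyid`, mail it to `email`.
        return nonce

    def complete(self, email, returned_nonce):
        # Real system: also verify the reply is signed by `keyid`.
        keyid, nonce = self.pending.pop(email, (None, None))
        if nonce is not None and returned_nonce == nonce:
            self.certified[email] = keyid
            return True
        return False
```

Each successful `complete` would then be appended to the keyserver's public, auditable log.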

This keyserver would have its own keys, and could publish its list of
authentication statements in an append-only, auditable, mirrored log
(Certificate Transparency style).

More than one certifier and audit log could exist, to provide redundancy
against compromise of the certifier itself.

> The use of multiple keyservers or paths to keyservers is useful, but it
> is only useful specifically to get around an attempt at denial of access
> to relevant data by an active network attacker.  This still isn't a
> complete solution, unless you are willing to "fail closed" upon being
> unable to reach the keyserver(s) in question.  Given our experience with
> OCSP implementations, i doubt many clients are going to be willing to
> make that tradeoff.

It depends on the mirroring mechanisms available, actually. If the log
is widely mirrored, the user needs only to find a way to reach an active
mirror.

> > ** The Web of Trust is technically capable of multipath authentication
> > by itself, but only if you are aware of all of the multiple paths that
> > *should* exist. Unfortunately, nothing authenticates the whole Web of
> > Trust in its entirety, so it is impossible to use it to reliably verify
> > that multiple paths to a key do actually exist and are valid.
> 
> i don't understand this footnote; I've never seen anyone claim that "the
> web of trust in its entirety" should be authenticated -- i don't even
> know what that means.  can you clarify?

Basically, for the current signature-based authentication mechanism to
work, the user needs to be sure that when they download a key and a set
of signature paths, they are seeing all of the available paths, rather
than a subset chosen/created by the adversary.


-- 
Mike Perry


-- 
tor-talk mailing list - tor-talk@xxxxxxxxxxxxxxxxxxxx
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk