Re: Pseudonymity for tor: nym-0.1 (fwd)
---------- Forwarded message ----------
Date: Thu, 29 Sep 2005 23:32:24 +0000 (UTC)
From: Jason Holt <jason@xxxxxxxxxxxx>
To: Ian G <iang@xxxxxxxxxxxxx>
Cc: cryptography@xxxxxxxxxxxx
Subject: Re: Pseudonymity for tor: nym-0.1 (fwd)
On Thu, 29 Sep 2005, Ian G wrote:
> Couple of points of clarification - you mean here
> CA as certificate authority? Normally I've seen
> "Mint" as the term of art for the "center" in a
> blinded token issuing system, and I'm wondering
> what the relationship here is ... is this something
> in the 1990 paper?
Actually, it was just the closest paper at hand for what I was trying to do,
which is "nymous accounts", just as you say. So I probably shouldn't have
referred to "spending" at all.
My thinking is that if all Wikipedia is trying to do is enforce a low barrier
of pseudonymity (where we can shut off access to persons, based on a rough
assumption of scarce IPs or email addresses), a trivial blind signature system
should be easy to implement. No certs, no roles, no CRLs, just a simple
blindly issued token. And in fact, building one took me about 4 hours (while the
conversation on or-talk has been going on for several days...).
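To make that concrete, here's a rough sketch of the token flow in Python. This
is not the actual scripts, just textbook RSA blinding with toy choices (no
padding, no networking; the "cryptography" package is used only to generate the
CA key):

  # Hypothetical sketch, not the real nym scripts: textbook RSA blinding of a
  # hashed random token.  Key size and hash are arbitrary illustration choices.
  import hashlib, math, secrets
  from cryptography.hazmat.primitives.asymmetric import rsa

  # CA side: an ordinary RSA key.  d stays secret; (n, e) is published.
  ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  pub = ca_key.public_key().public_numbers()
  n, e, d = pub.n, pub.e, ca_key.private_numbers().d

  # Client side: pick a random token, hash it, and blind the hash.
  token = secrets.token_bytes(20)
  m = int.from_bytes(hashlib.sha256(token).digest(), "big")
  while True:
      r = secrets.randbelow(n)
      if r > 1 and math.gcd(r, n) == 1:
          break
  blinded = (m * pow(r, e, n)) % n       # this is all the CA ever sees

  # CA side: sign blindly, after checking the scarce resource
  # (e.g. one signature per IP address or email address).
  blind_sig = pow(blinded, d, n)

  # Client side: unblind.  The result is an ordinary RSA signature on m,
  # which any server holding (n, e) can verify.
  sig = (blind_sig * pow(r, -1, n)) % n
  assert pow(sig, e, n) == m

The CA never learns which signed token came from which request, which is the
whole point.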
There are two problems with what I wrote. First, the original system is
intended for cash rather than pseudonymity, and so relies only on the spender's
own disincentive to duplicate serial numbers (duplicating just gets you accused
of double spending); that's a problem here, because if an attacker sees you use
your token, he can get the same token signed for himself and besmirch your nym. And
second, it would be a pain to glue my scripts into an existing authentication
system.
Both problems are overcome if, instead of a random token, the client blinds the
hash of an X.509 client cert. Then the returned signature gives you a complete
client cert you can plug into your web browser (and which web servers can
easily demand). Of course, you can put anything you want in the cert, since
the servers know that my CA only certifies 1 bit of data about users (namely,
that they only get one cert per scarce resource). But the public key (and
verification mechanisms built into TLS) keeps abusers from being able to
pretend they're other users, since they won't have the users' private keys.
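As a hypothetical sketch of the cert variant (again, not the real scripts): the
client builds a throwaway self-signed cert just to get the to-be-signed bytes,
blinds their hash, and the CA signs blindly. A real X.509 signature pads a
DigestInfo structure (PKCS#1 v1.5) before exponentiation, and the raw signature
would still have to be spliced back into the DER; both steps are omitted here:

  # Hypothetical sketch of blinding the hash of an X.509 client cert.
  import datetime, hashlib, math, secrets
  from cryptography import x509
  from cryptography.x509.oid import NameOID
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa

  # CA key as before; only (n, e) is ever published.
  ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  pub = ca_key.public_key().public_numbers()
  n, e, d = pub.n, pub.e, ca_key.private_numbers().d

  # Client generates its own keypair and a throwaway self-signed cert just to
  # obtain the to-be-signed bytes.  The subject can say anything, since the CA
  # certifies only one bit: one cert per scarce resource.
  client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "anonymous nym")])
  now = datetime.datetime.utcnow()
  cert = (x509.CertificateBuilder()
          .subject_name(name).issuer_name(name)
          .public_key(client_key.public_key())
          .serial_number(x509.random_serial_number())
          .not_valid_before(now)
          .not_valid_after(now + datetime.timedelta(days=365))
          .sign(client_key, hashes.SHA256()))

  # Hash of the to-be-signed portion -- this is what the CA's signature covers.
  m = int.from_bytes(hashlib.sha256(cert.tbs_certificate_bytes).digest(), "big")

  # Blind, get the CA's signature, unblind (same arithmetic as the token case).
  while True:
      r = secrets.randbelow(n)
      if r > 1 and math.gcd(r, n) == 1:
          break
  blinded = (m * pow(r, e, n)) % n   # the CA never sees the cert itself
  blind_sig = pow(blinded, d, n)
  sig = (blind_sig * pow(r, -1, n)) % n
  assert pow(sig, e, n) == m         # a valid CA signature over the cert hash

With that signature attached, the result is a client cert the browser can
present over TLS, and an attacker who replays it still fails the handshake
without the client's private key.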
<rant>
The frustrating part about this is the same reason why I'm getting out of the
credential research business. People have solved this problem before (although
I didn't know of any Free solutions; ADDS and SOX are hard to google -- are
they Free?). I even came up with at least a proof of concept in an afternoon.
And yet the argument on the list went on and on, /without even an
acknowledgement of my solution/. Everybody just kept debating the definitions
of anonymity and identity, and accusing each other of anarchy and tyranny. We
go round and round when we talk about authentication systems, but never get off
the merry-go-round.
Contrast that with Debevec's work at Berkeley; Ph.D. in 1996 on "virtual
cinematography", then The Matrix comes out in 1999 using his techniques and
revolutionizes action movies. Sure, graphics is easier because it doesn't
require everyone to agree on an /infrastructure/, but then, neither does the
tor/wikipedia problem. I'm grateful for guys like Roger Dingledine and Phil
Zimmermann who actually make a difference with a privacy system, but they seem
to be the exception, rather than the rule.
</rant>
So thanks for at least taking notice.
-J