
Re: nym-0.2 released (fwd)




On Sun, 2 Oct 2005, cyphrpunk wrote:
> 1. Limiting token requests by IP doesn't work in today's internet. Most

Hopeless negativism. I limit by IP because that's what Wikipedia is already doing. Sure, hashcash would be easy to add, and I looked into it just last night. Of course, as several have observed, hashcash also leads to whack-a-mole problems, and the abuser doesn't even have to be savvy enough to change IPs.


Why aren't digital credential systems more widespread? As has been suggested here and elsewhere at great length, they take too much infrastructure. It's too easy, when writing a security paper, to call swaths of CAs into existence with the stroke of a pen, and to assume that any moment now people will start carrying around digital driver's licenses and social security cards (issued in the researcher's pet format), which they'll be happy to show the local library in exchange for a digital library card.

That's why I'm so optimistic about nym. A reasonable number of Tor users, a technically inclined group of people on average, want to access a single major site. That site isn't selling ICBMs; they mostly want people to have access anyway. They have an imperfect rationing system based on IPs. The resource is cheap, the policy is simple, and the user needs to conceal a single attribute about herself. There's a simple mathematical solution that yields certificates which are already supported by existing software. That, my friend, is a problem we can solve.
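For readers who haven't seen it, the "simple mathematical solution" here is blind RSA signatures. A toy round trip, with parameters that are purely illustrative (and far too small for real use) rather than anything from the nym code:

```python
# Toy blind-RSA-signature round trip. All numbers are illustrative;
# do not use parameters this small for anything real.
import hashlib
from math import gcd

# Tiny RSA key: n = p*q, public exponent e, private exponent d.
p, q = 10007, 10009
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)

m = int.from_bytes(hashlib.sha1(b"token-request").digest(), "big") % n

# Client blinds m with a random factor b: m' = m * b^e mod n.
b = 12345
assert gcd(b, n) == 1
m_blind = (m * pow(b, e, n)) % n

# Signer sees only m' (which tells it nothing about m) and signs it.
s_blind = pow(m_blind, d, n)

# Client unblinds: s = s' * b^-1 mod n is a valid signature on m.
s = (s_blind * pow(b, -1, n)) % n
assert pow(s, e, n) == m % n  # verifies as an ordinary RSA signature
```

The point is that the signer certifies exactly one attribute (this requester passed the rationing check) while learning nothing that links the final certificate back to the request.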


> I suggest a proof of work system a la hashcash. You don't have to use
> that directly, just require the token request to be accompanied by a
> value whose sha1 hash starts with say 32 bits of zeros (and record
> those to avoid reuse).

I like the idea of requiring combinations of scarce resources. It's definitely on the wishlist for future releases. Captchas could be integrated as well.
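For concreteness, a hashcash-style check along the lines suggested might look like this; the zero-bit count, the `seen` set, and all the names are illustrative, not nym's actual interface:

```python
# Hashcash-style proof-of-work check (sketch only). The server
# accepts a value iff sha1(value) starts with `bits` zero bits and
# the value hasn't been presented before.
import hashlib

seen = set()  # in nym this would live in the server-side database

def pow_valid(value: bytes, bits: int) -> bool:
    if value in seen:
        return False  # replay
    digest_int = int.from_bytes(hashlib.sha1(value).digest(), "big")
    if digest_int >> (160 - bits) != 0:
        return False  # not enough leading zero bits
    seen.add(value)
    return True

# Client side: grind nonces until the hash has enough leading zeros.
def mint(bits: int) -> bytes:
    i = 0
    while True:
        candidate = str(i).encode()
        digest_int = int.from_bytes(hashlib.sha1(candidate).digest(), "big")
        if digest_int >> (160 - bits) == 0:
            return candidate
        i += 1

token = mint(16)                 # ~2^16 hashes of work on average
assert pow_valid(token, 16)
assert not pow_valid(token, 16)  # replay is rejected
```

Recording spent values, as cyphrpunk notes, is what keeps one expensive hash from buying unlimited tokens.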



> 2. The token reuse detection in signcert.cgi is flawed. Leading zeros
> can be added to r which will cause it to miss the saved value in the
> database, while still producing the same rbinary value and so allowing
> a token to be reused arbitrarily many times.

Thanks for pointing that out! Shouldn't be hard to fix.


> 3. signer.cgi attempts to test that the value being signed is > 2^512.
> This test is ineffective because the client is blinding his values. He
> can get a signature on, say, the value 2, and you can't stop him.
>
> 4. Your token construction, sign(sha1(r)), is weak. sha1(r) is only
> 160 bits which could allow a smooth-value attack. This involves
> getting signatures on all the small primes up to some limit k, then
> looking for an r such that sha1(r) factors over those small primes
> (i.e. is k-smooth). For k = 2^14 this requires getting less than 2000
> signatures on small primes, and then approximately one in 2^40 160-bit
> values will be smooth. With a few thousand more signatures the work
> value drops even lower.

Oh, I think I see. The k-smooth sha1(r) values then become "bonus" tokens, so we use an h() large enough that the result is too hard to factor (or, I suppose, we could make the client present properly PKCS-padded preimages). I'll do some more reading, but I think that makes sense. Thanks!
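The attack hinges on textbook RSA being multiplicative: sign(a) * sign(b) = sign(a*b) mod n, so legitimately obtained signatures on small primes multiply into a signature on any value that is smooth over those primes. A toy demonstration (parameters illustrative, not the nym key):

```python
# Why sign(sha1(r)) is forgeable: unpadded RSA signatures on small
# primes combine into a signature on any smooth value. Toy key only.
p, q = 10007, 10009
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def sign(m: int) -> int:
    """The signer, acting as an oracle on raw values."""
    return pow(m, d, n)

# Attacker obtains legitimate signatures on the small primes 2, 3, 5...
s2, s3, s5 = sign(2), sign(3), sign(5)

# ...and multiplies them into an unrequested signature on 2*3*5 = 30.
forged = (s2 * s3 * s5) % n
assert pow(forged, e, n) == 30  # verifies, yet 30 was never signed
```

PKCS#1-style padding (or an equivalently wide hash) destroys this multiplicative structure, which is why the padded-preimage fix closes the hole.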


						-J