
[freehaven-dev] Subliminal proofs of unknown work?

In a proof of work protocol, a client proves to a server that it has
invested a certain amount of resources by providing some "certificate of
wasted resources" to the server. For instance, this may take the form
of finding truncated hash collisions, as in hashcash. Another proof       
of work might consist of solving a Rivest-Shamir-Wagner puzzle; this
has the advantage of being hard to parallelize.
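
For concreteness, here's a rough Python sketch of a hashcash-style
puzzle (SHA-1, a leading-zero-bits target, and the names here are my
own illustration, not Adam Back's exact stamp format): the client
grinds nonces until the hash of the challenge plus nonce starts with k
zero bits, and the server checks the certificate with a single hash.

import hashlib

def solve(challenge: bytes, bits: int) -> int:
    # Client side: burn CPU until the top `bits` bits of the digest are zero.
    nonce = 0
    while True:
        digest = hashlib.sha1(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return nonce        # the certificate of wasted time
        nonce += 1

def verify(challenge: bytes, nonce: int, bits: int) -> bool:
    # Server side: one hash to check the certificate.
    digest = hashlib.sha1(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0

The client's expected work is about 2^bits hashes; verification is one.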

(Note that both of these are certificates of wasted time; are there
other resources for which you can provide succinct certificates
that you invested/burned a lot of the resource? For instance, a problem
which requires a lot of space to solve as well as a lot of time?)

Proofs of work have been proposed as a means of moderating resource
consumption [1, 2]. Roger's question is whether the work done in such
proofs can be re-used for some constructive purpose. The answer to this
seems to be in the affirmative, thanks to the Jakobsson-Juels paper[3].

Next question : is there a way to construct proofs of work in
which the client has no idea what kind of work it is doing?

Next question : is it possible to take a proof of work protocol and modify
it so that neither client nor server has any idea what kind of work
is being performed, but a third party can extract useful information from
the completed proofs of work? (and what would we call this, anyway? a
"proof of unknown work", an "oblivious proof of work", what?)

Next questions : can this be done in such a way that the server has no
idea that it has no idea what kind of work is being done? can we embed
"hidden" or "subliminal" proofs of work into an existing proof of work
protocol? (for now I'll call it a "subliminal proof of work").

There seems to be a connection to "cryptovirology" and "kleptography"
here[4]. Suppose that proofs of work catch on as a means of resource
rationing (for instance, RSADSI includes a puzzle protocol with all copies
of B-SAFE or some other popular toolkit). Now a virus invades the server
and replaces the proof of work code with one which has an embedded
subliminal proof of work. Now even if the server inspects itself or its
operators inspect the server, it will be unable to tell from the received
proofs of work that anything is wrong; it will be necessary instead to
notice that the code has been changed.  In the meantime, the subliminal
proof of work is being transmitted somewhere and used for some
application...

A possible application might be to launch an attack on a MicroMint[8]
scheme.  Another possible application might be to do distributed
keycracking.  Neither application requires such subliminal proofs (in
fact, people might be happy to participate in a MicroMint cracking scheme
if they received a portion of the proceeds), but this approach would
conceal the existence and extent of such an attack.

Note that you could still use computation aimed at breaking a MicroMint
scheme as a proof of work. All this subliminal stuff is simply so the
clients and servers can't look at the proofs of work going back and forth,
and thereby discover what's happening.

In addition, in a system where people are broadcasting proofs of work to
everyone, such as in Wei Dai's b-money[5], you might be able to use
subliminal proofs of work to "trade computation" without anyone knowing.
Say if there were a tax regime on b-money, and you wanted to create an
underground market. Well, maybe not (I'm not sure what a tax on b-money
would look like).

Right, so this would be a nice thing to do. But as my wrestling coach used
to say, "If wishes and buts were horses and nuts, oh but we'd have a
party."  We need some constructions.  Ideal would be a way to create a
hidden or subliminal proof of work for any arbitrary proof of work, not
necessarily of the same problem -- make a client think it is doing an
instance of problem A, when it is in fact working on an instance of
problem B, and the client can't determine what B is or the problem
instance from the work it is doing.

Let's start with something straightforward : computing g^x mod p as a
proof of wasted time.  A client is given (g, x, p), and told to come up
with g^x mod p.  Ignore for the moment that this is *not* a very good
proof of work, because the server doesn't have any obvious fast way to
check that it is correct, short of computing g^x mod p itself (there is a
program checker for modular exponentiation due to Adleman and Kompella[7], but I
don't understand it well enough right now to comment on its utility for a
proof of work).

Can the server hide the value of g^x it "really" wants from the client?
Yes: pick r at random with gcd(r, p-1) = 1 and give the client
(g, xr mod (p-1), p) instead of (g, x, p). The client has no way to know
what x is now, but it computes g^{xr} anyway. The server can take the r'th
root (raise to r^{-1} mod (p-1)) and obtain g^x. This has the drawback of
likely being more expensive than just computing g^x directly.
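
Here's a rough Python sketch of that blinding step (assuming g generates
Z_p^*, so that exponents can be reduced mod p-1; the function names are
just illustration):

import math, secrets

def blind_challenge(g, x, p):
    # Server side: hide x by multiplying with a random r invertible mod p-1.
    while True:
        r = secrets.randbelow(p - 1)
        if r > 1 and math.gcd(r, p - 1) == 1:
            break
    return (g, (x * r) % (p - 1), p), r

def client_work(challenge):
    # Client side: the "wasted" modular exponentiation.
    g, e, p = challenge
    return pow(g, e, p)

def unblind(result, r, p):
    # Server side: take the r'th root by raising to r^{-1} mod (p-1)
    # (pow with a negative exponent needs Python 3.8+).
    return pow(result, pow(r, -1, p - 1), p)

For example, with p = 23, g = 5, x = 7 the server recovers
pow(5, 7, 23) = 17 without the client ever seeing x = 7.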

Alternatively, give the client (g, x + r, p) and give some other client
(g, -r, p), with exponents taken mod p-1. Then you get g^{x + r} and g^{-r}
from the two clients and multiply them. For the server this is only
O(lg^2 p) steps (one modular multiplication) instead of the O(lg^3 p)
required to extract the r'th root. The drawback is that if the clients
collaborate, then x is exposed.
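
A sketch of the additive split, in the same illustrative style as above:

import secrets

def split_challenge(g, x, p):
    # Server side: one random share per client; neither share reveals x.
    r = secrets.randbelow(p - 1)
    return (g, (x + r) % (p - 1), p), (g, (-r) % (p - 1), p)

def combine(y1, y2, p):
    # Server side: g^{x+r} * g^{-r} = g^x, a single modular multiplication.
    return (y1 * y2) % p

Each client still does a full exponentiation (pow(g, e, p)); only the
server's recombination cost drops.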

A third party can use two non-collaborating compromised servers to compute
g^x such that neither one knows what g^x is. This works by making one
server give out challenges of the form (g, x+r, p), and another server
give out challenges of the form (g, -r, p). In addition, both triples
"look random" by themselves; no one looking at either alone will notice
that they've been cooked up. (Someone looking at the server's code, on the
other hand, would notice something...)

We can get these weak results for g^x mod p because this is a randomly
self-reducible problem, and so it's relatively easy to "hide instances."
Same idea as the blinding in blind signatures.

Unfortunately, these approaches seem to break down when we move to
something like SHA-1 or a block-cipher based hash function. Maybe you
could build a more general construction based on something like
cryptocomputing[6]; general-purpose obfuscators would have worked, but they
were recently shown not to exist.

Interestingly, Jakobsson and Juels note in their paper that the proof of
work protocol they use for MicroMint is "information hiding" -- it does
not reveal the secret value r used to key the hash function for each
month's coins. The client performing the proof of work knows that it is
working for MicroMint, but doesn't learn what this month's hash function
really is. The point of this is that an adversary posing as a client
cannot learn r and start trying to mint new coins before r is
officially released.

At first it seemed that after r is released, the adversary could use the
same protocol to give out proofs of work where the clients don't know what
they're doing; but this doesn't work because if a client guesses the r
value, it can check in this scheme to see if it's trying to find
collisions for that particular r (see p.12 of the paper; a client can
compute y = h(r || i) itself once r is known). Then the client can detect
if it's being used to mint this month's coins illicitly and sound an
alarm.
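
A hedged sketch of that detection test, assuming the challenges arrive
as (i, y_i) pairs and taking h to be SHA-1 with a naive encoding of
r || i (the paper's actual encoding may differ):

import hashlib

def was_minting(challenges, r: bytes) -> bool:
    # Once this month's r is published, check whether the y values we were
    # handed are exactly h(r || i); if so, we were minting coins all along.
    h = lambda m: hashlib.sha1(m).digest()
    return all(y == h(r + i.to_bytes(4, "big")) for i, y in challenges)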

any other uses for these things? ideas for constructions?

-David  


[1] Adam Back. hashcash. http://www.cypherspace.org/~adam/hashcash/

[2] A. Juels and J. Brainard. "Client Puzzles: A Cryptographic Defense
    Against Connection Depletion Attacks." NDSS '99.
    http://www.rsasecurity.com/rsalabs/staff/ajuels/papers/clientpuzzles.ps

[3] M. Jakobsson and A. Juels. "Proofs of Work and Bread Pudding Protocols."
    CMS '99.
    http://www.rsasecurity.com/rsalabs/staff/ajuels/papers/breadpudding.ps

[4] A. Young and M. Yung. "Cryptovirology: Extortion-Based Security Threats
    and Countermeasures." Proceedings of the 1996 IEEE Symposium on Security
    and Privacy, pp. 129-140, May 6-8, IEEE Computer Society Press, 1996.
    http://www.cs.columbia.edu/~ayoung/ieee96.ps
    See also http://www.cs.columbia.edu/~ayoung/papers.html

[5] Wei Dai. b-money. http://www.eskimo.com/~weidai/bmoney.txt

[6] Tomas Sander, Adam Young, and Moti Yung. "Non-Interactive
    CryptoComputing for NC^1." FOCS '99.
    http://www.computer.org/proceedings/focs/0409/04090554abs.htm

[7] Leonard Adleman and Kireeti Kompella. "Fast Checkers for Cryptography."
    CRYPTO '90. Temporarily up at
    http://www.hcs.harvard.edu/~dmolnar/checker.pdf

[8] Ronald Rivest and Adi Shamir. "PayWord and MicroMint: Two Simple
    Micropayment Schemes." RSA '96.
    http://heavenly.nj.nec.com/rivest-payword.html
    (theory.lcs.mit.edu not responding to http traffic right now?)