
[freehaven-dev] plausible deniability



Hal Finney posted to the mojonation list yesterday reminding me of an
interesting problem:

>Unfortunately, none of the services (to my knowledge) provide plausible
>deniability to server operators with respect to the question of whether
>they hold a particular piece of contraband data (identified by a URL
>or equivalent).  For efficiency reasons, a node has to know whether it
>is able to respond to a given request, and this same knowledge allows
>it to be coerced into refusing to respond to certain requests.
>
>Theoretically there are multi-party protocols whereby multiple nodes would
>work together to satisfy a request, with each individual node unable to
>tell which part, if any, it had played in providing the requested data.
>However such a system would be vastly more inefficient and costly than
>simply asking a node for the data.

This reminds me of a solution we kicked around a while ago. Here's a
simplified scenario:

There are shares out on the network; call them 1, 2, ..., 100. When
you want to publish a new document, you pick some subset of the shares
already present, grab them, XOR them together, XOR in your document,
and the result is a new share which you publish. Thus the "URI" is the
list of shares which, when XORed together, give you your document.
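
To make that concrete, here's a rough Python sketch of the publish
side. The in-memory store stands in for the network, and everything
here (the names, the fixed 64k share size, the zero-padding) is made
up for illustration, not a real protocol:

import random
import secrets

SHARE_LEN = 64 * 1024   # fixed share size; bigger files get chunked

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stand-in for "the network": share id -> share bytes,
# seeded with 100 random shares.
store = {i: secrets.token_bytes(SHARE_LEN) for i in range(1, 101)}

def publish(document: bytes, k: int = 3) -> list[int]:
    """Pick k existing shares, XOR them with the (padded) document,
    and publish the result as a new share.  The returned list of
    share ids is the 'URI': XOR those shares together and you get
    the document back.  Assumes len(document) <= SHARE_LEN."""
    padded = document.ljust(SHARE_LEN, b"\0")
    picked = random.sample(sorted(store), k)
    new_share = padded
    for sid in picked:
        new_share = xor(new_share, store[sid])
    new_id = max(store) + 1
    store[new_id] = new_share       # this is the only thing published
    return picked + [new_id]        # the URI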

Thus having one of these shares on your participating server really
doesn't implicate you in anything at all. They're just bits, ya know?
So having a share doesn't mean you have a piece of The Bad Document,
because that share is also a piece of 18 other documents, most of
them good.

So the first question is: does this scheme somehow provide 'more'
deniability than the schemes where you have a pile of bits (perhaps
encrypted as in Freenet, or otherwise obfuscated so it's hard to
identify directly) and you respond to queries for a document? I think
it might: since the client is the one requesting the shares and doing
the reconstruction, the server never learns which URI is being
requested. It simply serves the share, on behalf of all the different
documents that use it.
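
Here's the client side against the same toy store, just to show where
the knowledge lives: get_share() is all a server ever answers, and only
the client ever holds the URI. (The explicit length argument is a
stand-in for metadata that would travel with the URI.)

def get_share(sid: int) -> bytes:
    """Everything a server does: return the bits for one share id.
    It never sees a URI, so it can't tell which document (if any)
    a given request is part of."""
    return store[sid]

def retrieve(uri: list[int], length: int) -> bytes:
    """Client-side reconstruction: fetch every share named in the
    URI, XOR them together, and strip the padding."""
    doc = get_share(uri[0])
    for sid in uri[1:]:
        doc = xor(doc, get_share(sid))
    return doc[:length]

uri = publish(b"the document")
assert retrieve(uri, len(b"the document")) == b"the document"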

So the second question is: can we make this less brittle? I think the
answer is yes. There are relatively simple techniques, like "shares
come in triplets, where any two XOR to the third," that could tolerate
the loss of some shares; a sketch follows below. (There might be some
IDA-like approach to making it more efficient in terms of space.) We
handle large files by working on them in 64k (or whatever) chunks. I
imagine there are other approaches that would work if I thought about
it some more.
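
One way to read the triplet idea, as a sketch: encode each logical
share s as (s, r, s XOR r) for a random pad r, so any two surviving
members XOR to the missing third. That's 3x storage to survive one
loss, which is why an IDA-style scheme would be nicer space-wise.
Reusing xor() from the sketch above:

def make_triplet(s: bytes) -> tuple[bytes, bytes, bytes]:
    """Encode share s as the triplet (s, r, s ^ r) for random r.
    Any two members XOR to the third, so losing any single copy
    is recoverable."""
    r = secrets.token_bytes(len(s))
    return s, r, xor(s, r)

def recover_third(have_a: bytes, have_b: bytes) -> bytes:
    """Rebuild the missing triplet member from the two survivors."""
    return xor(have_a, have_b)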

It's late and I've been writing all day, so this might be totally
off-base. And I have the nagging feeling that somebody else has already
implemented this somewhere, and I don't remember where. But I wanted to
get it out there before I moved on to thinking about something else.

--Roger