
Re: [freehaven-dev] plausible deniability

On Fri, Oct 27, 2000 at 03:31:23AM -0400, Roger Dingledine wrote:
> So having a share doesn't mean you have a piece of The Bad Document,
> because that share is also a piece of 18 other documents, most of
> them good.

Clarification, to maybe spin this a bit better:

Let's say there's a system in place that has a given share A, and this
share is part of maybe 400 other documents, all of them 'good' (legal,
morally pleasant, etc).
Share A happens to live on Alice's server, and Alice is happily serving
A to anyone who asks for it.

Now Mallory comes along with her evil document M, and calculates 
B = A xor M. At what point does share A become evil? Alice has no idea
that Mallory has done anything at all.
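To make the construction concrete, here's a minimal sketch of what Mallory does. The names and lengths are illustrative only, not anything Free Haven actually specifies:

```python
import os

def xor_bytes(x, y):
    """Bytewise xor of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

# An innocent share A, already published on Alice's server.
A = os.urandom(32)

# Mallory's document M, padded to the share length (illustrative padding).
M = b"the evil document".ljust(32, b"\x00")

# Mallory computes B so that A xor B reconstructs M.
# Alice never participates and learns nothing about M.
B = xor_bytes(A, M)

assert xor_bytes(A, B) == M
```

Note that because xor is symmetric, *any* fixed share A can be made a share of *any* document M this way, which is exactly why pinning "evil" on A is so hard.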

Does A become evil if Mallory doesn't publish B? I'd guess not. What
if she publishes B in the same system? I'd guess "maybe". What if she
publishes B in a different system? What if she publishes B and then
unpublishes it later? Is A still evil after the unpublishing?

> So the first question is: does this scheme somehow provide 'more'
> deniability than the schemes where you have a pile of bits (perhaps
> encrypted as in freenet or otherwise obfuscated so it's tough to identify
> it directly) and you respond to queries for a document? I think it
> might: since the client is the one requesting the shares and doing the
> reconstruction, the server does not know which uri is being requested. It
> simply serves the share for all the different documents that use it.
> So the second question is: can we make this less brittle? I think the

The obvious answer to this is to publish each share as a separate
document in a service like Freenet or Free Haven. These systems already
have accountability and robustness mechanisms in place.

The beauty of this is that the xor technique layers transparently on
top of any of these services: it can be done completely externally.
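To illustrate the "completely externally" point: the xor layer only needs the underlying service to store and fetch opaque blobs by key. The dict-backed `store` below is a stand-in for Freenet or Free Haven, not their real interfaces:

```python
import os

def xor_bytes(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

store = {}  # stand-in for the underlying publishing service

def publish(key, blob):
    store[key] = blob

def fetch(key):
    return store[key]

# An existing, innocently published share.
A = os.urandom(32)
publish("share-A", A)

# Publishing document M externally: only the complement share
# ever touches the service; M itself is never uploaded anywhere.
M = b"some document".ljust(32, b"\x00")
publish("share-B", xor_bytes(A, M))

# Retrieval is entirely client-side: servers just answer share
# requests and never learn which document is being reconstructed.
def retrieve(key1, key2):
    return xor_bytes(fetch(key1), fetch(key2))

assert retrieve("share-A", "share-B") == M
```

Since neither publishing nor retrieval asks anything special of the service, the accountability and robustness mechanisms those systems already have apply to each share unchanged.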

Still pondering,