[freehaven-dev] another look at free haven accountability

We should apply what we learned from writing 50 pages about accountability
to come up with a revised solution for accountability in Free Haven.

Let me try to summarize the situation very briefly. Somebody please
correct or elaborate on anything you feel needs it. This won't be as
fully thought out as I'd like it to be, but then this is just the first
round of brainstorming. Let's have at it. :)

Our main priorities are to provide
* anonymity on all facets
* reliable storage
* content-neutral storage

This means that we can't do things that would risk revealing identity,
create attackable bottlenecks, or compromise the content-neutral
aspect. Clients must be anonymous; we're willing to let servers be
pseudonymous, but it would be nice if we could make them fully (or at
least more) anonymous in the future. By this last phrase I mean that
though we might leave servers with a keypair that identifies them,
I would be very happy to move away from the 'reply block' addressing
mechanism if we can. (Alternatives include the meeting place notion,
where servers actively go somewhere (e.g. over the Freedom network)
and have a temporary address for that session.)

So with the current Free Haven design, we have shares that migrate around,
and we don't want to put the accountability burden on the publisher. I
guess I'm looking at accountability from several perspectives:
(when we're sanity-checking the accountability draft, we should see if
we adequately address all of these items)

1. we need to make sure people don't flood the system with data
2. we need to make sure people don't flood the system with requests
3. we need to make sure people don't drop data early

Replace 'people' with 'too many people' in each instance above -- some
abuse is ok as long as it doesn't cause loss of documents or 'too much'
degradation of service.

We need to provide these aspects of accountability while also providing
everything else we want to provide. Efficiency is conspicuously missing
from our requirements list, since this problem is so tough already.

General useful approaches we might learn from:

* Freenet caching. This is problematic because it doesn't also include
a guarantee that the document will remain alive 'somewhere'. Since the
choice to drop an unpopular file is a purely local decision, it's not
clear how to apply this to our situation.

* Mojonation micropayments. Micropayments should be a good way to achieve
#1 and #2, but I don't see that they solve #3. They also require either
a lot of implementation complexity or a centralized bank server (? Mike
-- tell me I'm wrong), which makes them trickier. (Also, I get the feeling
that Mojonation encourages loss of unpopular documents because they're the
first ones to get punted when you run out of space? On the other hand,
with enough storage space and enough users, it will be tough to 'flush'
data from servers. On the third hand, since an adversary to a specific
document knows the subsets of hashspace which that document's shares
went into, he can just go after those subsets. On the fourth hand, it's
tough for him to come up with shares which hash into the target subsets.)
It is clear that we should have some form of micropayment system to
protect against #2, since nothing else will do that. Perhaps it should
be folded into the communications channel?
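To make the "micropayment folded into the communications channel" idea
concrete, here's a strawman sketch in Python of a hashcash-style
proof-of-work payment: the requester burns CPU finding a nonce, and the
server checks it with a single hash. Nothing here is in the current
design -- the difficulty constant, the use of SHA-1, and the
request-encoding are all placeholders I made up for illustration:

```python
import hashlib
import itertools

DIFFICULTY_BITS = 12  # hypothetical; a real system would tune this


def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()  # partial zeros in the first nonzero byte
        break
    return bits


def mint(request: bytes) -> int:
    """Client side: search for a nonce whose hash meets the difficulty.
    Expected cost is about 2**DIFFICULTY_BITS hash operations."""
    for nonce in itertools.count():
        digest = hashlib.sha1(request + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce


def verify(request: bytes, nonce: int) -> bool:
    """Server side: one hash to check, cheap compared to minting.
    Binding the nonce to the request stops replay against other requests."""
    digest = hashlib.sha1(request + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS
```

The asymmetry (expensive to mint, cheap to verify, no bank server) is
what makes this shape attractive for #2, though unlike Mojonation-style
tokens it gives the server nothing it can spend in turn.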

* Free Haven reputations. I know how to do a scoring system for Free
Haven now. That is, given a bunch of messages from various servers, I
know how to crunch them into a prediction for a given server. This is
also tricky to implement well (given that nobody has done it yet). I
intend to delve further into the design of a scoring system for Free
Haven over the next few months, but that's more to see how complicated
it will actually become than because I expect it to be a solution on
its own. Reputations let us eventually stop dealing with pseudonyms
that abuse the system (addressing #1 and #3), but they don't provide
a mechanism for noticing when a server has dropped data.
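For flavor, here is one plausible shape for the "crunch messages into a
prediction" step -- emphatically not the actual scoring design, just a
minimal sketch assuming we already have per-rater credibility weights:

```python
def predict(ratings, credibility, default=0.5):
    """Combine ratings of a target server into one score in [0, 1].

    `ratings` maps rater -> that rater's rating of the target.
    `credibility` maps rater -> how much we trust that rater.
    Unknown raters get weight 0, so an adversary's sybils count
    for nothing until they earn credibility themselves.
    """
    total = sum(credibility.get(rater, 0.0) for rater in ratings)
    if total == 0:
        return default  # no trusted raters: fall back to a neutral prior
    return sum(rating * credibility.get(rater, 0.0)
               for rater, rating in ratings.items()) / total
```

The real design would also have to handle how credibility itself gets
updated over time, which is where most of the trickiness lives.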

Basically, it doesn't look like any of these addresses the need to
actually make sure the data stays put until we want it to expire.
One approach is to throw so many computers at it that of course it'll
still be there; but we haven't yet seen an implementation of that which
is obviously robust against targeted attacks from high-resource
adversaries. The other approach is to somehow monitor shares in the
network, so there is somebody to point at when they disappear. This is
what our buddy system was hoping to do. I believe that with some careful
design, looking for slip-ups and holes, we can build something which
"is likely to" catch abusers quickly enough that they can't cause loss
of entire documents -- *given* that there are no situations where
servers can deterministically and undetectably junk shares.
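To pin down what the buddy monitoring loop would look like, here's a
toy sketch. The pairing convention (shares 2k and 2k+1 are buddies) and
the `fetch`/`report_failure` interfaces are my inventions for
illustration; note how each share has to carry its buddy's address:

```python
class Share:
    """Hypothetical share record. In the buddy scheme each share
    carries a pointer to its partner's location."""

    def __init__(self, doc_id, index, buddy_address):
        self.doc_id = doc_id
        self.index = index
        self.buddy_address = buddy_address


def audit_buddies(shares, fetch, report_failure):
    """Periodically confirm that each local share's buddy is still alive.

    `fetch(address, doc_id, index)` returns True if the remote server
    can produce the buddy share; `report_failure` feeds the reputation
    system so the dropping server can be pointed at.
    """
    for share in shares:
        buddy_index = share.index ^ 1  # assume shares paired as 2k, 2k+1
        if not fetch(share.buddy_address, share.doc_id, buddy_index):
            report_failure(share.buddy_address, share.doc_id, buddy_index)
```

Even in this toy form the flaw below is visible: whoever holds a share
also holds `buddy_address`, so obtaining one share hands an adversary
the location of the other.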

But I think the buddy system had some fatal flaws, such as needing to
include information in a share about the location of its buddy -- thus
allowing an adversary to obtain both shares and then silently eat them.

So I guess this is still an open problem?

There are many threads in this mail. Feel free to break off responses
into different threads as necessary.