
[freehaven-dev] Why Free Haven is Hard (part N+1)

I've got some more observations about why the current Free Haven design
is Hard. I should get them written down before they get pushed out of
my memory.

Reputation in p2p systems is traditionally about performance tracking --
that node has performed k successful transactions, so since I believe in
induction I'm more willing to work with him than some other guy with fewer
successes on record. Indeed, in the case of short-term transactions, such
as choosing mixes for a path through a mix-net, this approach should work
quite well: all you need is some indication of who to pick, and you're
on your way. You aren't risking long-term resources. But in environments
where the transactions involve a serious exchange of resources, and that
exchange doesn't happen instantaneously, this approach breaks down.

Rather than performance tracking, it becomes all about risk management.
It doesn't matter how many times he has succeeded in the past -- instead
you need to consider how much he has to gain by leaving the system,
and how much he has to gain by staying in the system. If he has more to
gain by bailing, the rational game player would leave.
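As a toy sketch (the quantities and function name here are hypothetical, not anything the Free Haven protocol actually measures), the rational player's decision reduces to a single comparison:

```python
# Toy model of the leave-vs-stay decision. The inputs are hypothetical
# stand-ins for "what he grabs by bailing" and "what his reputation is
# worth to him if he stays".

def should_defect(gain_from_leaving, gain_from_staying):
    """A purely rational player bails when leaving pays more."""
    return gain_from_leaving > gain_from_staying

# A node entrusted with 100 units of others' resources, against an
# expected 60 units of future reputation-driven business:
should_defect(gain_from_leaving=100, gain_from_staying=60)  # True
```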

...Which leads to pseudospoofing. If the barrier to entry is the same
no matter how many nodes you sign up (and it seems it has to be, since
we can't tell how many nodes a given person has signed up), then you
just factor in that start-up cost, the price of getting back into the
system, when deciding whether to bail. Indeed, because the person
you're dealing with
may have lots of different outstanding contracts through many different
seemingly-unrelated nodes, and because even the outstanding contracts
with that one node are not published, it seems extremely hard to judge
whether the entity you're considering will get tipped over the edge by
your transaction.
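Continuing the toy model with made-up numbers (nothing here is part of the actual design), a cheap re-entry barely dents the gain from bailing:

```python
# Toy model: fold the one-time cost of rejoining under a fresh nym into
# the defection calculus. All quantities are hypothetical.

def worth_bailing(contracts_held, future_value, startup_cost):
    # Bailing: keep the resources entrusted under outstanding contracts,
    # minus the cost of signing back up as a "new" node.
    gain_from_leaving = contracts_held - startup_cost
    # Staying: the future business the current reputation should bring.
    return gain_from_leaving > future_value

# With cheap re-entry, modest outstanding contracts tip the scales:
worth_bailing(contracts_held=100, future_value=60, startup_cost=10)  # True
# A steep barrier to entry changes the answer:
worth_bailing(contracts_held=100, future_value=60, startup_cost=50)  # False
```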

Perhaps we can address the pseudospoofing issue by making sure that no
single node has incentive to leave. Can we safely say that if no
individual node an entity controls has incentive to leave, then the
entity as a whole has none either? That seems a tricky statement, but
a very nice one if we can say it convincingly.
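One reason the statement is tricky: as noted above, the start-up cost is paid per entity, not per node, so per-node reasoning can miss an aggregate incentive. A toy illustration with made-up numbers:

```python
# Toy counterexample: each node, judged alone, has no incentive to bail,
# but the controlling entity does, because it pays the re-entry cost
# once for all of its nodes. Numbers are hypothetical.

STARTUP_COST = 50          # one-time cost for an entity to re-enter
node_gains = [20, 20, 20]  # what each controlled node grabs by bailing

# Judged one node at a time, nobody has incentive to leave:
per_node_safe = all(gain < STARTUP_COST for gain in node_gains)

# But the entity bails with all three at once and pays the cost once:
entity_gain = sum(node_gains) - STARTUP_COST  # 60 - 50 = 10 > 0
```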

(But note that the long term
build-up-a-reputation-and-then-spend-it-and-leave attack is still
unsolved, and seems very hard to solve in a system where you can just
join back as a new person afterwards.)

We can address the one-node's-incentive problem by centralizing the
system more. We choose a few trusted seeds. Contracts get registered
at these seeds or their designees in the trust web, and you don't get
paid your reputation capital unless a registered contract successfully
completes. Maybe your payment is escrowed somewhere too. Because agents
know more about the other contracts a given node has outstanding, they
can make more enlightened choices about whether that single node is scary.
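A minimal sketch of what such a registration seed might track, assuming reputation capital is escrowed per contract and released only on completion; every class and method name here is invented for illustration:

```python
# Hypothetical contract-registration seed: contracts are registered
# before they start, reputation capital is held in escrow, and agents
# can query how much a node stands to lose by bailing right now.

class ContractRegistry:
    def __init__(self):
        self.outstanding = {}  # contract_id -> (node, escrowed reputation)
        self.reputation = {}   # node -> reputation capital paid out

    def register(self, contract_id, node, escrow):
        self.outstanding[contract_id] = (node, escrow)

    def complete(self, contract_id):
        # Release the escrowed reputation only on successful completion.
        node, escrow = self.outstanding.pop(contract_id)
        self.reputation[node] = self.reputation.get(node, 0) + escrow

    def load(self, node):
        # Total escrow a node forfeits by bailing: the information agents
        # need to judge whether that single node is scary.
        return sum(e for n, e in self.outstanding.values() if n == node)
```

For example, a node with two registered contracts of escrow 10 and 5 shows a load of 15; completing the first pays out 10 of reputation capital and drops the load to 5.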

...Which leads to location pseudonymity. It doesn't exist. There
aren't even any *designs* that are practical and would provide
sufficient long-term protection for a trust bottleneck like the above
contract-registration servers. Long-term nyms are a hard and ongoing