Re: [freehaven-dev] Why Free Haven is Hard (part N+1)
On 5 Apr, Roger Dingledine wrote:
> I've got some more observations about why the current Free Haven design
> is Hard. I should get them written down before they get pushed out of
> my memory.
> Reputation in p2p systems is traditionally about performance tracking --
> that node has performed k successful transactions, so since I believe in
> induction I'm more willing to work with him than some other guy with fewer
> successes on record. Indeed, in the case of short-term transactions, such
> as choosing mixes for a path through a mix-net, this approach should work
> quite well: all you need is some indication of who to pick, and you're
> on your way. You aren't risking long-term resources. But in environments
> where the transactions involve a serious exchange of resources, and that
> exchange doesn't happen instantaneously, this approach breaks down.
> Rather than performance tracking, it becomes all about risk management.
> It doesn't matter how many times he has succeeded in the past -- instead
> you need to consider how much he has to gain by leaving the system,
> and how much he has to gain by staying in the system. If he has more to
> gain by bailing, the rational game player would leave.
This is what I've been saying all along. Glad you're hopping aboard.
This doesn't have anything to do with Anderson's and those Stanford
folks' talks, ne? :)
> ...Which leads to pseudospoofing. If the barrier to entry is the same
> no matter how many nodes you sign up (and it seems that it has to be,
> because we don't know how many nodes somebody signs up), then you
> just factor that start-up cost to get back into the system when you're
> deciding whether to bail. Indeed, because the person you're dealing with
> may have lots of different outstanding contracts through many different
> seemingly-unrelated nodes, and because even the outstanding contracts
> with that one node are not published, it seems extremely hard to judge
> whether the entity you're considering will get tipped over the edge by
> your transaction.
I'm not sure this is relevant. Joining and leaving the system are not
necessarily all-or-nothing --- a player can choose to provide services
to some but not all other players. (Some of Mazieres' latest work seems
to be a step towards being able to ensure that it's all-or-nothing, but
in the reputations case it's not clear that that's obviously a good
thing. Whereas it's probably a good thing in Mazieres' test scenario.)
You're really only interested in whether the player finds it in his
best interest to play nice with you rather than to cheat you (e.g. by
quitting out in the middle of providing the service).
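To make that concrete, here's a minimal sketch of the incentive check in Python, with entirely made-up payoff numbers: a player keeps playing nice with you iff the discounted value of the ongoing relationship beats the one-shot gain from cheating. The function name and all parameters are hypothetical illustration, not anything from an actual design.

```python
def plays_nice(value_per_round, discount, cheat_gain):
    """Does the player prefer cooperating with you over cheating?

    value_per_round: what the player earns each round by cooperating
    discount: geometric discount factor in [0, 1) -- how much he
              values future rounds relative to the present one
    cheat_gain: the one-shot payoff from cheating (and losing you)
    """
    # Present value of cooperating forever: v + v*d + v*d^2 + ...
    # which sums to v / (1 - d).
    future_value = value_per_round / (1.0 - discount)
    return future_value > cheat_gain

# A patient player (discount 0.9) forgoes a 100-unit cheat,
# since 15 / 0.1 = 150 > 100:
print(plays_nice(15, 0.9, 100))   # True
# An impatient player (discount 0.5) takes the money and runs,
# since 15 / 0.5 = 30 < 100:
print(plays_nice(15, 0.5, 100))   # False
```

Note that this is per-relationship, which is the point: the same player can rationally cooperate with you while cheating someone else whose relationship is worth less to him.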
> Perhaps we can address the pseudospoofing issue by making sure that no
> single node has incentive to leave. Can we safely say that if no node an
> entity controls has incentive to leave, then the sum of the nodes also
> does not have incentive? That seems a tricky statement, but a very nice
> one if we can say it convincingly.
Again, I don't see that this is an interesting issue --- the game is a
game between players, not between nodes. Nodes are just names for
players in this context. Perhaps a concrete example would help
demonstrate to me why it would be nice to be able to make such a
statement.
> (But note that the long term
> build-up-a-reputation-and-then-spend-it-and-leave attack is still
> unsolved, and seems very hard to solve in a system where you can just
> join back as a new person afterwards.)
There's only one way to go about solving it. Make it such that building
up the reputation is more costly than anything the player could get out
of "spending it" (i.e. doing something that causes him to lose
reputation). OR, alternatively but NOT equivalently, make it such that
the benefit the player provides to the system in gaining the reputation is
greater than the harm he can do in "spending it". Note that:
1. The second of these is inherently better from a design perspective,
2. Again, I want to point out that they are NOT equivalent,
3. But they seem to be closely related nevertheless, and people tend
to treat them the same (largely because people's intuition is that
it's a zero-sum game, which it usually isn't),
4. Finally, a real solution will probably be a combination of the two,
or rather something in the space between them.
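The distinction between the two conditions can be sketched in Python. The predicates and numbers below are purely hypothetical, chosen to show a case where the conditions come apart:

```python
def attacker_deterred(build_cost, spend_gain):
    # Condition 1: building the reputation costs the attacker more
    # than anything he can get by "spending" it, so a rational
    # attacker never bothers.
    return build_cost > spend_gain

def system_protected(build_benefit, spend_harm):
    # Condition 2: the good the player does the system while earning
    # the reputation outweighs the harm he can do by spending it, so
    # the system comes out ahead even if he does attack.
    return build_benefit > spend_harm

# Not equivalent: serving files honestly may cost the attacker very
# little (cheap bandwidth) while still benefiting the system a lot.
# Here condition 2 holds but condition 1 fails:
print(attacker_deterred(build_cost=10, spend_gain=50))       # False
print(system_protected(build_benefit=80, spend_harm=50))     # True
```

The gap between `build_cost` and `build_benefit` is exactly the non-zero-sum part: what reputation-building costs the player need not equal what it's worth to the system.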
> We can address the one-node's-incentive problem by centralizing the
> system more. We choose a few trusted seeds. Contracts get registered
> at these seeds or their designees in the trust web, and you don't get
> paid your reputation capital unless a registered contract successfully
> completes. Maybe your payment is escrowed somewhere too. Because agents
> know more about the other contracts a node in question has, they can
> make more enlightened choices about whether that single node is scary.
Who's watching the watchers?
And this, of course, assumes that successful completion of contracts is
verifiable. A lot of good work (witness-related stuff, for example)
has to do with making contracts more verifiable. But, perhaps, in the
end, we will have to tackle the fact that there are unverifiable
transactions, and possibly also that "verifiable" ones are merely
superstructures laid on top of fundamentally unverifiable transactions.
(E.g. you can't always verify that the witnesses aren't lying.)
> ...Which leads to location pseudonymity. It doesn't exist. There
> aren't even any *designs* that are practical and would provide
> sufficient long-term protection for a trust bottleneck like the above
> contract-registration servers. Long-term nyms are a hard and ongoing
> research problem.
This sort of came out of the blue. I guess it makes sense in the
Freehaven framework, but I am interested in solving the hard reputation
problems in a simpler framework first... then applying them to more
complicated frameworks.
WORDS IN THE HEART CAN NOT BE TAKEN.
-- Terry Pratchett, "Feet of Clay"
Chris Laas: KB1DEM \ golem (617)868-4472 \ MIT LCS, SIPB, TOE