[freehaven-dev] load dependence & free haven

I ran the model past Prof. Mitzenmacher here. He pointed out
that as I've stated it, this seems to be a load-dependent model. 

That is, if each node picks uniformly at random from its store of shares,
then the more shares it has (higher load), the less likely it is that any
given share gets traded. For example, we could imagine a case in which a
node holds *all* the shares of a particular file, but also holds 1,000,000
other shares. In that case the shares of that file will stay on this node
for a very long time, since each trade has only about a one-in-a-million
chance of picking any particular one of them.
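
To make that concrete, here's a rough Python sketch (not Free Haven code;
the store sizes and trial counts are made up for illustration). It counts
how many one-share-per-round trades happen before one particular share is
the one picked; the average grows roughly linearly with the store size:

import random

def rounds_until_traded(store_size):
    """Trade one uniformly-random share per round; count the rounds
    until the share we care about (index 0) happens to be picked."""
    rounds = 0
    while True:
        rounds += 1
        if random.randrange(store_size) == 0:
            return rounds

def average_rounds(store_size, trials=200):
    return sum(rounds_until_traded(store_size)
               for _ in range(trials)) / trials

for n in (10, 100, 1000):
    print("store of %4d shares: ~%4.0f rounds before a given share moves"
          % (n, average_rounds(n)))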

He speculated that something along the lines of "You have 10 gigs and I
have 5 gigs, let's trade 2 gigs of stuff", "let's trade some percentage of
our databases", or simply allowing a lot of shares to be traded in one
step might be less load-dependent. We weren't able to come up with
anything conclusive one way or the other.
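
Along the same lines, here's a similarly rough sketch comparing the
one-share-per-round rule with a "trade a fixed percentage of the store
each round" rule (the 1% figure and the store sizes are invented, not
from anything we've specified). Under the percentage rule the wait before
a given share moves stays roughly constant no matter how big the store
gets:

import random

def rounds_until_moved(store_size, trade_count):
    """Count rounds until share 0 is among the trade_count shares
    picked (uniformly, without replacement) to be traded that round."""
    rounds = 0
    while True:
        rounds += 1
        if 0 in random.sample(range(store_size), trade_count):
            return rounds

def avg(store_size, trade_count, trials=100):
    return sum(rounds_until_moved(store_size, trade_count)
               for _ in range(trials)) / trials

for n in (100, 1000, 5000):
    one_share = avg(n, 1)                 # trade one share per round
    one_pct   = avg(n, max(1, n // 100))  # trade 1% of the store per round
    print("store=%5d: one-share rule ~%5.0f rounds, 1%%-of-store rule ~%4.0f"
          % (n, one_share, one_pct))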

We know that "in real life" nodes don't actually pick shares to trade
uniformly at random. Instead, nodes have preferences as to what kinds of
shares they are and are not willing to trade. Have we ever specified it
more fully than just "preferences are based on size and expiry date"? Is
it possible that some preferences are more load-dependent than others?

It seems pretty important to make sure that the trading part of the model
actually corresponds to what we're doing, so I'd like to know what other
people think might make sense.

Anyway, it seems that a load-independent protocol is more desirable, since
we don't know exactly how many people will end up using Free Haven for how
much data. On the other hand, if we could show that the system becomes
more "robust", for some meaning of robustness, as the number of files
increases, that might not be so bad.

Thanks, 
-David