Re: [freehaven-dev] Some possible weaknesses?
On Tue, Feb 01, 2000 at 11:17:46PM -0500, Michael J. Freedman wrote:
> I was out of town this last week of IAP, so missed the Sunday meeting. But
> after reading over Proposal 1.0 and the archived threads, I had a few
> notes, although they might have already been answered:
> Statistical attacks:
> My understanding: the amount a server is able to store in the servnet is
> proportional to the amount of space it provides to the servnet (Section
> 5.2). Is this raw size (i.e., megabytes)? If so, it feels that one may
This is raw size -- but note that with all the overhead we've got (backup
copies, buddy shares, and the whole information dispersal algorithm, which
creates many shares for a single file), you're really only going to be able
to store a file perhaps a tenth the size of the space you provide. But hey,
what's a constant term here or there... :)
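To make that arithmetic concrete, here's a back-of-the-envelope sketch. The
(n, k) dispersal parameters and backup count are made up for illustration,
not actual Free Haven defaults:

```python
def effective_storage(provided_mb, n_shares=20, k_needed=4, backup_copies=2):
    """Rough estimate of how much original file data you can publish,
    given the raw space you provide. An (n, k) information dispersal
    algorithm turns a file of size S into n shares of size S/k (any k
    of which reconstruct it), and each share may exist in backup_copies
    places around the servnet."""
    expansion = (n_shares / k_needed) * backup_copies
    return provided_mb / expansion

# With these illustrative parameters the expansion factor is 10x,
# matching the "a tenth the size" figure above:
print(effective_storage(100))  # -> 10.0
```

Change the parameters and the constant term moves, but the point stands:
the overhead is a constant multiple, not something that grows with the net.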
> perform some statistical analysis to try to determine from where a document
> originates. In a very simple case, given 5 servers: 4 store 1 MB, 1
> stores 50 MB. The very large files floating around the servnet very likely
> originated from the last server.
However, we're hoping that there are a lot of servers out there, so it won't
be simply a matter of "the server with the 50 meg capacity". (And note that
it's still pretty tricky to figure this one out, because they're all behind
mixnets and potentially unknown to the adversary.)
Indeed, if you make a very large file (or very long-lasting, or very extreme
in any way) then you're making a file that will stand out. You are reducing
your security by doing so, and you know it. So you deserve whatever happens.
Realistically, I'm expecting share sizes of perhaps a meg or *maybe* ten megs,
and server capacities in the range of gigs. So I don't think this will be a
problem in practice.
> A keeps a copy of what it shared with B around for a while - I'm assuming
> that A will query B to ensure that B has not lost/corrupted the file. Then,
> after B's trust has increased to some threshold, A will permit its copy to
> be lost. This is the same (yet unanswered?) problem as B maliciously
> gaining trust, and then behaving as an "evil server" only after it passes
> certain trust thresholds.
Well, this is partially answered.
It's answered in part by the fact that the probation period for a new node is
difficult enough that causing your node to 'turn evil' won't generally be
worth it. An evil node will be noticed relatively quickly by the buddy system
or by simple checking after trading ("did you lose my share? what,
already??"), and so it can only do a limited amount of damage before word
gets around that it's acting funny. The shares concept should keep that
limited damage from actually destroying any of the documents in the system.
In addition, the preliminary idea for the trust network is never to trust a
fellow with more data than the amount of work you have already gotten out of
him. Meaning,
if he has successfully stored 50 megs for you for a month, then you should
have at most 50 meg-months of outstanding shares "lent" to him. This means
that servers will provide at least 50% usefulness, even if they try to gain
trust and then 'go evil'.
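Here's a minimal sketch of that accounting rule; the function name and units
are mine, not from the Free Haven design:

```python
def can_lend(proven_mb_months, outstanding_mb_months, new_mb_months):
    """Allow a new trade with a peer only if our total outstanding
    credit to him stays at or below the storage work he has already
    proven. If he then 'goes evil' and drops everything outstanding,
    he has still done at least as much work as he destroyed -- the
    "at least 50% usefulness" bound above."""
    return outstanding_mb_months + new_mb_months <= proven_mb_months

# A peer who has successfully stored 50 megs for a month has proven
# 50 meg-months, so we can have up to 50 meg-months lent to him:
print(can_lend(50, 0, 50))  # -> True
print(can_lend(50, 0, 60))  # -> False
```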
> Buddy System:
> Using two corrupted servers, we attempt to trade around shares such that
> the 2 copies are on the two corrupted servers. As repetition increases,
> even with more corrupted servers (physically harder to attain), the
> difficulty in migrating all copies to corrupted servers should increase.
> More repetition obviously adds overhead. It probably would be useful to
> study how many copies are necessary for "minimal robustness."
This is an interesting idea to consider.
How would you study this?
How would you define 'minimal robustness'?
> Hope these are useful,