Re: [freehaven-dev] retrieval issues
On Sunday 25 March 2001 03:39, you wrote:
> I found a number of p2p systems (perhaps only fledgling systems, it's
> true) at the SF conference that handle meta-data.
I wasn't in SF (too expensive; we are currently at 2.20 DM per US$ :( ), and
the book was released in Germany only yesterday. I haven't had enough time to
read everything yet.
> There's a chapter
> about it in the p2p book, but I haven't read it in detail.
I read it: it is basically a call for standardization of content description,
so that today's central point of information (the WWW) can remain a possible
standard. The article warns about a fallback into the pre-Web era, where you
had one client per service and no glue to bridge the gaps between the resources.
The question I keep thinking about (and this is especially true for Freenet and
for Free Haven) is: how do I find all those dissidents' messages if I don't
know the name?
> Perhaps try
> the bluesky or p2p-hackers list for more info about metadata systems?
I will check these out. Bluesky in particular might be interesting, as Transarc
formerly hosted it and was involved in the development of the distributed
search engine.
> > Another point that made me think is the potential resource issue: If the
> > network needs to be fully connected, every machine needs to know every
> > other.
> The flaw with the current Free Haven design is not that it requires
> a lot of memory on each node (memory is cheap these days, and keeping
> trust data on a given node is quite cheap).
> The flaw is that we need
> to broadcast to every participating node. From a practical standpoint,
> this is suicide -- node bandwidth simply cannot handle this. We *must*
> use a search system which queries at most log n machines (where n is
> the number of participating servers). Otherwise we're sunk.
A client might decide to query only a few hosts each time, ordered by
reputation; the worst case stays #Servers - (n-k) queries (where n and k are
the number of shares and the number required for reconstruction), but the
actual number of queries needed should be much lower.
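To make that concrete, here is a rough Python sketch of what I mean (the
`Server` class and its `query` method are my own invention, not the real
Free Haven protocol): query servers in descending reputation order and stop
as soon as k distinct shares have been collected.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    reputation: float
    held: set = field(default_factory=set)   # share indices this server stores

    def query(self, doc_hash):
        # stand-in for a network query: return one share index, or None
        return next(iter(self.held), None)

def retrieve(servers, doc_hash, k):
    """Collect k distinct shares, querying high-reputation servers first."""
    shares = {}
    queries = 0
    for server in sorted(servers, key=lambda s: s.reputation, reverse=True):
        queries += 1
        share = server.query(doc_hash)
        if share is not None:
            shares[share] = server
        if len(shares) >= k:                 # enough shares to reconstruct
            return shares, queries
    return None, queries                     # fewer than k shares reachable
```

If the reputation ordering is any good, the share-holding servers are hit
first and the query count stays far below #Servers.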
> Speaking of which: "Since the network needs a high number of servers in
> the servnet to fulfill the anonymity claim" is not at all clear to me.
> The mixnet needs to have a lot of participants, yes, but if Free Haven
OK, sorry, I think I mixed that up.
> Let me know if you have any questions or are confused about anything,
Currently I am reading a lot of P2P system descriptions. I haven't studied the
InfraSearch protocol yet, but the mass of publications and protocol designs
seems to be overwhelming; one could say that I dig out more than I can ever
read :)
I am trying to describe the retrieval possibilities in the different P2P
networks, and the possibilities to "show what's in there" in a standardized
manner. I am looking at this from the point of view of a user who wants to
find some resources. If this P2P business goes on like this, we will soon have
a greater mess than the current search engines on the web.
What I am confused about is the fact that Free Haven is very good at hiding
stuff. But how am I supposed to find anything of interest to me inside it if I
don't know the name?
A problem I see is that Free Haven (other networks as well, but this is the
Free Haven list :)) has little ability to reveal its contents in the same
anonymous manner in which it hides them.
One idea that just came to my mind is the trading of some kind of content
messages for a given hash. The message expires at the same time the content
does, and it is rebroadcast regularly. A participating index server could act
as an interface to a search engine, catching the content-message broadcasts
and building the list. No one could grab the whole index at once, but if you
listen long enough, you will know what's available.
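A rough Python sketch of the idea (the message format and the `IndexServer`
interface are purely my own invention, nothing that exists today):

```python
import time

def make_content_message(doc_hash, description, expires_at):
    # hypothetical advertisement for one stored document; it expires at
    # the same time the document itself does
    return {"hash": doc_hash, "desc": description, "expires": expires_at}

class IndexServer:
    """Catches content-message broadcasts and builds the searchable list."""

    def __init__(self):
        self.index = {}

    def on_broadcast(self, msg, now=None):
        now = time.time() if now is None else now
        if msg["expires"] > now:          # ignore already-expired messages
            self.index[msg["hash"]] = msg

    def search(self, term, now=None):
        now = time.time() if now is None else now
        return sorted(m["hash"] for m in self.index.values()
                      if term in m["desc"] and m["expires"] > now)
```

Because expired messages drop out of the index automatically, the index
server never claims to hold more than the network currently does.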
To ensure the content descriptions are authentic, they should be signed by the
document's key.
If something like this is finally implemented to describe the contents of
Freenet, the "filename" H(PKdoc) has to be published as well. Then there is a
clear description of what a certain hash contains. If I assume that there is
only correct metadata, Free Haven loses passive-server document anonymity.
On the other hand, servers could deny content based on the metadata (if the
metadata is correct), which would reduce the social-pressure component.
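The binding I have in mind could look roughly like this toy Python sketch.
To be clear about the assumptions: a real system would use a proper
asymmetric signature scheme; here hashes merely stand in for key derivation
and signing, so only the filename-to-key binding is actually checkable.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def publish(sk: bytes, description: bytes):
    pk = H(b"pub:" + sk)        # stand-in for deriving PKdoc from SKdoc
    filename = H(pk)            # the "filename" H(PKdoc) from the text
    sig = H(sk + description)   # stand-in for a real signature over the metadata
    return filename, {"desc": description, "pk": pk, "sig": sig}

def bound_to(filename: bytes, metadata) -> bool:
    # a reader can at least check that the advertised metadata carries the
    # key the filename commits to; verifying sig itself needs real signatures
    return H(metadata["pk"]) == filename
```

Anyone who knows only H(PKdoc) can reject metadata that carries the wrong
key, even before any signature check.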
Many thanks for your reply,
Thomas Strauß | "He breathed in the chill kelp-and-salt scent of the beach;
LeipzigerStr 61 | the intense familiarity of the scent triggered a million
66113 Saarbrücken| memories at once, and he knew he was home."
+49-681-5892772 | (from "Green Mars", Kim Stanley Robinson)