
[freehaven-dev] [Freenet-chat] Interesting commentary on Freenet from Freehaven



[sorry for the cross-post, however I think that the information here has
relevance to both philosophical discussion and implementation]

I was recently pointed to a paper describing Free Haven, a project with
similar aims to Freenet (although it doesn't seem to pay much regard to
efficiency), which contains an interesting analysis of Freenet from a
security perspective.  The file (PostScript) may be found on Freenet at:

freenet:CHK@Y~4xwjIvU9DYUfNCSG0lfILByo0OAwE,WZb1YLRxrnyTWjI4ZVc44A

Or on the WWW at:

http://www.octayne.com/freehaven.ps

You can find the discussion of Freenet in Section 3.2.6, page 28.

It seems to be a pretty well-informed and, in terms of the facts presented
(if not the tone in which they are presented), quite complimentary
commentary, pitched at a Freenet somewhere between 0.2 and 0.3.

He points out the HTL threat which we are aware of (i.e. the HTL and depth
fields can be used to make a reasonable guess as to whether a node was the
first to request some information), but doesn't mention the trivial way to
address this, which is to introduce randomness into the HTL and depth
decrements/increments (I thought we had addressed this, but looking at the
code it doesn't appear to have been implemented yet).  Of course, 0.3's
inter-node encryption would add further layers of anonymity here not
discussed in this paper.
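
To illustrate what I mean (this is only a sketch, not code from the Freenet
tree; the class name, constants and probabilities are all made up), the
randomised decrements/increments could look something like this:

    import java.util.Random;

    // Sketch only: decrement HTL probabilistically and start depth at a
    // small random value, so that HTL == maximum or depth == 1 no longer
    // proves that the previous node originated the request.
    public class HtlRandomizer {
        private static final int MAX_HTL = 25;            // assumed maximum hops-to-live
        private static final double DECREMENT_PROB = 0.9; // assumed chance of decrementing at max HTL
        private final Random random = new Random();

        // Called when forwarding a request: usually decrement, but
        // occasionally forward at full HTL, so an observer cannot tell
        // whether the sender was the originator or simply didn't decrement.
        public int nextHtl(int incomingHtl) {
            if (incomingHtl >= MAX_HTL) {
                return random.nextDouble() < DECREMENT_PROB ? incomingHtl - 1 : incomingHtl;
            }
            return Math.max(incomingHtl - 1, 0);
        }

        // Start the depth counter at a small random offset, so that a depth
        // of 1 on an incoming request no longer identifies the requester.
        public int initialDepth() {
            return 1 + random.nextInt(3);
        }
    }

The point is simply that a node seeing a request at maximum HTL (or minimum
depth) can no longer conclude that its neighbour was the requester; it can
only guess with some probability.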

I think the crux of the paper is that it talks in terms of "computational
anonymity".  I would characterise Freenet's approach as "pragmatic
anonymity".  We could add a mixmaster layer to Freenet and impose fake
messages and random delays on traffic (to frustrate traffic analysis), but
the performance hit would seriously discourage widespread use.  I would
prefer an anonymous publication system which is anonymous for all practical
purposes.  By this I mean that breaking the anonymity would require so much
effort as to make it impractical for any government agency relative to more
brute-force approaches (remembering that they can always break into your
home and install monitoring hardware in your computer), yet which remains
efficient enough to enable widespread use, rather than a "computationally
anonymous" system so slow that only the truly paranoid would consider using
it (I fear that Free Haven falls into this category).
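
For the sake of argument, a mixmaster-style layer of the sort I'm dismissing
might look roughly like the sketch below (purely hypothetical, nothing we
plan to implement, with the delay bound and scheduling details invented);
it should also make obvious where the latency and bandwidth costs come from:

    import java.util.Random;
    import java.util.concurrent.*;

    // Hypothetical sketch of a "mixmaster layer": every outgoing message is
    // held for a random delay before forwarding, and dummy (cover) messages
    // are injected periodically, at an obvious cost in latency and bandwidth.
    public class MixingForwarder {
        private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        private final Random random = new Random();
        private static final long MAX_DELAY_MS = 30_000; // assumed upper bound on the random delay

        // Forward a real message after a uniformly random delay, so timing
        // correlations between incoming and outgoing traffic are weakened.
        public void forward(Runnable sendMessage) {
            long delay = (long) (random.nextDouble() * MAX_DELAY_MS);
            scheduler.schedule(sendMessage, delay, TimeUnit.MILLISECONDS);
        }

        // Periodically send dummy traffic so an observer cannot easily
        // distinguish real forwarding from cover noise.
        public void startCoverTraffic(Runnable sendDummy, long periodMs) {
            scheduler.scheduleAtFixedRate(sendDummy, periodMs, periodMs,
                                          TimeUnit.MILLISECONDS);
        }
    }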

In terms of the system described in the paper - it seems to be inspired by
Gnutella ("...document requests are performed via broadcast..."), suggesting
that they have not really addressed any of the really difficult problems
that Freenet tackles to achieve a system that people might actually use.

Perhaps there is scope for an ultra-secure (but ultra-slow) version of
Freenet which could bolt on some of the precautions we have discussed (many
of which are outlined in this document).

Just don't expect me to use it!

Ian.
