
Re: Abuse resistant anonymous publishing



Hi George, Ben, et al,

George and I have talked about using some of these ideas for anonymous
communication a handful of times over the last few years.  I think they
have clear potential for communication, and maybe also for the
tradeoffs between abuse resistance and anonymity in publishing, but I'm
a little more hesitant about applying them in the Wikipedia context.

Note also that I consider the things George mentioned an attempt at
the long-term problem, not a relatively quick fix, as some of the other
proposals are intended to be.

The human approach strengthens trust, and I'm guessing it will prove
to be one of the few good solutions to Sybil issues, but it can
increase vulnerability to rubber-hose cryptanalysis depending on how
it's set up. The trust/responsibility trail is also pseudonymous rather
than anonymous, which exacerbates the problem.

If Alice makes a post that someone doesn't like, they can apply
directed pressure along the path to Alice. People are then put in the
position of either revealing the next hop in the chain toward Alice or
falling victim to whatever countermeasures the coercers would apply to
them or their loved ones. This might be ameliorated by making it a
system of threshold entities, but if those entities are still
manageable enough to function, I think they will still be rubber-hose
vulnerable; not to mention that you probably just increase the
intersection information if the system is indeed based on who-you-know
properties.
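To make the threshold idea concrete: one standard way to build such a system is to split the "next hop" link among n trustees with a k-of-n secret-sharing scheme (Shamir's, say), so that no single trustee can be coerced into revealing it alone. The following is only an illustrative sketch of that general technique, not any specific proposal from this thread, and the trustee/link names are hypothetical.

```python
# Hypothetical sketch: split the link to the next hop among 5 trustees,
# any 3 of whom can jointly reconstruct it (Shamir secret sharing over
# a prime field). No coalition smaller than the threshold learns anything.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a short identifier

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

link = 123456789  # stand-in for the identity of the next hop in the chain
shares = split(link, n=5, k=3)
assert reconstruct(shares[:3]) == link   # any 3 trustees suffice
assert reconstruct(shares[2:]) == link
```

Of course, as noted above, this only moves the coercion target from one person to a quorum; if the quorum is small and reachable enough to function, it remains rubber-hose vulnerable.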

For the latest posting on Lie groups or something, you probably don't
need to worry about the above concerns. (Although recall Kaczynski, and
the guy whose name escapes me who took out his wife with a ball-peen
hammer about a decade ago because there was an error in the algebraic
topology book he wrote. Mathematics has got postal beat all to hell ;>)
But many people have been raising examples of dissidents in repressive
societies. I don't think any of the proposals to date will provide such
people good protection. A trust tree that cuts across jurisdictions
would help, but would also be hard to form.

aloha,
Paul

On Thu, Sep 29, 2005 at 02:56:57PM +0100, George Danezis wrote:
> Hi or-talk (and Ben),
> 
> I am sorry to be jumping in the middle of the wikipedia-Tor debate, but Steven 
> Murdoch just made me aware of it. A while back I had a short discussion with 
> Roger about a possible way of mitigating abuse through anonymity systems like 
> Tor on open publishing systems like wikipedia (and with additional precautions 
> Indymedia). I have further discussed this with Ben Laurie at PET 2005.
> 
> The basic idea is quite simple: anonymity allows users to avoid being 
> associated with a persistent identifier that could be used to filter out 
> abuse cheaply. This is in fact a Sybil attack, i.e. one user can pretend to 
> be multiple users. Note that this can also happen if one user controls many 
> nodes (through a botnet, for example). The aim of our protocol is to 
> associate persistent identifiers with posts that are controversial (through 
> a defined process), so that they can be used to filter abuse (note that 
> these do not have to be an identity, only useful for filtering abuse). We 
> should also try to maintain the user's anonymity, and at least plausible 
> deniability.
> 
> My favorite approach in solving these problems is using and assuming the 
> existence of social networks.