
Re: Hello directly from Jimbo at Wikipedia



On Tue, Sep 27, 2005 at 05:48:59PM -0400, Jimmy Wales wrote:
> All I'm saying is that Tor could segregate users easily enough into two
> clouds: "We sorta trust these ones, more or less, a little bit, but no
> guarantees" -- "We don't trust these ones, we don't know them".

This would be very difficult to do with the existing Tor design, as
Tor knows nothing about users or sessions. It lives at the TCP layer,
and all it does is shift traffic from one IP address to another,
giving some privacy to both ends. Adding higher-layer functionality to
Tor increases the chance that it will do neither job well, so here is
a proposal which I think does what you want while avoiding this
problem.

The goal is to increase the cost for a Tor user of committing abuse on
Wikipedia. The scheme doesn't need to be fool-proof, just enough to
make abusers go elsewhere. Wikipedia could require Tor users to log in
before making edits, and ban accounts if they do something bad;
however, the cost of creating new accounts is not very high. The goal
of this proposal is to impose a cost on creating accounts which can be
used through Tor. Non-Tor access works as normal, and the cost can be
small, just enough to reduce the incentive for abuse.

Suppose Wikipedia allowed Tor users only to read articles and create
accounts, but not to change anything. The Tor user then goes to a
different website, call it the "puzzle server". Here the Tor user does
some work, perhaps does a hashcash computation[1] or solves a
CAPTCHA[2], then enters the solution along with their new Wikipedia
username. The puzzle server (which may be run by Wikipedia or Tor
volunteers) records the fact that someone has solved a puzzle, along
with the username entered. The puzzle server doesn't need the
Wikipedia password as there is no reason for someone to do work for
another person's account.
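To make the hashcash option concrete, here is a minimal sketch of what
the puzzle server could check. All names here are illustrative, and the
difficulty value is only an example: the Tor user searches for a nonce
such that the hash of their chosen username plus the nonce starts with
a given number of zero bits, and the puzzle server verifies it with a
single hash.

```python
import hashlib

# Example difficulty; tune this to set the cost of creating a
# Tor-usable account (each extra bit doubles the expected work).
DIFFICULTY_BITS = 12

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        mask = 0x80
        while not (byte & mask):
            bits += 1
            mask >>= 1
        break
    return bits

def check_puzzle(username: str, nonce: str) -> bool:
    """Verification, done by the puzzle server: one hash computation."""
    digest = hashlib.sha1((username + ":" + nonce).encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

def solve_puzzle(username: str) -> str:
    """Brute-force search for a valid nonce, done by the Tor user.
    Expected cost: about 2**DIFFICULTY_BITS hash computations."""
    n = 0
    while True:
        nonce = str(n)
        if check_puzzle(username, nonce):
            return nonce
        n += 1
```

Because the nonce is bound to the username, work done for one account
is useless for any other, which is why the puzzle server never needs
the Wikipedia password.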

Now when that Tor user logs into their Wikipedia account to edit
something, the Wikipedia server asks the puzzle server whether this
account has ever solved a puzzle. If it has, the user can make the
edit; if not, the user is told to go to the puzzle server first.
This check can be very simple - just an HTTP request to the
puzzle server specifying the Wikipedia username, which returns "yes"
vs "no", or "200" vs "403". For performance reasons this can be
cached locally. There is no cryptography here, and I don't think it is
needed, but it can be added without much difficulty.
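A sketch of that check, with the local cache, could look like the
following. The puzzle-server address and its /solved endpoint are
hypothetical; the fetch function stands in for whatever HTTP client
the Wikipedia server already uses.

```python
import urllib.parse

PUZZLE_SERVER = "https://puzzle.example.org"  # hypothetical address
_cache: dict = {}  # username -> bool, so we only ask the server once

def solved_url(username: str) -> str:
    """URL of the (hypothetical) yes/no lookup on the puzzle server."""
    return PUZZLE_SERVER + "/solved?user=" + urllib.parse.quote(username)

def interpret(status: int) -> bool:
    """200 means this account has solved a puzzle; 403 means it hasn't."""
    return status == 200

def may_edit(username: str, fetch) -> bool:
    """fetch(url) -> HTTP status code. The answer is cached locally,
    so repeated edits by the same account don't re-query the server."""
    if username not in _cache:
        _cache[username] = interpret(fetch(solved_url(username)))
    return _cache[username]
```

The cache is safe because the answer only ever changes from "no" to
"yes" (accounts are banned on the Wikipedia side, not un-solved on the
puzzle side), so a stale "no" costs at worst one extra visit to the
puzzle server.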

If the Tor user starts committing abuse, his account is cancelled. The
puzzle server doesn't need to be told about this, as Wikipedia will
not let that user make any edits. The reason this approach avoids the
usual problems with proof-of-work schemes[3] is that good Tor users
only have to solve the puzzle once, just after they create the
account. Bad Tor users will need to solve another puzzle every time
they are caught and have their account cancelled.

So my question to Jimbo is: what type of puzzle do you think would be
enough to reduce abuse through Tor to a manageable level? The
difficulty of the puzzle can be tuned over time but what would be
necessary for Wikipedia to try this out?

Hope this helps,
Steven Murdoch.

[1] http://www.hashcash.org/
[2] http://www.captcha.net/
[3] "Proof-of-Work" Proves Not to Work by Ben Laurie and Richard
     Clayton: http://www.cl.cam.ac.uk/users/rnc1/proofwork.pdf
-- 
w: http://www.cl.cam.ac.uk/users/sjm217/
