
Re: Hello directly from Jimbo at Wikipedia

Nick Mathewson wrote:
> What about logins for new users on Tor only?  That is, suppose you
> allowed non-logged-in posts, and allowed posts with Tor, but not
> non-logged-in posts with Tor.  Would that also be a nonstarter?

This is entirely possible, but of course there are holes in it.

First, having a login id doesn't mean that we trust you, it just means
that you've signed up.  One of the reasons that we don't _require_ login
ids, actually, is that it allows jerks to self-select by being too lazy
to login before they vandalize. :-)

But, we could do something like: allow non-logged-in posts, allow posts
with Tor *for trusted accounts*, but not non-logged-in posts with Tor,
and not Tor posts from logged-in-but-not-yet-trusted accounts.
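That policy is small enough to write down directly. Here is a hypothetical sketch of the decision as a single function (this is not MediaWiki's actual blocking code, which works on IP lists, just the rule above made explicit):

```python
def edit_allowed(via_tor: bool, logged_in: bool, trusted: bool) -> bool:
    """Decide whether an edit attempt is allowed under the proposed policy.

    - Edits not coming through Tor are allowed, logged in or not.
    - Edits through Tor are allowed only for logged-in accounts that
      have already earned trust; anonymous or not-yet-trusted Tor
      edits are refused.
    """
    if not via_tor:
        return True  # ordinary edits, logged in or not
    return logged_in and trusted  # via Tor: trusted accounts only
```

So an anonymous non-Tor edit passes, while a brand-new account editing through Tor does not.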

Still, there's a flaw: this means you have to come around to Wikipedia
in a non-Tor manner long enough for us to trust you, which pretty much
defeats the whole point of the privacy to start with.

> For reference, the proposal is (verbatim):
>     Here is a simple solution to the problem of Tor users being unable to
>     edit Wikipedia
>     trusted user -> tor cloud -> authentication server -> trusted tor
>     cloud -> wikipedia
>     untrusted user -> tor cloud -> authentication server -> untrusted tor
>     cloud -> no wikipedia
>     Simple.
> I'm sure you realize that there's a lot of gray area in this design,
> so let me try to fill some of it in, and I'll comment as I go.
> Clearly, users are authenticating to the authentication service using
> some kind of pseudonymous mechanism, right?  That is, if Alice tells
> the auth server, "I'm Alice, here's a password!", there's no point in
> having a Tor cloud between Alice and the authentication server.  So
> I'm assuming that Alice tells the authserver "I'm user999, here's a
> password!"  But if "user999" isn't linkable to Alice, how do you stop
> an abusive user from creating thousands of accounts and abusing them
> one by one?

You have to establish trust in some fashion.  I think Tor is in a better
position to figure out who to trust among their userbase than we are.
(Since all we get is a bunch of vandalism from a bunch of Tor exit servers.)

> Second, the authentication server needs to be pretty trusted, but it
> also needs to be able to relay all "trusted" users' bandwidth.  That
> introduces a bottleneck into the network where none is needed.
> (There's a similar bottleneck in the "trusted cloud" concept.)

One might choose to put more resources into the trusted cloud than the
nontrusted cloud, so that for trusted users, there's a net performance
gain.

Does it really need to be able to relay all trusted users' bandwidth?  I
don't see why.  The authentication server merely needs to hand out a
'trusted' token.

And remember, perfection is not needed.  A completely non-hackable model
of trust is not needed.  All that is needed is to sufficiently raise the
ratio of "trust" in the trusted cloud so that we can put up with the
remaining abuse.

> Third, how do users become trusted or untrusted?  Who handles abuse
> complaints?

I don't know.  I think this is a great question.

> The weak point here is the transition between step 2 and step 3.
> Unlike your design, this doesn't fit exactly into mediawiki's existing
> "these IPs are good; these IPs are blocked" implementation, so more
> code would be needed.  Other interfaces could be possible, of course.

That seems to me to be no major problem.  A digitally signed token from
Tor which says, in effect, "No guarantees, but this user, Alice, has
been around for a few months and hasn't caused any trouble, so we
figure they are more or less ok" would be fine.  And if that user causes
us grief, then we just say "Sorry, Alice, even though Tor thinks you're
ok, we're blocking you anyway."

Or the signed token could say "Here's a random user, a new account, we
don't know anything about him, his name is Bob" -- and we can choose to
block it or accept it, based on empirical evidence.

> How does this sound to you?

It sounds great.  If you digitally sign the tokens and publicize simple
code for checking the signatures, then lots of services would be able to
take advantage of it.
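To make the token idea concrete, here is a minimal sketch of issuing and checking such tokens. It uses Python's standard-library `hmac` with a shared key purely as a stand-in for a real signature; a deployed scheme would use an asymmetric signature (e.g. Ed25519) so that any service can verify tokens without holding the signing key. All names here are hypothetical, not part of any real Tor or MediaWiki API:

```python
import hashlib
import hmac
import json

# Hypothetical demo key; a real scheme would publish a public key instead.
SIGNING_KEY = b"demo-signing-key"

def issue_token(username: str, attestation: str) -> dict:
    """The authentication service signs a claim such as
    'no guarantees, but active for months with no complaints'
    or 'brand new account, nothing known'."""
    claim = json.dumps({"user": username, "attestation": attestation},
                       sort_keys=True)
    sig = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_token(token: dict) -> bool:
    """Any service (Wikipedia or otherwise) checks the signature first,
    then applies its own policy to the attestation inside."""
    expected = hmac.new(SIGNING_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

tok = issue_token("alice", "no guarantees; active for months, no complaints")
assert verify_token(tok)
```

The point, as above, is that the token only attests to history; the receiving service still decides for itself whether to accept or block the user.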