
Re: Hello directly from Jimbo at Wikipedia



On Tue, Sep 27, 2005 at 01:46:13PM -0400, Jimmy Wales wrote:
 [...]
> 
> WE ARE ON THE SAME SIDE.

Right on.

There are reasonable people working on both of these projects, but as
you know, there will always be unreasonable people on both sides.
Please realize that the contributing Tor developers, at least, don't
share the "Jimbo==Big Brother!!!1!" view: we know that you aren't
anti-privacy, any more than we're pro-abuse.

Hey folks -- the reason that Wikipedia (and other services) use IPs to
block users is not stupidity, laziness, or ignorance.  People use
IP-based blocking because it limits abuse better than no blocking at
all.  Blocking IPs is not saying, "I hate privacy, I think IPs do and
should map 1:1 to human beings, and abuse is an ISP problem; and Tor
doesn't exist."  It's saying, "I can't deal with the abuse I'd see if
I didn't block some IPs, and while IP blocking is imperfect, it's
about as good as any other scheme I've had the time to implement so
far."

People don't block IPs because they think IPs are people, or because
they've never heard of NAT.  They block IPs because IPv4 addresses are
(for most people, at the moment, to a first approximation) a somewhat
costly{1} resource.  When they block Bob's IP, the theory is that they
force him to spend the effort to move to a new IP before he can abuse
their service again.{2}
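(For the curious, here is a rough Python sketch of the kind of gate I
mean.  The class name and the fixed one-day expiry are invented for
illustration; a real wiki's blocking logic is of course more involved.)

    import time

    class IPBlocklist:
        """Block by IP, with an expiry so shared or reassigned IPs recover."""

        def __init__(self, block_seconds=24 * 3600):
            self.block_seconds = block_seconds
            self._blocked_until = {}       # ip -> unix time the block lapses

        def block(self, ip):
            """Called when an edit from `ip` is judged abusive."""
            self._blocked_until[ip] = time.time() + self.block_seconds

        def is_blocked(self, ip):
            """Called on every edit attempt before accepting it."""
            expiry = self._blocked_until.get(ip)
            if expiry is None:
                return False
            if time.time() >= expiry:      # block has aged out
                del self._blocked_until[ip]
                return False
            return True

The expiry is exactly the "temporary blocks" game mentioned in
footnote {2}: the block only has to cost the abuser more than it costs
the innocent users who happen to share his address.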

So this isn't about identity; this is about economics.
Replacing IP addresses with a different kind of identifier won't work
unless that identifier takes enough human effort to renew.  Free
website accounts don't usually have this property.  Valid
credit card numbers (as Geoff notes) do indeed have this property, but
not everybody has one.  Email addresses in some domains (@harvard.edu)
have this property; email addresses in other domains (@hotmail.com)
don't.
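To make the economics concrete, you can imagine attaching a rough
"cost to renew" figure to each class of identifier and only accepting
classes above some threshold.  (The class names and numbers below are
invented for illustration; a real service would tune them from its own
abuse logs.)

    RENEWAL_COST = {              # rough effort/cash to obtain a fresh one
        "free_webmail":   0.0,    # @hotmail.com-style: effectively free
        "ipv4_address":   1.0,    # somewhat costly, to a first approximation
        "institutional":  5.0,    # @harvard.edu-style: hard to re-acquire
        "credit_card":    8.0,    # valid card number, as Geoff notes
    }

    def acceptable(identifier_class, threshold=2.0):
        """Admit an identifier only if burning it costs the abuser something."""
        return RENEWAL_COST.get(identifier_class, 0.0) >= threshold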

These resources don't need to be perfect, unstealable, or uncloneable!
The goal is to reduce abuse; making abuse impossible is not a
requirement.

Similarly, these resources don't need to be privacy-preserving, or
unlinked from individuals.  Using blind-signature-based credential{3}
systems, it's possible to use a sensitive, identifying resource (like
an @harvard.edu address or a credit card or a phone number) to
bootstrap a pseudonymous resource.  Now that the first blind signature
patents have started to expire, this approach is more workable than it
would have been before.  I think that approaches like this are pretty
promising, but need a lot of work to be practical.
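As a toy illustration of the idea, here's a bare-bones Chaum-style RSA
blind signature in Python: the issuer signs a blinded token (after
checking the user's sensitive resource out of band) without ever
seeing the token itself, and the unblinded signature becomes the
pseudonymous credential.  The primes are tiny, the token and names are
made up, and there's no padding, so treat this as a sketch, not a
design.

    import hashlib
    import secrets
    from math import gcd

    # Issuer's RSA key pair.  Toy primes for readability; real keys
    # are 2048+ bits.  (pow(x, -1, m) needs Python 3.8+.)
    p, q = 61, 53
    n = p * q                              # modulus
    e = 17                                 # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

    def h(msg):
        """Hash a message to an integer mod n (toy full-domain hash)."""
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    # User side: blind the token before showing it to the issuer.
    token = b"pseudonym-credential-001"
    m = h(token)
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n       # the issuer sees only this value

    # Issuer side: sign the blinded value (learning nothing about `token`).
    blind_sig = pow(blinded, d, n)

    # User side: unblind; (token, sig) is now a pseudonymous credential.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m             # anyone can verify with (n, e)

The pseudonym can then be blocked or rate-limited on abuse without
revealing, or depending on, the identifying resource that bootstrapped
it.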

Such a scheme wouldn't need to be built into Tor.  It could pretty
easily be a system-independent architecture that could work with
any number of privacy-enhancing layers.

Jimmy and others: do you think I'm on the right track above?  I'm
trying to design a system sorta based on the above principles, but as
you can see, there is a lot of fuzziness.

To everybody in this discussion: here are some things that might make
you feel better in the short run, but which will ultimately not help.
  - Trying to convince Jimbo that privacy is good, or that reducing
    false positives would serve wikipedia's goal of openness.  He
    knows.
  - Trying to convince Tor developers that abuse is bad, or that
    reducing abuse would serve Tor's goal of widespread acceptance.
    We know.
  - Trying to convince Tor developers to subvert services' attempts to
    block Tor exit connections{4} based on IPs.  We won't; it would be
    wrong.
  - Trying to convince Wikipedia operators that privacy is evil _per
    se_, and should be thwarted regardless of potential for abuse in
    particular instances.  I doubt they'd buy it; it doesn't look like
    Jimbo will.
  - Trying to convince the world that some wikipedians have an unnuanced
    view of Tor and anonymity and false-positives.  We know.
  - Trying to convince the world that some Tor operators and users have an
    unnuanced view of IP blocking and abuse.  We know.

Here are some things that would be harder, but which would probably be
useful:
  - Try to develop a better understanding of why and whether abuse prevention
    mechanisms, even the ones you think are crappy, work in practice.
  - If a hypothetical abuse prevention mechanism wouldn't work,
    explain why not.
  - When it looks like somebody is saying something utterly stupid or
    insane, try to figure out why, from their point of view, it might
    seem reasonable to say such a thing.{5}
  - Come up with workable ways to prevent abuse that don't damage
    privacy or preclude anonymizing layers like Tor.
  - Implement those models, and try them out.

There are probably other helpful things, too.

{1} Costly in effort or in cash.

{2} Yes, the theory breaks down.  Some IPs (like those of Tor servers
    and other, less privacy-focused, proxies) aren't costly enough to
    change, so services are tempted to restrict them out of hand, or
    give them some kind of probationary status.  Other IPs (like those
    used by large NATed apartment buildings and ISPs) are too costly to
    change, and are shared by many users, so services that care about
    availability have to play games with temporary blocks and the
    like.

{3} http://en.wikipedia.org/wiki/Blind_signature has a decent
    introduction.

{4} We're okay with subverting entry-blocks.  This isn't hypocrisy;
    this is because entry blocks are fundamentally different.  When
    Alice connects to Tor to connect to Bob, an exit block means that
    Bob doesn't want anonymous connections, whereas an entry block
    means that somebody doesn't want Alice to have privacy.  Entry
    blocking subverts Alice's self-determination, whereas exit
    blocking on Bob's part *is* self-determination, even if we don't
    like it.

{5} http://en.wikipedia.org/wiki/Principle_of_charity

peace,
-- 
Nick Mathewson
