
Re: [tor-talk] registration for youtube, gmail over Tor - fake voicemail / sms anyone?



On Tue, Oct 16, 2012 at 12:55 PM, Andrew Lewman <andrew@xxxxxxxxxxxxx> wrote:
> I guess $20 is more than $1 for 1000 CAPTCHA breaks, but I guess that's
> because the survivor isn't criminal minded enough to steal/clone
> someone's phone for the sms message.

It isn't just the phone: the effort required to perform that set of
activities was a non-trivial cost, but one acceptable to a person
with an earnest need for increased anonymity, and it also created
geographic restrictions which limited the use of cheap labor in
other locations. Not to mention the cost of the knowledge of how to
do whatever workaround you provided, the cost of convincing an
internet privacy expert to help them, etc...

Maybe at some point someone will build an industrial infrastructure
that abuses the ability to buy disposable phones and resell them,
and then Google will have to adapt. But at the moment...

Fundamentally all of these attacks and their defenses are operating
in the space of a constant linear work factor. You must do one unit
of "effort" to send a valid email; the attacker must do N units _or
less_ to send N units of spam/crapflood/etc. No system that puts an
attacker at merely a simple linear disadvantage is going to look
"secure" from a CS-ish/cypherpunk/mathematical standpoint. And in
consideration of the total cost, the attacker often has great
advantages: he has script coders in cheap labor markets while your
honest activist is trying to figure out where the right mouse button
is...
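
To put rough numbers on that linear work factor, here is a toy cost
model in Python (a sketch; every constant is an assumption, loosely
based on the $20 SMS vs. $1-per-1000-CAPTCHAs figures above):

    # Toy model of the linear work factor: defenses in this class only
    # scale the attacker's cost by a constant; the Nth abuse never
    # costs more than the first.
    def attacker_cost(n_messages, cost_per_identity, msgs_per_identity):
        # Each purchased identity (phone number, CAPTCHA solve, ...)
        # is amortized over many messages, so marginal cost is flat.
        identities_needed = -(-n_messages // msgs_per_identity)  # ceil
        return identities_needed * cost_per_identity

    # Assumed numbers: $20/SMS-verified account vs $0.001/CAPTCHA.
    print(attacker_cost(1_000_000, 20.00, 500))   # SMS:     $40,000
    print(attacker_cost(1_000_000, 0.001, 1))     # CAPTCHA:  $1,000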

But the fact of the matter is that the common defense approaches are,
in general, quite effective.   Part of their effectiveness is that
many service providers (including community driven/created ones like
Wikimedia) are broadly insensitive to overblocking. Attackers are far
more salient: they are adaptive and seek out all the avenues and are
often quite obnoxious, and there are plenty of honest fish in the sea
if you happen to wrongfully turn away a few. When considering the
cost-benefit tradeoffs, one attacker may produce harm greater than
the benefit of ten or a hundred honest users, and almost always
appears to be causing significant harm, so it can be rational to
block a hundred honest users for every persistent attacker you
reject (especially if your honest users' 'value' is just a few cents
of ad income). This may be wrong, both in terms of the services'
long-term selfish interests and in terms of social justice, but
that's how it is, and it's something that extends _far_ beyond Tor.
For example, English Wikipedia blocks a rather large portion of the
whole IPv4 internet from editing, often in big multi-provider /16
swaths at a time. Tor is hardly a blip against this background of
overblocking. Educating services whose blocking is over-aggressive
may be helpful, but without effective alternatives it will not go
far beyond fixing obvious mistakes.
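
As back-of-the-envelope arithmetic (assumed values, not data), the
break-even point for that tradeoff might look like:

    # Rough break-even for overblocking, with assumed values: each
    # honest user is worth a few cents of ad income, while one
    # persistent attacker imposes real cleanup/moderation costs.
    honest_user_value = 0.05   # dollars per blocked honest user
    attacker_harm     = 20.00  # dollars of damage per attacker

    # Blocking pays (in this narrow sense) while honest users lost
    # per attacker stopped stays under this ratio:
    print(attacker_harm / honest_user_value)  # 400.0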

And the effectiveness is not limited to the fact that the blocking
rejects many people (good and bad alike): many attacks, like
spamming, are _only_ worthwhile if the cost, considering all factors
like time discounting, knowledge, and geographic restrictions, is
under some threshold... but below that threshold there is basically
an infinite supply of demand. The fact that the level of abuse is
highly non-smooth in the level of defense makes it quite attractive
to impose some blunt restrictions that screw a considerable number
(if only a small percentage) of honest users while killing most of
the abuse.
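
That non-smoothness can be made concrete with a toy profit model
(the revenue figure is an assumption, chosen only to show the step):

    # Why abuse is non-smooth in the level of defense: a spam campaign
    # runs only if it is profitable at all, so pushing the per-message
    # cost past a threshold kills nearly all of it at once.
    def campaign_runs(cost_per_message, revenue_per_message=0.002):
        # profit > 0 means effectively unlimited volume; profit <= 0
        # means none (from that attacker). There is no middle ground.
        return cost_per_message < revenue_per_message

    for cost in (0.0005, 0.001, 0.0019, 0.002, 0.01):
        print(cost, campaign_runs(cost))
    # Volume doesn't taper off gradually as defenses raise the cost;
    # it cliffs.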

On Tue, Oct 16, 2012 at 1:51 PM, k e bera <keb@xxxxxxxxxxxxxx> wrote:
> Why are anonymous signups assumed guilty of abuse before anything happens?  How about limiting usage initially, with progressive raising of limits based on time elapsed with non-abusive behaviour (something like credit card limits)?  People should be able to establish good *online* reputations that are not tied to their physical identity.

I think it is this common but flawed thinking that prevents progress
on this front. You're thinking about this in terms of justice. In a
just world there wouldn't be any abusers... and all the just rules
you can think of to help things won't matter much, because the
abusers won't follow them, and we don't know how to usefully
construct rules for this space that can't be violated. (...and some
alternative ideas like WOTs/reputation systems carry serious risks
of deeper, systematically Kafkaesque injustice...). And of course,
_anonymous_ is at odds with _reputation_ by definition.
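
For concreteness, the kind of scheme proposed above might look like
the sketch below (hypothetical; the thresholds are invented). The
flaw is visible right in the code: it taxes patience, not abuse, so
an abuser can quietly age a thousand throwaway accounts in parallel.

    from datetime import datetime, timedelta

    # Hypothetical progressive limits: new accounts start throttled
    # and earn higher limits as non-abusive time elapses, roughly
    # like a credit limit. The schedule below is an assumption.
    LIMIT_SCHEDULE = [
        (timedelta(days=0),  5),    # brand new: 5 actions/day
        (timedelta(days=7),  50),
        (timedelta(days=30), 500),
    ]

    def daily_limit(created_at, now=None):
        age = (now or datetime.utcnow()) - created_at
        limit = 0
        for min_age, allowance in LIMIT_SCHEDULE:
            if age >= min_age:
                limit = allowance
        return limit

    # The hole: register N accounts, wait 30 days, and you have
    # N * 500 actions/day of abuse capacity.
    print(daily_limit(datetime.utcnow() - timedelta(days=31)))  # 500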

The whole framing of this in terms of justice is like expecting
rabbits and foxes to voluntarily maintain an equilibrium population
so that neither dies out. That just isn't how it works.

Is it possible that all the communication advances we've made will
be wiped out by increasing attacker sophistication, to the point
where Turing-test-passing, near-sentient AIs become best friends
with you just to trick you into falling for a scam, and we all give
up this rapid worldwide communication stuff? Will we confine
ourselves to the extreme group-think of exclusive
friend-of-a-friend networks, with excommunication for friends who
make the mistake of befriending the /wrong/ kind of people (e.g.
Amway dealers)? Well, that is silly and extreme... though after
seeing the latest generation of spambots over the past few years,
which manage to provide _worthwhile_ tech support on forums by
splicing together search-engine-harvested answers from
documentation and other forums, it seems somewhat less farcical to
me than it used to.

If we're to avoid those risks we need to be willing to relax the
stance on justice a bit and accept that distrusting strangers,
disallowing complete anonymity, and making identities costly are
_effective_... We must be sympathetic to the people who are
suffering with the abuse and struggling to get it out of their way,
and think about which acceptably effective alternatives are the
most just, even if they are still a little unjust: e.g. being
required to 'buy' pseudonymous identities by making charitable
contributions or putting up a bond of valuable, limited-supply
tokens, even when we really believe that true anonymity ought to be
allowed, unencumbered by barriers meant to prevent abuse. Until
there are easily deployable alternatives, "block any proxy or
anonymity network that you observe causing a problem" is a simple
and largely effective measure, and we can expect to keep seeing it
used as a first-option defense against abuse.
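
A minimal sketch of that bonded-pseudonym idea (entirely
hypothetical; no deployed system is implied): the service holds a
deposit per identity and confiscates it on abuse, so each burned
identity costs the abuser its bond, while an honest pseudonym
eventually gets it back.

    # Hypothetical bonded-pseudonym scheme: identities stay unlinked
    # to real names, but each one locks up a deposit that abuse
    # forfeits.
    class BondRegistry:
        def __init__(self, bond_amount):
            self.bond_amount = bond_amount  # dollars or scarce tokens
            self.bonds = {}                 # pseudonym -> held deposit

        def register(self, pseudonym, payment):
            if payment < self.bond_amount:
                raise ValueError("insufficient bond")
            self.bonds[pseudonym] = payment  # held in escrow, not spent

        def punish(self, pseudonym):
            # Abuse burns the bond: a real per-identity cost, unlike
            # the weak linear cost of a CAPTCHA or throwaway phone.
            return self.bonds.pop(pseudonym, 0)

        def retire(self, pseudonym):
            # Retiring in good standing refunds the bond, so the
            # scheme taxes abuse rather than honest pseudonymity.
            return self.bonds.pop(pseudonym, 0)

    registry = BondRegistry(bond_amount=10)
    registry.register("nym1", 10)
    print(registry.punish("nym1"))  # 10 -- forfeited on abuse
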
_______________________________________________
tor-talk mailing list
tor-talk@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk