Re: Padding, multiplexing and more
On Tue, 24 Dec 2002, Lucky Green wrote:
> This is what I believe killed ZKS's Freedom. The early adopters knew
> that the system was insufficiently secure against a resourceful
> attacker. ZKS, erroneously, believed that in producing a product that
> defends against some percentage of attacks, say 98%, they could capture
> most of the market. Instead, Freedom captured about the same
> percentage of the market as human blood transfusions guaranteed to be
> 98% free of HIV virus would. Some product groups offering 98% security
> do not just capture a slightly reduced market share, but experience
> difficulty finding any market at all. Anonymizing systems fall into
> this category.
But then, how do you explain that the Anonymizer works? Especially since
the level of anonymity it offers is much lower than what Freedom
provided?
> Given the close to 10 years that I have been seeking such a system, few
> will deplore this fact more than I. To change it, we need both
> qualitative and quantitative analyses of what impact the various
> techniques employed have on security.
>
> Any proposed design needs to be able to answer questions such as the
> following:
>
> Assuming the attackers have access to [just about anything an entity
> with near-global subpoena power plus the ability to compromise upstream
> ISP router can obtain], are using the best known mathematical models to
> correlate users with sites visited [Laplace transforms, Bayesians, NSA
> forests(?), etc.], and furthermore the attackers operate n of the m
> nodes through which the user has chosen to route, including the
> [entrance hop, exit hop, none of the above], then what is the
> probability p for the user to be identified when transferring r MB of
> payload data/being online for q hours, etc.
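To give one toy instance of the kind of answer that last question asks
for (a deliberately naive model, purely for illustration): if routes are
chosen uniformly at random and we assume the attacker wins exactly when
it operates both the entrance and the exit hop, then p = (n/m)^2, no
matter what r and q are. A few lines of Python:

    # Deliberately naive model: nodes picked uniformly and independently,
    # and "identified" taken to mean the attacker runs both the entrance
    # and the exit hop. Real designs and real attackers are messier.
    def p_identified(n, m):
        return (n / m) ** 2

    for n in (10, 100, 300):
        print(n, "of 1000 nodes ->", p_identified(n, 1000))

Everything beyond that - the r MB of payload, the q hours online, the
correlation models - is exactly where we have no numbers.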
The problem with anonymity - especially when talking about low-latency
applications - is that I (and I guess many others) strongly believe
that there is no system that provides strong anonymity against very
powerful attackers while actually being usable and supporting a large
number of users. Just take the global passive attacker: since we do not
control the exit node -- web server link, all user activity is
visible there. So we must protect that same user activity on the
user -- first node link. Anything less than constant traffic on that
link won't work, as the observer will eventually be able to correlate
the activities at the end points. But even constant traffic won't work,
as end systems or applications fail and traffic won't make it through at
times. In addition, the nodes will be quite busy just absorbing the huge
load of dummy traffic from end users, and so on...
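To make the endpoint correlation concrete, here is a small sketch (all
names, rates and delays are made up for illustration): a passive observer
sees per-second packet counts on every user -- first node link and on the
exit node -- web server link of the flow it wants to trace, and a plain
correlation coefficient picks out the right user as soon as the traffic
is bursty rather than constant:

    import random

    random.seed(1)

    BINS = 600   # observation window: 600 one-second bins
    DELAY = 3    # entry-to-exit latency in bins; a real observer would
                 # scan over plausible delays, we fix it for brevity

    def burst_activity(p_burst=0.05, burst_len=10, rate=20):
        """Per-bin packet counts for one user: idle plus rare bursts."""
        counts = [0] * BINS
        t = 0
        while t < BINS:
            if random.random() < p_burst:
                for i in range(t, min(t + burst_len, BINS)):
                    counts[i] = rate + random.randint(-3, 3)
                t += burst_len
            else:
                t += 1
        return counts

    def shift(xs, d):
        """Delay a series by d bins (packets show up later at the exit)."""
        return [0] * d + xs[:-d]

    def corr(xs, ys):
        """Pearson correlation coefficient of two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0

    # The observer watches the user -- first node link of every user ...
    users = {name: burst_activity() for name in ("alice", "bob", "carol")}
    # ... and the exit link of the flow it wants to trace; here that flow
    # really belongs to alice, delayed and slightly perturbed in transit.
    exit_link = [max(0, c + random.randint(-2, 2))
                 for c in shift(users["alice"], DELAY)]

    for name, series in users.items():
        print(name, round(corr(shift(series, DELAY), exit_link), 3))
    # alice comes out near 1.0, the others near 0: exactly the endpoint
    # correlation described above. With constant-rate padding all three
    # series would look alike - which is the point, and which is what the
    # failure and dummy-load problems make so hard to sustain.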
But we have no clue if that global observer is actually possible. In
general we seem to have problems defining realistic adversaries, and
then designing systems that protect against them. Why is this so difficult?
Probably because the adversary has to be distributed. Compare it with
crypto: defining an adversary seems much easier, because the data only
needs to be intercepted at one place and can then be analyzed. But no
crypto that is actually used is proven to be secure, so we 'hope there
is no adversary that can break it'. Nobody knows whether such an adversary exists,
and people happily use crypto to protect all kinds of stuff. They trust
the systems.
So before analyzing anything, we need an adversary model. And I'm speaking
of a realistic one and not the one that 'might be out there, but that's very
very unlikely to exist'. That won't be easy, as breaking an anonymizing system
requires much more than fast computers and math. In the case of the passive
attacker, it is mainly an organisational and political problem to
convince several entities to collect and disclose data.
I'd be very interested to start discussing realistic adversaries.
--Marc