
[freehaven-cvs] First version of random grafs of usability chapter



Update of /home/freehaven/cvsroot/doc/wupss04
In directory moria.mit.edu:/tmp/cvs-serv20335

Added Files:
	usability.tex 
Log Message:
First version of random grafs of usability chapter

--- NEW FILE: usability.tex ---
\documentclass{article}
\usepackage{url}
\pagestyle{empty}

\newenvironment{tightlist}{\begin{list}{$\bullet$}{
  \setlength{\itemsep}{0mm}
    \setlength{\parsep}{0mm}
    %  \setlength{\labelsep}{0mm}
    %  \setlength{\labelwidth}{0mm}
    %  \setlength{\topsep}{0mm}
    }}{\end{list}}

\begin{document}

\title{Anonymity Loves Company -- Usability as a Security Parameter}
\author{Roger Dingledine \\ The Free Haven Project \\ arma@freehaven.net \and
Nick Mathewson \\ The Free Haven Project \\ nickm@freehaven.net}

\maketitle
\thispagestyle{empty}

While security software is the product of developers, the operation of
software is a collaboration between developers and users.  It's not enough
to develop software that can be used securely; software that isn't usable
often suffers in its security as a result.

For example, suppose that there are two popular mail encryption programs:
HeavyCrypto, which is more secure (when used correctly), and LightCrypto,
which is easier to use.  Suppose you can use either one, or both.  Which
should you choose?

You might decide to use HeavyCrypto, since it protects your secrets better.
But if you do this, it's likelier that when your friends send you
confidential email, they'll make a mistake and encrypt it badly or not at
all.  With LightCrypto, you can at least be more certain that all your
friends' correspondence with you will get a minimum of protection.

What if you used {\it both} programs?  If your tech-savvy friends use
HeavyCrypto, and your less sophisticated friends use LightCrypto, then
everybody will be getting as much protection as they can.  But can all your
friends really judge their own abilities accurately?  If not, then by supporting a less
usable option, you've made it likelier that they'll shoot themselves in the
foot.

The key insight here is that, in email encryption, the cooperation of
multiple people is needed to keep you secure, because both the sender and the
receiver of a secret email want to protect its confidentiality.  Thus, in
order to protect your own security, you need to make sure that the system you
use is usable not only by you, but also by the other participants.

This doesn't mean that it's always better to choose usability over security,
of course: if a system doesn't meet your threat model, no amount of usability
can make it secure.  But conversely, if the people who need to use a system
can't or won't use it correctly, its ideal security properties are
irrelevant.

* How bad usability can thwart security

[[Brainstorm up a big list.  Possibilities include:
  - Useless/insecure modes of operation.
  - Confusion about what's really happening.
  - Bad mental models.
  - Too easy to exit system.
  - Too easy to social-engineer users into abandoning.
  - Inconvenient, therefore abandoned. (People write down long passwords.)
  - 
  - ....
]]

* Usability is even more of a security parameter when it comes to privacy

Usability is an important parameter in systems that aim to protect data
confidentiality.  But when the goal is {\it privacy}, it can become even
more so.  A large category of {\it anonymity networks}, such as XXX, XXX,
and XXX, aim to hide not only what is being said, but also who is
communicating with whom, which users are using which websites, and so on.
These systems are used by XXX, XXX, XXX, and XXX.

Anonymity networks work by hiding users among users.  An eavesdropper might
be able to tell that Alice, Bob, and Carol are all using the network, but
should not be able to tell which one of them is talking to Dave.  This
property is summarized in the notion of an {\it anonymity set}---the total
set of people who, so far as the attacker can tell, might be the one engaging
in some activity of interest.  The larger the set, the more anonymous the
participants.\footnote{Assuming that all participants are equally plausible,
of course.  If the attacker suspects Alice, Bob, and Carol equally, Alice is
more anonymous than if the attacker is 98\% suspicious of Alice and 1\%
suspicious of Bob and Carol, even though the anonymity sets are the same
size.  Because of this, recent research is moving beyond simple anonymity
sets to more sophisticated measures based on the attacker's confidence.}
Therefore, when more users join the network, existing users become more
secure, even if the new users never talk to the existing ones!
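
To make the footnote's point concrete, here is a sketch of one such
confidence-based measure (one formulation among several in the literature):
treat the attacker's suspicion as a probability distribution $p_1, \ldots,
p_n$ over the $n$ participants, and use its entropy
\[
H = -\sum_{i=1}^{n} p_i \log_2 p_i
\]
as the effective size, in bits, of the anonymity set.  Three equally suspect
users give $H = \log_2 3 \approx 1.58$ bits, an effective set of $2^H = 3$;
the 98\%/1\%/1\% case gives $H \approx 0.16$ bits, an effective set of about
$1.1$---nominally three users, but almost no anonymity at all.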

There is a catch, however.  For users to keep the same anonymity set, they
need to act like each other.  If Alice's client acts completely unlike Bob's
client, or if Alice's messages leave the system acting completely unlike
Bob's, the attacker can use this information.  In the worst case, Alice's
messages are distinguishable entering and leaving the network, and the
attacker can treat Alice and those like her as if they were on a separate
network of their own.  But even if Alice's messages are only distinguishable
as they leave, an attacker can use this information to sort exiting messages
into ``messages from User1,'' ``messages from User2,'' and so on, and can
then link each group of messages to its sender at once, rather than guessing
at individual messages.  Some of this {\it partitioning} is
inevitable: if Alice speaks Arabic and Bob speaks Bulgarian, we can't force
them both to learn English in order to mask each other.
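
To put rough numbers on the cost of partitioning (the figures here are
purely hypothetical): suppose the network has $n = 1000$ users, but some
observable feature of Alice's client---a version string, a message format, a
pattern of exit behavior---is shared by only 20 of them.  Against any
attacker who can see that feature, Alice's anonymity set is not 1000 but
\[
|S_{\mathrm{Alice}}| = 20,
\]
as though the other 980 users were running a different network entirely.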

What does this imply for usability?  More so than before, users of anonymity
networks may need to choose their systems based on how usable others will
find them, in order to get the protection of a larger anonymity set.

* Case study: Usability means users, users mean security.

We'll consider an example.  Practical anonymity networks fall into two broad
classes. {\it High-latency} networks like Mixminion or XXX can resist very
strong attackers who can watch the whole network and control a large part of
the network infrastructure.  To prevent this ``global attacker'' from linking
senders to recipients by correlating when messages enter and leave the
system, high-latency networks introduce large delays into message delivery
times, and are thus only suitable for applications like email and bulk data
delivery---most users aren't willing to wait half an hour for their web pages
to load.  {\it Low-latency} networks like Tor or XXX, on the other hand, are
fast enough for web browsing, secure shell, and other interactive
applications, but have a weaker threat model: an attacker who watches or
controls both ends of a communication can trivially correlate message timing
and link the communicating parties.
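
To see why watching both ends is fatal for a low-latency design, consider a
minimal sketch of a timing-correlation attack (our illustration, with made-up
parameters; a real attack must also handle latency jitter, clock drift, and
padding): the attacker counts packets in fixed time windows at the entry and
exit points, then pairs each entry stream with whichever exit stream's volume
pattern correlates best.

\begin{verbatim}
# Sketch of end-to-end timing correlation (illustrative only).
from math import sqrt

WINDOW = 1.0  # seconds per counting window -- an assumed parameter

def volume_signature(timestamps, num_windows):
    """Count packets per fixed-size time window."""
    counts = [0] * num_windows
    for t in timestamps:
        i = int(t / WINDOW)
        if i < num_windows:
            counts[i] += 1
    return counts

def correlation(xs, ys):
    """Pearson correlation of two equal-length count vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sqrt(sum((x - mx) ** 2 for x in xs))
    vy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0

# Alice's entry stream vs. two candidate exit streams: the
# attacker links her to exit_a, whose bursts line up with hers.
entry  = volume_signature([0.1, 0.3, 2.2, 2.4, 5.0, 5.1], 6)
exit_a = volume_signature([0.6, 0.8, 2.7, 2.9, 5.5, 5.6], 6)
exit_b = volume_signature([1.5, 3.3, 4.1], 6)
print(correlation(entry, exit_a))  # high: streams match
print(correlation(entry, exit_b))  # low: streams don't match
\end{verbatim}

High-latency networks defeat this attack precisely by destroying the timing
pattern with large, randomized delays; low-latency networks, by definition,
cannot.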

Clearly, users who need to resist strong adversaries need to choose
high-latency networks or nothing at all, and users who need to anonymize
interactive applications need to choose low-latency networks or nothing at
all.  But what should flexible users choose?  Against an unknown threat
model, with a non-interactive application (such as email), is it more secure
to choose security or usability?

Security, we might decide.  If the attacker turns out to be strong, then
we'll prefer the high-latency network, and if the attacker is weak, then the
extra protection doesn't hurt.

But suppose that, because of the inconvenience of the high-latency network,
it gets very few actual users---so few, in fact, that its maximum anonymity
set is too small for our needs.\footnote{This is
  hypothetical, but not wholly unreasonable.  The most popular high-latency
  network, FOO, has approximately BAR users, whereas the most popular
  commercial low-latency anonymity system, BAZ, advertises QUUX users.}
In this case, we need to pick the low-latency system, since the high-latency
system, though it always protects us, never protects us enough; whereas the
low-latency system can give us enough protection against at least {\it some}
adversaries.
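
In symbols, with all figures hypothetical: a high-latency network that never
attracts more than $n_H = 500$ users caps our anonymity set at 500 against
every adversary, strong or weak, while a low-latency network with $n_L =
50{,}000$ users offers a set a hundred times larger against adversaries
within its threat model---and nothing against the rest.  If 500 is below the
protection we need, the ``always'' of the high-latency network is an always
of inadequacy, and the ``sometimes'' of the low-latency network wins.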

* Case study: against options

Too often, designers faced with a security decision bow out, and instead
leave the choice as an option: protocol designers leave implementors to
decide, and implementors leave the choice for their users.  This can be bad
for security systems, and is nearly always bad for privacy systems.

With security:
\begin{tightlist}
\item Extra options often delegate decisions to those least able to
make them.  If the protocol designer can't decide whether to XXX or XXX, how
is the user supposed to choose?
\item More choices mean more code, and more code is harder to audit.
\end{tightlist}
