
[freehaven-cvs] Merge changes from Lorrie, except: citing crowds for...



Update of /home/freehaven/cvsroot/doc/wupss04
In directory moria.mit.edu:/tmp/cvs-serv6958

Modified Files:
	usability.tex 
Log Message:
Merge changes from Lorrie, except: citing crowds for slogan; citing Freedom for who-knows-what-exactly.  Also {s/very/damn/; s/damn//;}

Index: usability.tex
===================================================================
RCS file: /home/freehaven/cvsroot/doc/wupss04/usability.tex,v
retrieving revision 1.13
retrieving revision 1.14
diff -u -d -r1.13 -r1.14
--- usability.tex	1 Nov 2004 23:13:16 -0000	1.13
+++ usability.tex	31 Dec 2004 17:33:40 -0000	1.14
@@ -21,16 +21,17 @@
 
 Other chapters in this book have talked about how usability impacts
 security. One class of security software is anonymizing networks---overlay
-networks on the Internet that let users transact (for
+networks on the Internet that provide privacy by letting users transact (for
 example, fetch a web page or send an email) without revealing their
 communication partners.
 
 In this chapter we're going to focus on the \emph{network effects} of
-usability on security: usability is a factor as before, but the size of the user
+usability on privacy and security: usability is a factor as before, but the
+size of the user
 base also becomes a factor.  Further, in anonymizing systems, even if you
 were smart enough and had enough time to use every system
-perfectly, you would \emph{nevertheless} be right to choose your system
-based in part on its usability for other users.
+perfectly, you would nevertheless be right to choose your system
+based in part on its usability for \emph{other} users.
 
 \section{Usability for others impacts your security}
 
@@ -64,7 +65,8 @@
 use is not only usable by yourself, but by the other participants as well.
 
 This doesn't mean that it's always better to choose usability over security,
-of course: if a system doesn't meet your threat model, no amount of usability
+of course: if a system doesn't address your threat model, no amount of
+usability
 can make it secure.  But conversely, if the people who need to use a system
 can't or won't use it correctly, its ideal security properties are
 irrelevant.
@@ -112,8 +114,9 @@
 said, but also who is
 communicating with whom, which users are using which websites, and so on.
 These systems have a broad range of users, including ordinary citizens
-who want to maintain their civil liberties, corporations who want to
-analyze their competitors, and government intelligence agencies who need
+who want to maintain their civil liberties, corporations who don't want
+to reveal information to
+their competitors, and government intelligence agencies who need
 to do operations on the Internet without being noticed.
 
 Anonymity networks work by hiding users among users.  An eavesdropper might
@@ -180,7 +183,7 @@
 extra protection doesn't hurt.
 
 But since many users might find the high-latency network inconvenient,
-suppose that it gets very few actual users---so few, in fact, that its
+suppose that it gets few actual users---so few, in fact, that its
 maximum anonymity set is too small for our needs.
 %  \footnote{This is
 %  hypothetical, but not wholly unreasonable.  The most popular high-latency
@@ -214,14 +217,14 @@
 \begin{tightlist}
 \item Extra options often delegate security decisions to those least
   able to understand what they imply. If the protocol designer can't
-  decide whether AES is better than
-  Twofish, how is the end user supposed to pick?
+  decide whether the AES encryption algorithm is better than
+  the Twofish encryption algorithm, how is the end user supposed to pick?
 \item Options make code harder to audit by increasing the volume of code, by
   increasing the number of possible configurations {\it exponentially}, and
-  by guaranteeing that non-default configurations will receive very little
+  by guaranteeing that non-default configurations will receive little
   testing in the field. If AES is always the default, even with several
   independent implementations of your protocol, how long will it take
-  to notice that the Twofish implementation is wrong?
+  to notice if the Twofish implementation is wrong?
 \end{tightlist}
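
As a quick sanity check on the second point in the list above (a worked
example, not from the paper itself): a protocol with $k$ independent
options of $m$ settings each admits $m^k$ distinct configurations, so
even ten binary options already yield

    \[ 2^{10} = 1024 \]

configurations, only one of which is the default that gets real testing
in the field.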
 
 Most users stay with default configurations as long as they work,
@@ -253,11 +256,14 @@
 are many different possible configurations, eavesdroppers and insiders
 can often tell users apart by
 which settings they choose.  For example, the Type I or ``Cypherpunk''
-anonymizing network uses the OpenPGP message format, which supports many
-symmetric and asymmetric ciphers.  Because different users may prefer
-different ciphers, and because different versions of the PGP and GnuPG
-implementations of OpenPGP use different ciphersuites, users with uncommon
-preferences and versions stand out from the rest, and get very little privacy
+anonymizing network uses the OpenPGP encrypted message format, which supports
+many
+symmetric and asymmetric ciphers.  Because different users prefer
+different ciphers, and because different versions of encryption programs
+implementing
+OpenPGP (such as PGP and GnuPG)
+use different ciphersuites, users with uncommon
+preferences and versions stand out from the rest, and get little privacy
 at all.  Similarly, Type I allows users to pad their messages to a fixed size
 so that an eavesdropper can't correlate the sizes of messages passing through
 the network---but it forces the user to decide what size of padding to use!
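
To make the padding idea concrete, here is a minimal sketch in Python
(illustrative only, not the actual Type I remailer code; the 32 KiB
PAD_SIZE is a hypothetical value): every message is framed with a length
header and filled out with random bytes, so all messages on the wire
have identical length.

    import os

    PAD_SIZE = 32 * 1024  # hypothetical fixed wire size: 32 KiB

    def pad_message(payload: bytes) -> bytes:
        # Frame the payload with a 4-byte length header, then append
        # random bytes so every message is exactly PAD_SIZE bytes long.
        header = len(payload).to_bytes(4, "big")
        if len(header) + len(payload) > PAD_SIZE:
            raise ValueError("payload too large for the fixed message size")
        padding = os.urandom(PAD_SIZE - len(header) - len(payload))
        return header + payload + padding

    def unpad_message(message: bytes) -> bytes:
        # Read the length header back and discard the random padding.
        length = int.from_bytes(message[:4], "big")
        return message[4:4 + length]

The sketch hard-codes one size for everyone; Type I instead leaves the
size to the user, which is exactly the kind of setting that makes
uncommon choices stand out.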
@@ -269,8 +275,8 @@
 casual users, and therefore needs to prevail for security-conscious users
 {\it even when it would not otherwise be their best choice.}  For example,
 when an anonymizing network allows user-selected message latency (like
-Type I does), most users tend to use whichever setting is the default,
-so long as
+the Type I network does), most users tend to use whichever setting is the
+default, so long as
 it works.  Of the fraction of users who change the default at all, most will
 not, in fact, understand the security implications; and those few who do will
 need to decide whether the increased traffic-analysis resistance that comes
@@ -442,7 +448,7 @@
 
 Another area where human factors are critical in privacy is in bootstrapping
 new systems.  Since new systems start out with few users, they initially
-provide only very small anonymity sets.  This creates a dilemma: a new system
+provide only small anonymity sets.  This creates a dilemma: a new system
 with improved privacy properties will only attract users once they believe it
 is popular and therefore has high anonymity sets; but a system cannot be
 popular without attracting users.  New systems need users for privacy, but
@@ -509,10 +515,11 @@
 user transaction \cite{sybil}, but it might also trick
 users into thinking a given network is safer than it actually is.
 
-And finally, as we saw in the above discussion about JAP, it's hard to
-be able to guess how much a given other user is contributing to your
+And finally, as we saw when discussing JAP above, the feasibility of
+end-to-end attacks makes it hard to
+guess how much a given other user is contributing to your
 anonymity. Even if he's not actively trying to trick you, he can still
-fail to mix well with you, either because his behavior is sufficiently
+fail to provide cover for you, either because his behavior is sufficiently
 different from yours (he's active during the day, and you're active at
 night), because his transactions are different (he talks about physics,
 you talk about AIDS), or because network design parameters (such as
@@ -524,15 +531,15 @@
 
 \section{Bringing it all together}
 
-Users' safety relies on them behaving like other users. How do they
-predict the behavior of other users? If they need to behave in a way
+Users' safety relies on them behaving like other users.  But how can they
+predict other users' behavior? If they need to behave in a way
 that's different from the rest of the users, how do they compute the
 tradeoffs and risks?
 
 There are several lessons we might take away from researching anonymity
 and usability. On the one hand, we might remark that anonymity is already
 tricky from a technical standpoint, and if we're required to get usability
-right as well before anybody can be safe, it will be very hard indeed
+right as well before anybody can be safe, it will be hard indeed
 to come up with a good design. That is, if lack of anonymity means lack
 of users, then we're stuck in a depressing loop. On the other hand, the
 loop has an optimistic side too. Good anonymity can mean more users: if we
