
[freehaven-cvs] cleanups in section 4



Update of /home/freehaven/cvsroot/doc/fc03
In directory moria.seul.org:/home/arma/work/freehaven/doc/fc03

Modified Files:
	econymics.pdf econymics.ps econymics.tex 
Log Message:
cleanups in section 4


Index: econymics.pdf
===================================================================
RCS file: /home/freehaven/cvsroot/doc/fc03/econymics.pdf,v
retrieving revision 1.2
retrieving revision 1.3
diff -u -d -r1.2 -r1.3
Binary files /tmp/cvsjyxckk and /tmp/cvs83nV84 differ

Index: econymics.ps
===================================================================
RCS file: /home/freehaven/cvsroot/doc/fc03/econymics.ps,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -d -r1.1 -r1.2
--- econymics.ps	16 Sep 2002 08:29:53 -0000	1.1
+++ econymics.ps	16 Sep 2002 23:07:59 -0000	1.2
@@ -2,14 +2,14 @@
 %%Creator: dvipsk 5.86 p1.5d Copyright 1996-2001 ASCII Corp.(www-ptex@ascii.co.jp)
 %%based on dvipsk 5.86 Copyright 1999 Radical Eye Software (www.radicaleye.com)
 %%Title: econymics.dvi
-%%Pages: 18
+%%Pages: 21
 %%PageOrder: Ascend
 %%BoundingBox: 0 0 612 792
 %%EndComments
 %DVIPSWebPage: (www.radicaleye.com)
 %DVIPSCommandLine: dvips -o econymics.ps econymics.dvi
 %DVIPSParameters: dpi=600, compressed
[...3987 lines suppressed; the regenerated PostScript ends with pages 20--21,
whose recoverable text is bibliography entries 22 (Serjantov, Dingledine, and
Syverson, ``From a trickle to a flood: Active attacks on several mix types,''
Information Hiding 2002) and 23 (Wilcox-O'Hearn, ``Experiences Deploying a
Large-Scale Emergent Network,'' IPTPS 2002, March 2002)...]
 %%Trailer
 end
 userdict /end-hook known{end-hook}if

Index: econymics.tex
===================================================================
RCS file: /home/freehaven/cvsroot/doc/fc03/econymics.tex,v
retrieving revision 1.27
retrieving revision 1.28
diff -u -d -r1.27 -r1.28
--- econymics.tex	16 Sep 2002 21:23:35 -0000	1.27
+++ econymics.tex	16 Sep 2002 23:07:59 -0000	1.28
@@ -149,9 +149,10 @@
 such commercial proxies are forced to trust them to protect traffic
 information.  Many users, particularly large organizations, are rightly
 hesitant to use an anonymity infrastructure they do not control.  However,
-running one's own system won't work: a system that carries traffic for
-only one organization provides little protection --- it must carry traffic
-from others to provide cover. %Yet those others don't want to trust their
+running one's own system won't work: a system that carries traffic
+for only one organization cannot protect that organization from being
+identified. Nodes must carry traffic from others to provide cover.
+%Yet those others don't want to trust their
 %traffic to a single entity either.
 The only viable solution is to distribute trust. Each party runs a node
 in a shared \emph{strong anonymity} infrastructure, if its incentives
@@ -207,7 +208,7 @@
 In this section and those that follow, we formalize the economic analysis
 of why people might choose to send messages through mix-nets. Here we
 discuss the incentives for the agents to participate either as senders
-or also as nodes, and we start proposing a general framework for the
+or also as nodes, and we propose a general framework for their
 analysis. In the next section we consider various applications of our
 framework.
 
@@ -274,6 +275,7 @@
 assumption that the honest node is interested in its own anonymity) is
 strongly positively correlated to preserving the anonymity of one's
 information. For example, suppose agents send messages at regular intervals
+%FIXME kill this example. it's bad news.
 (no more than one message per agent is sent to any incoming node at a time),
 that the probability of any node being compromised is $0.1$, and that
 messages pass through three nodes before exiting the network. Assume that
@@ -294,14 +296,13 @@
 of sending messages and adversary passivity. Nonetheless, it should be clear
 that there is a large potential gain from running one's own node.}
 
-\item  The relation between the number of (other) nodes and the probability
+\item  The relation between the number of nodes and the probability
 of remaining anonymous might not be monotonic. At parity of traffic,
 sensitive agents might want fewer nodes in order to maintain high anonymity
-sets. In particular, if no dishonest nodes exist, everybody should prefer a
-small number of nodes. But if some nodes are dishonest, users may prefer
+sets. But if some nodes are dishonest, users may prefer
 more honest nodes (to increase the chance that messages go through honest
-nodes). Agents that act as nodes may have less desire for more nodes,
-because they want to maintain high anonymity sets at their particular node.
+nodes). Agents that act as nodes may prefer fewer nodes,
+to maintain high anonymity sets at their particular node.
 Hence the probability of remaining anonymous is inversely related to the
 number of nodes but positively related to the ratio of honest/dishonest
 nodes.
@@ -311,15 +312,15 @@
 the level of reliability in the system is then an inverse function of the
 share of dishonest nodes in the system, $n_{d}/n_{h}$.
 
-\item  Benefits of acting as a node (nodes might be retributed for
+\item  Benefits of acting as a node (nodes might be rewarded for
 forwarding traffic or for creating dummy traffic), $b_{h}$.
 
-\item  Benefits of acting as a dishonest node (dishonest nodes might benefit
-from disrupting service or might make use of the information that passes
+\item  Benefits of acting as a dishonest node (dishonest nodes might
+benefit from disrupting service or from using the information that passes
 through them), $b_{d}$.
 \end{enumerate}
 
-The possible costs can be enumerated as follows:
+The possible costs include:
 
 \begin{enumerate}
 \item  Costs of using the system by:
@@ -342,43 +343,38 @@
 
 \item  or through a conventional non-anonymous system, $c_{n}$.
 
-Perception of the delay caused by using the mix-net system can be reflected
-in the difference of $c_{s}$ and $c_{n}$.
+The difference between $c_{s}$ and $c_{n}$ reflects the delay caused by using
+the mix-net system.
 \end{itemize}
 
 \item  receiving dummy traffic, $c_{r}$.
 \end{itemize}
 
 \item  Costs of acting as an honest node, $c_{h}$, by receiving and
-forwarding traffic, creating dummy traffic, and being an exit node (which
-involves potential exposure to liabilities or abuses). There are both fixed
-and variable costs of being a node. The fixed costs are related to the
-investments necessary to setup the software. The variable costs are
-dominated by the costs of traffic passing through the node.
+forwarding traffic, creating dummy traffic, or being an exit node (which
+involves potential exposure to liability from abuses). There are both
+fixed and variable costs of being a node. The fixed costs are related
+to the investments necessary to set up the software. The variable costs
+are dominated by the costs of traffic passing through the node.
 
-\item  Costs of acting as dishonest node, $c_{d}$ (e.g., being exposed as a
-dishonest node carries a monetary penalty).
+\item  Costs of acting as a dishonest node, $c_{d}$ (being exposed as a
+dishonest node may carry a monetary penalty).
 \end{enumerate}
 
 In addition to the above costs and benefits, there might also be \emph{%
-reputation} costs and benefits from using the system to send messages (e.g.,
-there can be a reputation cost of being exposed as a sender of anonymous
-messages even though the messages themselves do remain anonymous), acting as
-a perceivably honest node (e.g., there can be a reputation benefit by acting
-as a reliable node), or acting as a perceivably dishonest node (e.g., there
-can be a reputation cost by being exposed as a dishonest node; the costs
-here will also be a function of the probability of being exposed as a bad
-node).
+reputation} costs and benefits from: being observed to send or receive
+anonymous messages, being perceived to act as a reliable node, and being
+thought to act as a dishonest node.
 
 Some of these reputation costs and benefits can be modeled endogenously (for
 example, being perceived as a honest node brings that node more traffic, and
 therefore more possibilities to hide that node's messages; similarly, being
-perceived as a dishonest node might bring traffic away from that node). This
-way they would not enter directly the utility functions of the agents, but
-rather enter indirectly through the changes they provoke in the behavior of
-the agents. In other cases, reputation costs and benefits might be valued
-per se. While we do not consider this option in the simplified model below,
-we later comment on the impact that reputation effects can have on the model.
+perceived as a dishonest node might drive traffic away from that node).
+They would enter the utility functions only indirectly through the
+changes they provoke in the behavior of the agents. In other cases,
+reputation costs and benefits might be valued per se. While we do not
+consider this option in the simplified model below, we later comment on
+the impact that reputation effects can have on the model.
 
 We assume that agents want to maximize their expected utility, which is a
 function of expected benefits minus expected costs. We represent the payoff
@@ -397,7 +393,7 @@
 \right)
 \end{equation*}
 
-where $u, \theta, \gamma$, and $\partial$ are unspecified functional forms.
+\noindent where $u, \theta, \gamma$, and $\partial$ are unspecified functional forms.
 The payoff function $u$ includes the costs and benefits for all the possible
 actions of the agents, including \textit{not} using the mix-net and instead
 sending the messages through a non-anonymous channel. We can represent these
@@ -407,18 +403,18 @@
 a^{s,d,r,h}$ will be zero too, and the only cost in the function will be $%
 c_{n}$.} Note that $\gamma $ and $\partial$ describe the probability of a
 message being delivered and a message remaining anonymous, respectively.
-These probabilities are weighted with the values $v_{r,a}$ because different
+We weight these probabilities with the values $v_{r,a}$ because different
 agents might value anonymity and reliability differently, and because in
 different scenarios anonymity and reliability for the same agent might have
 different impacts on her payoff.
 
-While messages might be sent anonymously to avoid costs or to gain profits,
-the costs and benefits from sending the message might be distinct from the
+Note also that the
+costs and benefits from sending the message might be distinct from the
 costs and benefits from keeping the \emph{information} anonymous. For
-example, when Alice anonymously contacts a merchant to purchase a book, she
-will gain a profit equal to the difference between her valuation of the book
+example, when Alice anonymously purchases a book, she
+gains a profit equal to the difference between her valuation of the book
 and its price. But if her anonymity is compromised during the process, she
-might incur losses completely independent from the price of the book or her
+incurs losses completely independent of the price of the book or her
 valuation of it. The payoff function $u_{i}$ above allows us to represent
 the duality implicit in all privacy issues, as well as the distinction
 between the value of sending a message and the value of keeping it anonymous:
@@ -452,8 +448,8 @@
 \end{tabular}
 \end{equation*}
 
-In what follows we always assume that the agent has an incentive to send a
-message (in order to gain profits or avoid losses) as well as to keep it
+Henceforth, we always assume that the agent has an incentive to send a
+message as well as to keep it
 anonymous. We also always consider the direct benefits or losses rather than
 their dual opportunity costs or avoided costs. Nevertheless, the above
 representation allows us to formalize the various possible combinations.
@@ -462,18 +458,16 @@
 anonymity must be protected in order to avoid losses, then $v_{r}$ will be
 positive while $v_{a}$ will be negative and $p_{a}$ will enter the payoff
 function as $\left( 1-p_{a}\right) $.\footnote{%
-In such scenario, being certain of staying anonymous would therefore
+In such a scenario, being certain of staying anonymous would therefore
 eliminate the risk of $v_{a}$, while being certain of losing anonymity would
 impose on the agent the full cost $v_{a}$.} On the other side, if the agent
 must send a certain message to avoid some losses but anonymity ensures her
 some benefits, then $v_{r}$ will be negative and $p_{r}$ will enter the
 payoff function as $\left( 1-p_{r}\right) $, while $v_{a}$ will be positive.%
-\footnote{%
-Similarly, guaranteed delivery will eliminate the risk of losing $v_{r}$,
-while certainty of delivery failure would impose on the agent the full cost $%
-v_{r}$.}
+\footnote{Similarly, guaranteed delivery will eliminate the risk of
+losing $v_{r}$, while delivery failure will impose the full cost $v_{r}$.}
 
-With this framework we are able to compare, for example, the losses due to
+With this framework we can compare, for example, the losses due to
 compromised anonymity to the costs of protecting it. An agent will decide to
 protect herself by spending a certain amount if the amount spent in defense
 plus the expected losses for losing anonymity after the investment are less
@@ -486,7 +480,7 @@
 In this section we apply the above framework to simple scenarios. We make a
 number of assumptions to let us model the behavior of the participants as
 players in a repeated-game, simultaneous-move game theoretical framework.
-Thus we are able to analyze the economic justifications for the various
+Thus we can analyze the economic justifications for the various
 choices of the participants, and compare design approaches to mix-net
 systems.
 
@@ -496,13 +490,13 @@
 messages. Each user has three options: only send her own messages through
 the mix-net; send her messages but also act as a node forwarding messages
 from other users; or don't use the system at all (by sending a message
-without using the mix-net, or by not sending the message at all). Thus
+without anonymity, or by not sending the message at all). Thus
 initially we do not consider the strategy of choosing to be a bad node, or
 additional honest strategies like creating and receiving dummy traffic. We
-represent the game as a simultaneous-move, repeated-game because of the
-large number of participants, plus the fact that earlier actions indicate
-only a weak commitment to future actions. With a large group size there
-might be no discernable nor agreeable order for the actions of all
+represent the game as a simultaneous-move, repeated game because of the
+large number of participants, and because earlier actions indicate
+only a weak commitment to future actions. A large group
+will have no discernible or agreed-upon order for the actions of all
 participants, so actions can be considered simultaneous. The limited
 commitment produced by earlier actions allow us to consider a repeated-game
 scenario.\footnote{%
@@ -511,20 +505,15 @@
 %Roger, is this the case or not? ie are traffic related costs the highest ones? 
 These two considerations suggest against using a sequential approach of the
 Stackelberg type.\cite[Ch. 3]{fudenberg-tirole-91} For similar reasons we
-also avoid a ``war of attrition/bargaining model'' framework.\footnote{%
-Wars of attrition and bargaining games (see for example \cite{rubinstein-82}%
-) are timing games where the relative impatience of players plays an
-important role. We have seen in the previous Section and we will confirm
-again below that agents with high sensitivity to anonymity actually have an
-interest in being among the (first and few) nodes in the system. 
-%Hence a timing game
-%approach does not seem appropriate in our scenario.
-}
+also avoid a ``war of attrition/bargaining model'' framework: these are
+timing games (see for example \cite{rubinstein-82}) in which the relative
+impatience of players plays an important role.
 
 \subsection{Adversary}
 
-Strategic agents cannot choose to be bad nodes in this simplified scenario.
-But we do assume there is a percentage of bad nodes and that agents respond
+Although strategic agents cannot choose to be bad nodes in this simplified
+scenario, we still assume there is a percentage of bad nodes and that
+agents respond
 to this possibility. Specifically we assume a global passive adversary (GPA)
 that can observe all traffic on all links (between users and nodes, between
 nodes, and between nodes or users and recipients). Additionally, we also
@@ -538,51 +527,53 @@
 and links never selectively trickle or flood messages \cite{trickle02}.
 Nonetheless, a \emph{global} passive adversary is still quite strong, and
 thus a typical starting point of anonymity analyses.
+% FIXME say more
 
 \subsection{Honest agents}
 
-If a user only sends her messages, the cost of using the anonymous service
+If a user only sends messages, the cost of using the anonymous service
 is $c_{s}$. This cost might be higher than using the non anonymous channel, $%
 c_{n}$, because of usage fees, usage hassles, or delays. To keep things
-simple, we assume that all messages pass through the mix-net system in fixed
-length free routes, so that we can write $c_{s}$ as a fixed value, the same
+simple, we assume that all messages pass through the mix-net in fixed-length
+free routes, so that we can write $c_{s}$ as a fixed value, the same
 for all agents. Users send messages at the same time, and only one message
-at a time. We also assume that routes are chosen randomly by users, so that
+at a time. We also assume that routes are chosen randomly by users, so that
 traffic is uniformly distributed among the nodes.\footnote{%
 Reputation considerations might alter this point. We comment on this in
-Section \ref{sec:alternate-incentives}.} If a user decides to be a node,
-costs increase with the traffic; we focus here on the traffic-based variable
-costs. We also assume that all agents know the number of agents using the
+Section \ref{sec:alternate-incentives}.}
+
+If a user decides to be a node, her costs increase with the volume of
+traffic (we focus here on the traffic-based variable costs). We also
+assume that all agents know the number of agents using the
 system and the number of them acting as nodes, and that each specific
-agent's actions are observable. We also assume that all agents perceive the
-level of anonymity in the system (based on traffic and number of nodes) the
-same way. Further, we imagine that agents use the system because they want
+agent's actions are observable. We also assume that all agents perceive the same
+level of anonymity in the system based on traffic and number of nodes.
+Finally, we imagine that agents use the system because they want
 to avoid potential losses from not being anonymous. This sensitivity to
-anonymity can be represented with continuous variable $v_{i}=\left[ \text{\b{%
+anonymity can be represented with the continuous variable $v_{i}=\left[ \text{\b{%
 v}},\bar{v}\right] $. In other words, we initially focus on the goal of
-remaning anonymous given an adversary that can control other nodes or snif
-all communications. We later comment on the addition reliability issues. 
+remaining anonymous given an adversary that can control some nodes and
+observe all communications. We later comment on additional reliability
+issues.
 
-These assumptions let us reformulate the framework above in a simpler way.
-The utility function can be re-written as:
+These assumptions let us reduce the utility function to:
 
 \begin{equation*}
 u_{i}=-v_{i}\left( 1-p_{a}\left( n_{s},n_{h},n_{d},a_{i}^{h}\right) \right)
 -c_{s}a_{i}^{s}-c_{h}\left( n_{s},n_{h},n_{d}\right) a_{i}^{h}-c_{n}
 \end{equation*}
 
-For Thus each agent $i$ tries to \textit{minimize} the costs of sending
-messages and the risk of being tracked. $1-p_{a}\left(
-n_{s},n_{h},n_{d},a_{i}^{h}\right) $ is the probability that anonymity will
-be lost given the number of agents sending messages, the number of them
-acting as honest and dishonest nodes, and the action $a$ of agent $i$
-itself. $v_{i}$ is the disutility an agent derives from its message being
-exposed. $c_{s},c_{h}\left( n_{s},n_{h},n_{d}\right) ,$ and $c_{n}$ are the
-costs of sending a message through the mix-net system, acting as a node when
-there are $n_{s}$ agents sending messages over $n_{h}$ and $n_{d}$ nodes,
-and sending messages through a non-anonymous system, respectively. Each
-period, the rational agent can compare the disutility coming from each of
-these three one-period strategies.
+Thus each agent $i$ tries to \textit{minimize} the costs of sending
+messages and the risk of being tracked. The first component is the
+probability that anonymity will be lost given the number of agents sending
+messages, the number of them acting as honest and dishonest nodes, and
+the action $a$ of agent $i$ itself. This chance is weighted by $v_{i}$,
+the disutility an agent derives from its message being exposed. We also
+include $c_{s}$, $c_{h}\left( n_{s},n_{h},n_{d}\right)$, and $c_{n}$: the
+costs of sending a message through the mix-net system, acting as a node
+when there are $n_{s}$ agents sending messages over $n_{h}$ and $n_{d}$
+nodes, and sending messages through a non-anonymous system, respectively.
+Each period, a rational agent can compare the utility coming from each of
+these three one-period strategies.
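+% Illustrative sketch with hypothetical numbers (not from the paper), and
+% treating $c_{n}$ as paid only when the non-anonymous channel is used:
+% suppose $v_{i}=10$, $c_{n}=1$, $c_{s}=2$, $c_{h}=3$, and suppose the
+% probability of remaining anonymous is $p_{a}=0$ for the non-anonymous
+% channel, $0.8$ when only sending through the mix-net, and $0.95$ when
+% also running a node. The three one-period payoffs are then $-10-1=-11$,
+% $-10(0.2)-2=-4$, and $-10(0.05)-2-3=-5.5$, so this agent sends through
+% the mix-net but does not run a node. With $v_{i}=40$ the same comparison
+% gives $-41$, $-10$, and $-7$: the more anonymity-sensitive agent prefers
+% to run a node, in line with the discussion of myopic agents below.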
 
 \begin{equation*}
 \begin{tabular}{cc}
@@ -597,10 +588,11 @@
 
 We do not explicitly allow the agent to choose \textit{not} to send a
 message at all, which would of course minimize the risk of anonymity
-compromise. %Rather, she can only choose amongst the three given actions. 
+compromise.
 Also, we do not explicitly report the value of sending a successful message.
-Both are simplifications that do not alter the rest of the analysis. We
-could in fact have inserted an action $a^{0}$ with a certain disutility from
+Both are simplifications that do not alter the rest of the analysis.
+%FIXME following sentence is huge
+We could in fact have inserted an action $a^{0}$ with a certain disutility from
 not sending any message, and solve the problem of minimizing the expected
 losses; or, we could have inserted in the payoff function for actions $%
 a^{s,h,n}$ also the utility of sending a successful message compared to not
@@ -610,26 +602,19 @@
 non-anonymously, or not sending it at all, depending on which option
 maximizes the expected benefits or minimizes the expected losses.
 Thereafter, we can simply compare the two other actions (being a user, or
-being also a node) to the locally optimal exit strategy. %\footnote{%
-%For example, sending an anonymous message might be so expensive, and sending
-%it through a non anonymous channel so potentially costly, that the user
-%might prefer not to send a message at all. We discuss again some more
-%general issues related to this point in one of the later sections [[add
-%reference here]].} 
-%[[Go back to this in later sections, discuss the ``why bother having anonymity'' question.]]
+being also a node) to the locally optimal exit strategy.
 
 While this model is simple, it allows us to highlight some of the dynamics
 that might take place in the decision process of agents willing to use a
-mix-net. We now consider various versions of this\ model.
+mix-net. We now consider various versions of this model.
 
 \subsubsection{Myopic Agents}
 
-Myopic agents do not take into consideration the strategic consequences of
-their actions. They simply consider the status of the network and, depending
-on the payoffs of the one-period game, adopt a certain strategy. Imagine a
-new agent with a privacy sensitivity $v_{i}$ is considering using a mix-net
-where currently $n_{s}=\bar{n}_{s}$ and $n_{h}=\bar{n}_{h}$, that is, there
-are already $\bar{n}_{s} $ users and $\bar{n}_{h}$ nodes.
+Myopic agents do not consider the long-term consequences of their
+actions. They simply observe the current state of the network and, depending
+on the payoffs of the one-period game, adopt a certain strategy. Suppose
+that a new agent with a privacy sensitivity $v_{i}$ is considering using
+a mix-net with $\bar{n}_{s}$ users and $\bar{n}_{h}$ nodes.
 
 Then if 
 \begin{gather*}
@@ -672,29 +657,29 @@
 Furthermore, the actual level of anonymity will depend on the mix-net
 protocol and topology (cascade-based or synchronous networks will provide
 larger anonymity sets than asynchronous networks where traffic is divided
-among the nodes). Nevertheless we can highlight the economic rationale
+among the nodes).
+
+Nevertheless we can highlight the economic rationale
 implicit in the above equation. In the first comparison agent $i$ is
 comparing the contribution to her own anonymity of acting as a node to the
 costs of doing so. Acting as a node dramatically increases anonymity, but it
 will also bring more traffic-related costs to the agent. Agents with high
 privacy sensitivity (high $v_{i}$) will clearly be more likely to accept the
 trade-off and become nodes.
+%FIXME emphasize this more
 
 \subsubsection{Strategic Agents: Simple Case}
 
-Strategic agents take into consideration the fact that their action will
-trigger responses by the other agents in the system.
+Strategic agents take into consideration the fact that their actions will
+trigger responses from the other agents.
 
-We start from a simplified scenario where we consider only one-on-one
-interactions. The interactions we have in mix-net systems obviously involve
-a much larger number of players, but the following analysis can give us a
-starting point to consider the issues to be considered when strategic agents
-are interacting. Initially we study the case where each agent knows the
+We start by considering only one-on-one
+interactions. First we study the case where each agent knows the
 other agent's type, but we then extend this case to study what happens when
 there is uncertainty about the other agents' types.
 
-We can consider agent $i$ and agent $j$. Each agent will have to consider
-the other agent's reaction function in her decision process. Let:
+Suppose that agents $i$ and $j$ each consider the other's
+reaction function in her decision process. Let:
 
 \begin{equation*}
 A_{w}=-v_{w}\left( 1-p_{a}\left( \bar{n}_{s}+2,\bar{n}_{h}+2,n_{d},a_{w}^{h}%
@@ -744,15 +729,18 @@
 As before, each agent has a trade-off between the cost of traffic and the
 benefit of traffic when being a node, and a trade-off between having more
 nodes and less nodes. In addition to the previous analysis, now the final
-outcome will also depend on how much each player knows about whether the
-other is honest or not, and how much he knows about the other player's
+outcome also depends on how much each player knows about whether the
+other is honest or not, and how much she knows about the other player's
 sensitivity to privacy. %extend
-When $v_{i}>>v_{j}$ then equilibrium with free-riding can be sustained: the
+When $v_{i} \gg v_{j}$, an equilibrium with free-riding can be sustained: the
 problem can be mapped to \cite{palfrey-rosenthal-89}. 
 %show proof with prob. distribition here, simplygfiy
-Also when the other agent's type is unknown the system can have equilibria
-with free-riding, under certain probability distribution over other player's
-type. This can be proved again following \cite{palfrey-rosenthal-89}.
+The system can have equilibria with free-riding even when the other
+agent's type is unknown, under certain probability distributions
+over the other player's type. This can be proved again following
+\cite{palfrey-rosenthal-89}.
+% FIXME which probability distributions? are they easy or hard,
+% likely or unlikely?
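+% A rough sketch of the intuition (not a formal proof): in the utility
+% function above, agent $w$ prefers to run a node, given the other's
+% action, exactly when $v_{w}\,\Delta p_{a}^{w} > c_{h}$, where
+% $\Delta p_{a}^{w}$ (notation introduced here only) is the increase in
+% $w$'s probability of remaining anonymous from adding her own node. With
+% $v_{i} \gg v_{j}$ one can have
+% $v_{i}\,\Delta p_{a}^{i} > c_{h} > v_{j}\,\Delta p_{a}^{j}$, so $i$ runs
+% a node while $j$ free-rides, which is the equilibrium mapped to
+% \cite{palfrey-rosenthal-89}.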
 
 \subsubsection{Strategic Agents: Multi-player Case}
 
@@ -771,22 +759,22 @@
 everybody would have to resort to non anonymous channels. A trigger strategy
 would punish an agent by making the system unavailable. Of course a high
 sensitivity user will also suffer itself because of this strategy [[extend
-on this]]} This can be seen as a public good with free-riding type of
+on this]]} We can consider this a public-good, free-riding type of
 problem \cite{cornes-sandler-86}. Under which conditions will this not
 happen?
 
 One of the interesting economic aspects of this scenario is that the highly
-sensitive agents \textit{do} want some level of free-riding, from the less
-sensitive types that will provide traffic and therefore noise. On the other
-side, they might not want too much free-riding - for example from highly
-sensitive type pretending to be agents with low sensitivity - if this
-involves too high traffic costs. This latter point however must be
-specified: highly anonymity sensitive types, at parity of traffic, prefer to
-be a node (because anonymity and reliability will increase) and prefer to
+sensitive agents actually \emph{want} some level of free-riding, to
+provide noise. On the other
+hand, they do not want too much free-riding --- for example from highly
+sensitive types pretending to be agents with low sensitivity --- if it
+involves high traffic costs. This latter point, however, must be
+clarified: highly anonymity-sensitive types, at parity of traffic, prefer to
+be nodes (because anonymity and reliability will increase) and prefer to
 work in systems with fewer nodes (otherwise traffic gets too dispersed and
 the anonymity sets get too small). So, if $-v_{i}-c_{n}$ is particularly
 high, i.e. if the cost of not having anonymity is very high for the most
-sensitive agents, then the latter might decide to act as node regardless of
+sensitive agents, then the latter might decide to act as nodes regardless of
 what the others do. %{extend}
 Also, if there are enough agents with lower $v_{i}$, again a ``high'' type
 might have an interest in acting alone if its costs of not having anonymity
@@ -796,15 +784,15 @@
 incur all the costs and be the only nodes in the system.
 
 In fact, when the valuations are continously distributed this is likely to
-create equilibria where the agents with the highest evaluations $v_{i}$ will
-become nodes, and the others, starting with the ``marginal'' type, will
+create equilibria where the agents with the highest valuations $v_{i}$
+become nodes, and the others, starting with the ``marginal'' type,
 provide traffic. This problem can be mapped to the solution in \cite
 {bergstrom-blume--varian-86}. At that point an equilibrium level of
 free-riding might be reached. This condition can be also compared to \cite
 {grossman-stiglitz-80}, where the paradox of informationally efficient
 markets is described.\footnote{%
-The equilibrium in \cite{grossman-stiglitz-80} relies in fact on the
-``marginal'' agent which is indifferent between getting more information
+The equilibrium in \cite{grossman-stiglitz-80} relies on the
+``marginal'' agent who is indifferent between getting more information
 about the market and not getting it. We are grateful to Hal Varian for
 highlighting this for us.}
 
@@ -978,7 +966,7 @@
 Another potential solution, a global PKI to ensure unique identities, is
 unlikely to emerge any time soon.
 
-\subsubsection{Why lazy nodes are more likely than flat-out dishonest nodes}
+\subsubsection{Why lazy nodes are more likely than dishonest nodes}
 
 On the other hand, when we consider strategic dishonest nodes we must also
 analyze their motivations as rational agents. A flat-out dishonest agent
@@ -1095,6 +1083,51 @@
 %to meet all of these objectives sufficiently to create viable systems.
 
 \section{Conclusions and Future Work}
+
+We have described a basic model for characterizing and analyzing the
+various incentives for participants to act either as senders or as
+nodes in strong anonymity infrastructures. In particular, we have
+tried to provide a framework for interpreting anonymity from an
+economic perspective, and we have applied this framework to a number
+of simplified scenarios. The trade-off between simplicity and realism
+must be considered when evaluating our results, which highlight some
+trends in the dynamics of the decision process for agents interested
+in using anonymous systems.
+
+Some of these trends can be summarized as follows. There can be an
+optimal level of free-riding in anonymous mix-net systems, because
+there exist conditions under which agents with high sensitivity to
+anonymity will decide to incur the costs of offering the service to
+others in order to protect their own anonymity. However, we have
+discussed how deploying a completely distributed system might involve
+coordination costs that make it unfeasible. In addition, we have
+discussed how systems of this type rely on the presence of a large
+number of ordinary users (low-sensitivity types) producing traffic
+and noise. The analysis therefore highlights that attracting the
+types with low sensitivity is essential to the success of a mix
+system; this involves dealing with the possible myopia (or flat-out
+disinterest) of low-sensitivity types in the area of anonymity
+protection.
+
+It therefore appears that a hybrid solution involving distributed
+trusted mixes, supported through entry fees paid to a central
+authority and redistributed to the nodes, could be among the most
+interesting options. If certain nodes could be trusted thanks to
+their reputation, highly sensitive agents would have an interest in
+supporting them. Note that the benefit discussed above of being a
+node could be transferred to another entity the agent trusts,
+discounted by the trust level the agent places in that entity. Agents
+with lower sensitivity would be allowed to use the system for free.
+This mechanism should be implemented either by controlling each
+agent's use of the system, so that for example free users might not
+send more than a certain number of messages during a certain span of
+time, or by inserting masked costs such as delays for the free users
+(see, again, Anonymizer.com).
+
+In other words, we have highlighted that there are economic reasons
+for distributed trust and that, under certain conditions, distributed
+trust could be an equilibrium in the system. In real-life
+applications, however, it is likely that coordination costs will be
+so high that a hybrid solution like the one discussed above will be
+the best way to obtain the benefits agents want from the system.
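+% A back-of-the-envelope reading of the hybrid proposal, using hypothetical
+% symbols not defined in the paper: if $n_{p}$ paying users each contribute
+% an entry fee $f$ per period and the central authority redistributes the
+% fees evenly over $n_{h}$ trusted nodes, each node receives $f n_{p}/n_{h}$,
+% which must roughly cover its variable cost $c_{h}$ for participation to be
+% worthwhile (ignoring the operator's own anonymity benefit). Likewise, if
+% the node-operation benefit transferred to a trusted entity is denoted $B$,
+% an agent who places trust level $t \in [0,1]$ in that entity would value
+% the transfer at roughly $t \cdot B$.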
+
+Alternate conclusion paragraph:
 
 We have described a basic model for characterizing and analyzing the various
 incentives for participants to act either as senders or as nodes in strong
