
[freehaven-cvs] Corrected notation in Sections 3 and 4, general clean...



Update of /home/freehaven/cvsroot/doc/fc03
In directory moria.mit.edu:/home/acquisti/work/freehaven/doc/fc03

Modified Files:
	econymics.tex 
Log Message:
Corrected notation in Sections 3 and 4, general cleanup in Sections 3 and 4, a few typos corrected elsewhere.


Index: econymics.tex
===================================================================
RCS file: /home/freehaven/cvsroot/doc/fc03/econymics.tex,v
retrieving revision 1.43
retrieving revision 1.44
diff -u -d -r1.43 -r1.44
--- econymics.tex	15 Dec 2002 09:02:44 -0000	1.43
+++ econymics.tex	16 Dec 2002 02:54:37 -0000	1.44
@@ -210,10 +210,11 @@
 
 In this section and those that follow, we formalize the economic
 analysis of why people might choose to send messages through
-mix-nets\footnote{Mixes were introduced by David Chaum. A mix takes in
+mix-nets.\footnote{Mixes were introduced by David Chaum. A mix
+takes in
   a batch of messages, changes their appearance, and sends them out
   in a new order, thus obscuring the relation of incoming to outgoing
-  messages.}.
+  messages.}
 Here we
 discuss the incentives for the agents to participate either as senders
 or also as nodes, and we propose a general framework for their
@@ -227,55 +228,59 @@
 not having their messages tracked. Different agents might value
 anonymity differently.
 
-Each agent $i$ bases her strategy on the following possible actions:
+Each agent $i$ (where $i=1,...,N$ and $N$ is the number of
+potential participants in the mix-net) bases her strategy on the
+following possible actions $a_{i}$:
 
 \begin{enumerate}
 \item  Act as a user of the system, specifically by sending (and
-receiving) her own traffic over the system, $a^s$, and/or agreeing to
-receive dummy traffic through the system, $a^r$.
+receiving) her own traffic over the system, $a_{i}^s$, and/or
+agreeing to receive dummy traffic through the system, $a_{i}^r$.
 
-\item  Act as an honest node, $a^{h}$, by receiving and forwarding
-traffic (and possibly acting as an exit node), keeping messages secret,
-and possibly creating dummy traffic.
+\item  Act as an honest node, $a_{i}^{h}$, by receiving and
+forwarding traffic (and possibly acting as an exit node), keeping
+messages secret, and possibly creating dummy traffic.
 
-\item  Act as dishonest node, $a^{d}$, by pretending to forward traffic
-but not doing so, by pretending to create dummy traffic but not doing
-so (or sending dummy traffic easily recognizable as such), or by
-eavesdropping traffic to compromise the anonymity of the system.
+\item  Act as dishonest node, $a_{i}^{d}$, by pretending to
+forward traffic but not doing so, by pretending to create dummy
+traffic but not doing so (or sending dummy traffic easily
+recognizable as such), or by eavesdropping traffic to compromise
+the anonymity of the system.
 
-\item  Send messages through conventional non-anonymous channels, $a_{n}$,
-%FIXME should this be a^n ?
-or send no messages at all.
+\item  Send messages through conventional non-anonymous channels,
+$a_{i}^{n}$, or send no messages at all.
 \end{enumerate}
 
-For each complete strategy profile $s=\left(s_{1},...,s_{n}\right)$, each
-agent receives a von Neumann-Morgenstern utility $u_{i}(s)$.
-The utility comes from a variety of benefits and costs. The
-benefits include:
+Various benefits and costs are associated with each agent's
+action and with the simultaneous actions of the other agents. The
+benefits include:
 
 \begin{enumerate}
-\item  Benefits from sending messages anonymously. We model them as a function
-of the subjective value the agent places on the information
-successfully arriving at its destination, $v_{r}$; the subjective value of
-keeping her identity anonymous, $v_{a}$; the perceived level of
-anonymity in the system, $p_{a}$ (the probability that the sender and message
-will remain anonymous); and the perceived level of reliability in the
-system, $p_{r}$ (the probability that the message will be delivered). The
-subjective value of maintaining anonymity could be related
-to the profits the agent expects to make by keeping that information
-anonymous, or the losses the agents expects to avoid by keeping that
-information anonymous. We represent the level of anonymity in the system
-as a function of the traffic (number of agents sending messages in the
-system, $n_{s}$), the number of nodes (number of agents acting as honest
-nodes, $n_{h}$ and as dishonest nodes, $n_{d}$), and the decisions of the
-agent. We assume that this function maps these factors into a probability
-space, $p$.\footnote{%
-Information theoretic anonymity metrics \cite{Diaz02,Serj02} probably
-provide better measures of anonymity: such work shows how the level
-of anonymity achieved by an agent in a mix-net system is associated
-to the particular structure of the system. But probabilities are more
-tractable in our analysis, as well as better than the common ``anonymity
-set'' representation.} In particular:
+\item  Benefits from sending messages anonymously. We model them
+as a function of the subjective value each agent $i$ places on the
+information successfully arriving at its destination, $v_{r_{i}}$;
+the subjective value of keeping her identity anonymous,
+$v_{a_{i}}$; the perceived level of anonymity in the system,
+$p_{a}$ (the probability that the sender and message will remain
+anonymous); and the perceived level of reliability in the system,
+$p_{r}$ (the probability that the message will be delivered). The
+subjective value of maintaining anonymity could be related to the
+profits the agent expects to make by keeping that information
+anonymous, or the losses the agent expects to avoid by keeping
+that information anonymous. We represent the level of anonymity in
+the system as a function of the traffic (number of agents sending
+messages in the system, $n_{s}$), the number of nodes (number of
+agents acting as honest nodes, $n_{h}$, and as dishonest nodes,
+$n_{d}$), and the decisions of the agent. We assume that this
+function maps these factors into a probability
+measure $p\in \left[ 0,1\right] $.\footnote{%
+Information theoretic anonymity metrics \cite{Diaz02,Serj02}
+probably provide better measures of anonymity: such work shows how
+the level of anonymity achieved by an agent in a mix-net system is
+associated to the particular structure of the system. But
+probabilities are more tractable in our analysis, as well as
+better than the common ``anonymity set'' representation.} In
+particular:
 
 \begin{itemize}
 \item  The number of users of the system is positively correlated to the
@@ -289,16 +294,16 @@
 node can undetectably blend their message into their node's traffic,
 so an observer cannot know even when the message is sent.
 
-\item  The relation between the number of nodes and the probability
-of remaining anonymous might not be monotonic. At parity of traffic,
-sensitive agents might want fewer nodes in order to maintain large anonymity
-sets. But if some nodes are dishonest, users may prefer
-more honest nodes (to increase the chance that messages go through honest
-nodes). Agents that act as nodes may prefer fewer nodes,
-to maintain larger anonymity sets at their particular node.
-Hence the probability of remaining anonymous is inversely related to the
-number of nodes but positively related to the ratio of honest/dishonest
-nodes.
+\item  The relation between the number of nodes and the
+probability of remaining anonymous might not be monotonic. For a
+given amount of traffic, sensitive agents might want fewer nodes
+in order to maintain large anonymity sets. But if some nodes are
+dishonest, users may prefer more honest nodes (to increase the
+chance that messages go through honest nodes). Agents that act as
+nodes may prefer fewer nodes, to maintain larger anonymity sets at
+their particular node. Hence the probability of remaining
+anonymous is inversely related to the number of nodes but
+positively related to the ratio of honest/dishonest nodes.
 \end{itemize}
 
 If we assume that honest nodes always deliver messages that go through them,
@@ -328,18 +333,20 @@
 more often; see Section \ref{sec:alternate-incentives}. In addition, when
 message delivery is guaranteed, a node might always choose a longer route to
 reduce risk. We could assign a higher $c_{s}$ to longer routes to reflect
-the cost of additional delay. 
+the cost of additional delay.
 %In general,
 %the difference of $c_{s}$ and $c_{n}$ reflects the delay caused by using
 %the mix-net system.
 We also include here the cost of receiving dummy traffic, $c_r$.
 
-\item  Costs of acting as an honest node, $c_{h}$, by receiving and
-forwarding traffic, creating dummy traffic, or being an exit node (which
-involves potential exposure to liability from abuses). There are both
-fixed and variable costs of being a node. The fixed costs are related
-to the investments necessary to setup the software. The variable costs
-are dominated by the costs of traffic passing through the node.
+\item  Costs of acting as an honest node, $c_{h}$, by receiving
+and forwarding traffic, creating dummy traffic, or being an exit
+node (which involves potential exposure to liability from abuses).
+These costs can be variable or fixed. The fixed costs, for
+example, are related to the investments necessary to set up the
+software. The variable costs are often more significant, and are
+dominated by the costs of traffic passing through the node.
+%Is this true that they are often more significant?
 
 \item  Costs of acting as dishonest node, $c_{d}$ (again carrying traffic;
 and being exposed as a dishonest node may carry a monetary penalty).
@@ -350,107 +357,125 @@
 anonymous messages, being perceived to act as a reliable node, and being
 thought to act as a dishonest node.
 
-Some of these reputation costs and benefits can be modeled endogenously (for
-example, being perceived as an honest node brings that node more traffic, and
-therefore more possibilities to hide that node's messages; similarly, being
-perceived as a dishonest node might bring traffic away from that node).
-They would enter the utility functions only indirectly through the
-changes they provoke in the behavior of the agents. In other cases,
-reputation costs and benefits might be valued per se. While we do not
-consider this option in the simplified model below, we later comment on
-the impact that reputation effects can have on the model.
+Some of these reputation costs and benefits could be modeled
+endogenously (for example, being perceived as an honest node
+brings that node more traffic, and therefore more possibilities to
+hide that node's messages; similarly, being perceived as a
+dishonest node might bring traffic away from that node). In this
+case, they would enter the utility functions only indirectly
+through the changes they provoke in the behavior of the agents. In
+other cases, reputation costs and benefits might be valued
+\textit{per se}. While we do not consider this option in the
+simplified model below, we later comment on the impact that
+reputation effects can have on the model.
 
-We assume that agents want to maximize their expected utility, which is a
-function of expected benefits minus expected costs. We represent the payoff
-function for each agent $i$ in the following form:
+We assume that agents want to maximize their expected utility,
+which is a function of expected benefits minus expected costs. Let
+$S_{i}$ denote the set of strategies available to player $i$, and
+$s_{i}$ a certain member of that set.\ Each strategy $s_{i}$ is
+based on the
+actions $a_{i}$ discussed above. The combination of strategies $%
+(s_{1},...,s_{N})$, one for each player, determines the outcome of
+the game and the associated payoff for each agent. Hence, for each
+complete strategy profile $s=(s_{1},...,s_{N})$ each agent
+receives a von Neumann-Morgenstern utility $u_{i}\left( s\right)$.
+We represent the payoff function for each agent $i$ in the
+following form:
 
 \begin{equation*}
-u_{i}=u\left( 
+u_{i}=u\left(
 \begin{array}{c}
-\theta \left[ \gamma \left( v_{r},p_{r}\left( n_{h},n_{d}\right) \right)
-,\partial \left( v_{a},p_{a}\left( n_{s},n_{h},n_{d},a_{i}^{s}\right)
-\right) ,a_{i}^{s}\right] \, + \, 
+\theta \left[ \gamma \left( v_{r_{i}},p_{r}\left(
+n_{h},n_{d}\right) \right) ,\partial \left( v_{a_{i}},p_{a}\left(
+n_{s},n_{h},n_{d},a_{i}^{s}\right) \right) ,a_{i}^{s}\right] \, +
+\,
 b_{h}a_{i}^{h}+\\
 b_{d}a_{i}^{d} - c_{s}\left( n_{s},n_{h}\right)
-a_{i}^{s}-c_{h}\left( n_{s},n_{h},n_{d}\right) a_{i}^{h}-c_{d}\left(
-..\right) a_{i}^{d}-c_{r}\left( ..\right) a_{i}^{r}-c_{n}
-% FIXME should this end with - $c_na^n$, rather than just $c_n$ ?
+a_{i}^{s}-c_{h}\left( n_{s},n_{h},n_{d}\right)
+a_{i}^{h}-c_{d}\left( ..\right) a_{i}^{d}-c_{r}\left( ..\right)
+a_{i}^{r}-c_{n}a_{i}^{n}
 \end{array}
 \right)
 \end{equation*}
 
-\noindent where $u, \theta, \gamma$, and $\partial$ are unspecified functional forms.
-The payoff function $u$ includes the costs and benefits for all the possible
-actions of the agents, including \textit{not} using the mix-net and instead
-sending the messages through a non-anonymous channel. We can represent these
-actions with dummy variables $a_{i}$.\footnote{%
+\noindent where $\theta, \gamma$, and $\partial$ are unspecified
+functional forms. The payoff function $u$ includes the costs and
+benefits for all the possible actions of the agents, including
+\textit{not} using the mix-net and instead sending the messages
+through a non-anonymous channel. We can represent the various
+strategies by using dummy variables for the various $a_{i}$.\footnote{%
 For example, if the agent chooses not to send the message anonymously, the
 probability of remaining anonymous $p_{a}$ will be equal to zero, $%
 a^{s,d,r,h}$ will be zero too, and the only cost in the function will be $%
-c_{n}$.} Note that $\gamma $ and $\partial$ describe the probability of a
-message being delivered and a message remaining anonymous, respectively.
-We weight these probabilities with the values $v_{r,a}$ because different
-agents might value anonymity and reliability differently, and because in
-different scenarios anonymity and reliability for the same agent might have
+c_{n}$.} Note that $\gamma $ and $\partial$ describe the
+probability of a message being delivered and a message remaining
+anonymous, respectively. We weight these probabilities with the
+values $v_{r_{i},a_{i}}$ because different agents might value
+anonymity and reliability differently, and because in different
+scenarios anonymity and reliability for the same agent might have
 different impacts on her payoff.
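To make the payoff structure concrete, here is a numerical sketch in Python. The paper deliberately leaves $u$, $\theta$, $\gamma$, $\partial$, $p_{a}$, and $p_{r}$ unspecified, so every concrete form and constant below (the additive combination, the toy probability functions, the cost values) is a hypothetical illustration, not the model itself.

```python
# Hypothetical numerical sketch of agent i's payoff u_i.
# The model leaves u, theta, gamma, partial, p_a, and p_r unspecified;
# every concrete form and constant here is an illustrative assumption.

def p_r(n_h, n_d):
    """Toy reliability: fraction of honest nodes among all nodes."""
    total = n_h + n_d
    return n_h / total if total else 0.0

def p_a(n_s, n_h, n_d, acts_as_node=False):
    """Toy anonymity probability in [0, 1]: rises with traffic n_s and
    with the honest/dishonest ratio, falls as the total number of
    nodes grows, and is higher for an agent who runs a node herself."""
    total = n_h + n_d
    if total == 0:
        return 0.0
    base = (n_h / total) * (n_s / (n_s + 1.0)) / (1.0 + 0.01 * total)
    return min(1.0, base + (0.2 if acts_as_node else 0.0))

def utility(action, v_r, v_a, n_s, n_h, n_d, c_s=1.0, c_h=2.0, c_n=0.5):
    """Additive stand-in for u_i for one action: 'send' (use the
    mix-net), 'node' (use it and run an honest node), or 'non-anon'
    (conventional channel: reliable delivery, no anonymity)."""
    if action == "non-anon":
        return v_r - c_n
    node = action == "node"
    benefit = v_r * p_r(n_h, n_d) + v_a * p_a(n_s, n_h, n_d, node)
    return benefit - c_s - (c_h if node else 0.0)
```

With these toy forms, an agent with a high anonymity valuation does better running a node than merely sending, while an agent with a low valuation prefers the non-anonymous channel, matching the qualitative discussion above.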
 
-Note also that the
-costs and benefits from sending the message might be distinct from the
-costs and benefits from keeping the \emph{information} anonymous. For
-example, when Alice anonymously purchases a book, she
-gains a profit equal to the difference between her valuation of the book
-and its price. But if her anonymity is compromised during the process, she
-incurs losses completely independent from the price of the book or her
-valuation of it. The payoff function $u_{i}$ above allows us to represent
-the duality implicit in all privacy issues, as well as the distinction
-between the value of sending a message and the value of keeping it anonymous:
+Note also that the costs and benefits from sending the message
+might be distinct from the costs and benefits from keeping the
+\emph{information} anonymous. For example, when Alice anonymously
+purchases a book, she gains a profit equal to the difference
+between her valuation of the book and its price. But if her
+anonymity is compromised during the process, she could incur
+losses (or miss profits) completely independent of the price of
+the book or her valuation of it. The payoff function $u$ above
+allows us to represent the duality implicit in all privacy issues,
+as well as the distinction between the value of sending a message
+and the value of keeping it anonymous:
 
 \begin{equation*}
 \begin{tabular}{|c|c|}
 \hline
 \textit{Anonymity} & \textit{Reliability} \\ \hline
-{\tiny \ 
+{\tiny \
 \begin{tabular}{c}
-Benefits from remaining anonymous / \\ 
+Benefits from remaining anonymous / \\
 costs avoided remaining anonymous, or
 \end{tabular}
-} & {\tiny 
+} & {\tiny
 \begin{tabular}{c}
-Benefits from sending a message which will be received / \\ 
+Benefits from sending a message which will be received / \\
 costs avoided sending a message, or
 \end{tabular}
 } \\ \hline
-{\tiny \ 
+{\tiny \
 \begin{tabular}{c}
-Costs due to losing anonymity / \\ 
+Costs due to losing anonymity / \\
 \ profits missed because of loss of anonymity
 \end{tabular}
-} & {\tiny 
+} & {\tiny
 \begin{tabular}{c}
-Costs due to not having sent a message / \\ 
+Costs due to not having sent a message / \\
 \ profits missed because of not having sent a message
 \end{tabular}
 } \\ \hline
 \end{tabular}
 \end{equation*}
 
-Henceforth, we always assume that the agent has an incentive to send a
-message as well as to keep it
-anonymous. We also always consider the direct benefits or losses rather than
-their dual opportunity costs or avoided costs. Nevertheless, the above
-representation allows us to formalize the various possible combinations.
-For example, if a certain message is sent to gain some benefit, but
-anonymity must be protected in order to avoid losses, then $v_{r}$ will be
-positive while $v_{a}$ will be negative and $p_{a}$ will enter the payoff
+Henceforth, we always assume that the agent has an incentive to
+send a message as well as to keep it anonymous. We also always
+consider the direct benefits or losses rather than their dual
+opportunity costs or avoided costs. Nevertheless, the above
+representation allows us to formalize the various possible
+combinations. For example, if a certain message is sent to gain
+some benefit, but anonymity must be protected in order to avoid
+losses, then $v_{r_{i}}$ will be positive while $v_{a_{i}}$ will
+be negative and $p_{a}$ will enter the payoff
 function as $\left( 1-p_{a}\right) $.\footnote{%
-Being certain of staying anonymous would therefore
-eliminate the risk of $v_{a}$, while being certain of losing anonymity would
-impose on the agent the full cost $v_{a}$.} On the other side, if the agent
-must send a certain message to avoid some losses but anonymity ensures her
-some benefits, then $v_{r}$ will be negative and $p_{r}$ will enter the
-payoff function as $\left( 1-p_{r}\right) $, while $v_{a}$ will be positive.%
-\footnote{Similarly, guaranteed delivery will eliminate the risk of
-losing $v_{r}$, while delivery failure will impose the full cost $v_{r}$.}
+Being certain of staying anonymous would therefore eliminate the
+risk of $v_{a_{i}}$, while being certain of losing anonymity would
+impose on the agent the full cost $v_{a_{i}}$.} On the other side,
+if the agent must send a certain message to avoid some losses but
+anonymity ensures her some benefits, then $v_{r_{i}}$ will be
+negative and $p_{r}$ will enter the
+payoff function as $\left( 1-p_{r}\right) $, while $v_{a_{i}}$ will be positive.%
+\footnote{Similarly, guaranteed delivery will eliminate the risk
+of losing $v_{r_{i}}$, while delivery failure will impose the full
+cost $v_{r_{i}}$.}
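The sign conventions in the last two footnotes can be checked with a few lines of Python; the numbers are made up, and only the bookkeeping matters.

```python
# Made-up numbers illustrating the sign bookkeeping of the footnotes:
# a positive valuation is a benefit earned with probability p, while a
# negative valuation is a loss suffered with probability (1 - p).

def expected_term(v, p):
    """Expected contribution of one valuation v given success
    probability p (p = p_a for anonymity, p_r for delivery)."""
    return v * p if v >= 0 else v * (1 - p)

# A message sent to gain a benefit (v_r > 0) while anonymity is kept
# to avoid a loss (v_a < 0):
v_r, v_a, pr, pa = 10.0, -50.0, 0.9, 0.8
expected = expected_term(v_r, pr) + expected_term(v_a, pa)

# Certain anonymity (p_a = 1) eliminates the risk of v_a entirely,
# while certain exposure (p_a = 0) imposes the full cost:
assert expected_term(v_a, 1.0) == 0.0
assert expected_term(v_a, 0.0) == v_a
```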
 
 With this framework we can compare, for example, the losses due to
 compromised anonymity to the costs of protecting it. An agent will decide to
@@ -470,24 +495,26 @@
 systems.
 
 Consider a set of $n_{s}$ agents interested in sending anonymous
-communications. Imagine that there is only one system which can be used to
-send anonymous messages, and one other system to send non-anonymous
-messages. Each user has three options: only send her own messages through
-the mix-net; send her messages but also act as a node forwarding messages
-from other users; or don't use the system at all (by sending a message
-without anonymity, or by not sending the message at all). Thus
-initially we do not consider the strategy of choosing to be a bad node, or
-additional honest strategies like creating and receiving dummy traffic. We
-represent the game as a simultaneous-move, repeated game because of the
-large number of participants, and because earlier actions indicate
-only a weak commitment to future actions. A large group
-will have no discernable or agreeable order for the actions of all
-participants, so actions can be considered simultaneous. The limited
-commitment produced by earlier actions allow us to consider a repeated-game
+communications. Imagine that there is only one system which can be
+used to send anonymous messages, and one other system to send
+non-anonymous messages. Each user has three options: only send her
+own messages through the mix-net; send her messages but also act
+as a node forwarding messages from other users; or don't use the
+system at all (by sending a message without anonymity, or by not
+sending the message at all). Thus initially we do not consider the
+strategy of choosing to be a bad node, or additional honest
+strategies like creating and receiving dummy traffic. We represent
+the game as a simultaneous-move, repeated game because of the
+large number of participants and because of the impact of earlier
+actions on future strategies. A large group will have no
+discernable or agreeable order for the actions of all
+participants, so actions can be considered simultaneous. The
+limited commitment produced by earlier actions allows us to
+consider a repeated-game
 scenario.\footnote{%
 In Section \ref{sec:model} we have highlighted that for both nodes and
-simpler users variable costs are more significant than fixed costs.} 
-%Roger, is this the case or not? ie are traffic related costs the highest ones? 
+simpler users variable costs are more significant than fixed costs.}
+%Roger, is this the case or not? ie are traffic related costs the highest ones?
 These two considerations suggest against using a sequential approach
 of the Stackelberg type \cite[Ch. 3]{fudenberg-tirole-91}. For similar
 reasons we also avoid a ``war of attrition/bargaining model'' framework
@@ -527,43 +554,53 @@
 Reputation considerations might alter this point; see
 Section \ref{sec:alternate-incentives}.}
 
-If a user decides to be a node, her costs increase with the volume of
-traffic (we focus here on the traffic-based variable costs). We also
-assume that all agents know the number of agents using the
-system and which of them are acting as nodes. We also assume that all agents perceive the same
-level of anonymity in the system based on traffic and number of nodes.
-Finally, we imagine that agents use the system because they want
-to avoid potential losses from not being anonymous. This sensitivity to
-anonymity can be represented with the continuous variable
-$v_{i}=\left[ \b{v},\bar{v}\right] $. In other words, we initially focus on the goal of
-remaning anonymous given an adversary that can control some nodes and
-observe all communications. We later comment on the addition reliability
+If a user decides to be a node, her costs increase with the volume
+of traffic (we focus here on the traffic-based variable costs). We
+also assume that all agents know the number of agents using the
+system and which of them are acting as nodes. We also assume that
+all agents perceive the same level of anonymity in the system
+based on traffic and number of nodes. Finally, we imagine that
+agents use the system because they want to avoid potential losses
+from not being anonymous. This subjective sensitivity to anonymity
+is represented by $v_{a_{i}}$ (see Section \ref{sec:model}); we can
+initially imagine $v_{a_{i}}$ as a continuous variable with a
+certain distribution across all agents (see below). In other
+words, we initially focus on the goal of remaining anonymous given
+an adversary that can control some nodes and observe all
+communications. We later comment on the additional reliability
 issues.
 
-These assumptions let us reduce the utility function to:
+These assumptions let us re-write the payoff function presented in
+Section \ref{sec:model} in a simpler form:
 \begin{equation*}
-u_{i}=-v_{i}\left( 1-p_{a}\left( n_{s},n_{h},n_{d},a_{i}^{h}\right) \right)
--c_{s}a_{i}^{s}-c_{h}\left( n_{s},n_{h},n_{d}\right) a_{i}^{h}-c_{n}
+u_{i}=-v_{a_{i}}\left( 1-p_{a}\left(
+n_{s},n_{h},n_{d},a_{i}^{h}\right) \right)
+-c_{s}a_{i}^{s}-c_{h}\left( n_{s},n_{h},n_{d}\right)
+a_{i}^{h}-c_{n}
 \end{equation*}
-Thus each agent $i$ tries to \textit{minimize} the costs of sending
-messages and the risk of being tracked. The first component is the
-probability that anonymity will be lost given the number of agents sending
-messages, the number of them acting as honest and dishonest nodes, and
-the action $a$ of agent $i$ itself. This chance is weighted by $v_{i}$,
-the disutility an agent derives from its message being exposed. We also
-include the costs of sending a message through the mix-net system, acting
-as a node when there are $n_{s}$ agents sending messages over $n_{h}$
-and $n_{d}$ nodes, and sending messages through a non-anonymous system,
-respectively. Each period, a rational agent can compare the utility
-coming from each of these three one-period strategies.
+Thus each agent $i$ tries to \textit{minimize} the costs of
+sending messages and the risk of being tracked. The first
+component is the probability that anonymity will be lost given the
+number of agents sending messages, the number of them acting as
+honest and dishonest nodes, and the action $a$ of agent $i$
+itself. This chance is weighted by $v_{a_{i}}$, the disutility
+agent $i$ derives from its message being exposed. We also include
+the costs of sending a message through the mix-net system, acting
+as a node when there are $n_{s}$ agents sending messages over
+$n_{h}$ and $n_{d}$ nodes, and sending messages through a
+non-anonymous system, respectively. Each period, a rational agent
+can compare the utility coming from each of these three one-period
+strategies.
 \begin{equation*}
 \begin{tabular}{cc}
 Action & Payoff \\
-$a_{s}$ & $-v_{i}\left( 1-p_{a}\left( n_{s},n_{h},n_{d}\right) \right) -c_{s}
+$a_{s}$ & $-v_{a_{i}}\left( 1-p_{a}\left( n_{s},n_{h},n_{d}\right)
+\right) -c_{s}
 $ \\
-$a_{h}$ & $-v_{i}\left( 1-p_{a}\left( n_{s},n_{h},n_{d},a_{i}^{h}\right)
+$a_{h}$ & $-v_{a_{i}}\left( 1-p_{a}\left(
+n_{s},n_{h},n_{d},a_{i}^{h}\right)
 \right) -c_{s}-c_{h}\left( n_{s},n_{h},n_{d}\right) $ \\
-$a_{n}$ & $-v_{i}-c_{n}$%
+$a_{n}$ & $-v_{a_{i}}-c_{n}$%
 \end{tabular}
 \end{equation*}
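The three one-period payoffs can be compared mechanically. As before, $p_{a}$ and the cost constants below are assumed toy forms; the model specifies neither.

```python
# Compare the three one-period actions of the simplified model.
# p_a is an assumed toy form and the costs are illustrative constants.

def p_a(n_s, n_h, n_d, acts_as_node=False):
    """Toy anonymity probability (not from the paper)."""
    total = n_h + n_d
    if total == 0:
        return 0.0
    base = (n_h / total) * (n_s / (n_s + 1.0)) / (1.0 + 0.01 * total)
    return min(1.0, base + (0.2 if acts_as_node else 0.0))

def one_period_payoffs(v_a, n_s, n_h, n_d, c_s=1.0, c_h=2.0, c_n=0.5):
    """Payoff of each action, term by term as in the table above."""
    return {
        "a_s": -v_a * (1 - p_a(n_s, n_h, n_d)) - c_s,
        "a_h": -v_a * (1 - p_a(n_s, n_h, n_d, True)) - c_s - c_h,
        "a_n": -v_a - c_n,
    }

payoffs = one_period_payoffs(v_a=30.0, n_s=50, n_h=5, n_d=2)
best_action = max(payoffs, key=payoffs.get)
```

With this high $v_{a_{i}}$ the best action is $a_{h}$; shrinking $v_{a_{i}}$ toward zero flips the choice first to $a_{s}$ and eventually to $a_{n}$.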
 We do not explicitly allow the agent to choose \textit{not} to send a
@@ -572,17 +609,19 @@
 Also, we do not explicitly report the value of sending a successful message.
 Both are simplifications that do not alter the rest of the analysis.
 %FIXME following sentence is huge
-\footnote{We could insert an action $a^{0}$ with a certain disutility from
-not sending any message, and solve the problem of minimizing the expected
-losses. Or, we could have inserted in the payoff function for actions $%
-a^{s,h,n}$ also the utility of sending a successful message compared to not
-sending it (which could be interpreted also as an opportunity cost), and
-solve the dual problem of maximizing the expected utility. Either way, the
-``exit'' strategy for each agent will either be sending a message
-non-anonymously, or not sending it at all, depending on which option
-maximizes the expected benefits or minimizes the expected losses.
-Thereafter, we can simply compare the two other actions (being a user, or
-being also a node) to the locally optimal exit strategy.}
+\footnote{We could insert an action $a^{0}$ with a certain
+disutility from not sending any message, and then solve the
+problem of minimizing the expected
+losses. Or, we could insert in the payoff function for actions $%
+a^{s,h,n}$ also the utility of successfully sending a message
+compared to not sending it (which could be interpreted also as an
+opportunity cost), and solve the dual problem of maximizing the
+expected utility. Either way, the ``exit'' strategy for each agent
+will either be sending a message non-anonymously, or not sending
+it at all, depending on which option maximizes the expected
+benefits or minimizes the expected losses. Thereafter, we can
+simply compare the two other actions (being a user, or being also
+a node) to the optimal exit strategy.}
 
 While this model is simple, it allows us to highlight some of the dynamics
 that might take place in the decision process of agents willing to use a
@@ -591,52 +630,57 @@
 \subsubsection{Myopic Agents}
 
 Myopic agents do not consider the long-term consequences of their
-actions. They simply consider the status of the network and, depending
-on the payoffs of the one-period game, adopt a certain strategy. Suppose
-that a new agent with a privacy sensitivity $v_{i}$ is considering using
-a mix-net with $\bar{n}_{s}$ users and $\bar{n}_{h}$ honest nodes.
+actions. They simply consider the status of the network and,
+depending on the payoffs of the one-period game, adopt a certain
+strategy. Suppose that a new agent with a privacy sensitivity
+$v_{a_{i}}$ is considering using a mix-net with $\bar{n}_{s}$
+users and $\bar{n}_{h}$ honest nodes.
 
 Then if
 \begin{equation*}
 \begin{tabular}{c}
-$-v_{i}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d},a_{i}^{h}%
+$-v_{a_{i}}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d},a_{i}^{h}%
 \right) \right) -c_{s}-c_{h}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d}\right)
 $ \\
-$<-v_{i}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h},n_{d}\right) \right)
+$>-v_{a_{i}}\left( 1-p_{a}\left(
+\bar{n}_{s}+1,\bar{n}_{h},n_{d}\right) \right)
 -c_{s},$ and \\
-$-v_{i}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d},a_{i}^{h}%
+$-v_{a_{i}}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d},a_{i}^{h}%
 \right) \right) -c_{s}-c_{h}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d}\right)
 $ \\
-$<-v_{i}-c_{n}$%
+$>-v_{a_{i}}-c_{n}$%
 \end{tabular}
 \end{equation*}
 agent $i$ will choose to become a node in the mix-net. If
 \begin{equation*}
 \begin{tabular}{c}
-$-v_{i}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d},a_{i}^{h}%
+$-v_{a_{i}}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d},a_{i}^{h}%
 \right) \right) -c_{s}-c_{h}\left( \bar{n}_{s}+1,\bar{n}_{h}+1,n_{d}\right)
 $ \\
-$>-v_{i}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h},n_{d}\right) \right)
+$<-v_{a_{i}}\left( 1-p_{a}\left(
+\bar{n}_{s}+1,\bar{n}_{h},n_{d}\right) \right)
 -c_{s},$ and \\
-$-v_{i}\left( 1-p_{a}\left( \bar{n}_{s}+1,\bar{n}_{h},n_{d}\right) \right)
--c_{s}<-v_{i}-c_{n}$%
+$-v_{a_{i}}\left( 1-p_{a}\left(
+\bar{n}_{s}+1,\bar{n}_{h},n_{d}\right) \right)
+-c_{s}>-v_{a_{i}}-c_{n}$%
 \end{tabular}
 \end{equation*}
 then agent $i$ will choose to be a user of the system. Otherwise, $i$ will
 simply not use the system.
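The myopic rule above reduces to a direct payoff comparison, with the entrant counted into $\bar{n}_{s}$ (and into $\bar{n}_{h}$ if she joins as a node); the agent picks whichever action yields the highest payoff. Again, $p_{a}$ and the cost constants are illustrative assumptions, not the model's.

```python
# Hypothetical sketch of the myopic entrant's choice: compare the
# payoffs of joining as node, joining as plain user, or staying out,
# with the entrant counted into n_s (and into n_h if she runs a node).
# p_a and all constants are illustrative assumptions.

def p_a(n_s, n_h, n_d, acts_as_node=False):
    """Toy anonymity probability (not from the paper)."""
    total = n_h + n_d
    if total == 0:
        return 0.0
    base = (n_h / total) * (n_s / (n_s + 1.0)) / (1.0 + 0.01 * total)
    return min(1.0, base + (0.2 if acts_as_node else 0.0))

def myopic_choice(v_a, n_s, n_h, n_d, c_s=1.0, c_h=2.0, c_n=0.5):
    """Return 'node', 'user', or 'out' for a myopic agent facing a
    network with n_s current users and n_h honest nodes."""
    u_node = -v_a * (1 - p_a(n_s + 1, n_h + 1, n_d, True)) - c_s - c_h
    u_user = -v_a * (1 - p_a(n_s + 1, n_h, n_d)) - c_s
    u_out = -v_a - c_n
    best = max(u_node, u_user, u_out)
    if best == u_node:
        return "node"
    return "user" if best == u_user else "out"
```

In this toy instance a highly sensitive agent joins as a node, while an agent with a very low $v_{a_{i}}$ stays out, as the discussion below argues.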
 
-Our goal is to highlight the economic rationale
-implicit in the above inequalities. In the first case agent $i$ is
-comparing the contribution to her own anonymity of acting as a node to the
+Our goal is to highlight the economic rationale implicit in the
+above inequalities. In the first case agent $i$ is comparing the
+contribution to her own anonymity of acting as a node to the
 costs. Acting as a node dramatically increases anonymity, but it
-will also bring more traffic-related costs to the agent. Agents with high
-privacy sensitivity (high $v_{i}$) will be more likely to accept the
-trade-off and become nodes because they risk a lot by losing
-their anonymity, and because acting as nodes significantly increases their
-probabilities of remaining anonymous. On the other side, agents with a
-lower sensitivity to anonymity might decide that the costs or hassle
-of using the system are too high, and would not send the message (or
-would use non-anonymous channels).
+will also bring more traffic-related costs to the agent. Agents
+with high privacy sensitivity (high $v_{a_{i}}$) will be more
+likely to accept the trade-off and become nodes because they risk
+a lot by losing their anonymity, and because acting as nodes
+significantly increases their probabilities of remaining
+anonymous. On the other hand, agents with a lower sensitivity to
+anonymity might decide that the costs or hassle of using the
+system are too high, and would not send the message (or would use
+non-anonymous channels).
 
 \subsubsection{Strategic Agents: Simple Case.}
 
@@ -697,47 +741,51 @@
 model analyzed in \cite{palfrey-rosenthal-89} where two players decide
 simultaneously whether to contribute to a public good.
 
-In our model, when for example $v_{i} \gg v_{j}$ and $v_{i}$ is large,
-the disutility to player $i$ from not using the system or not being
-a node will be so high that she will decide to be a node even if $j$
-might free ride on her. Hence if $j$ values her anonymity, but not that
-much, the strategies $a_{i}^{h}$,$a_{j}^{s}$ can be an equilibrium of
-the repeated game.
+In our model, when for example $v_{a_{i}} \gg v_{a_{j}}$ and
+$v_{a_{i}}$ is large, the disutility to player $i$ from not using
+the system or not being a node will be so high that she will
+decide to be a node even if $j$ might free ride on her. Hence if
+$j$ values her anonymity, but not that much, the strategies
+$a_{i}^{h}$,$a_{j}^{s}$ can be an equilibrium of the repeated
+game.
 
-In fact, this model might have equilibria with free-riding even when
-the other agent's type is unknown. Let's imagine that both agents know
-that the valuations $v_{i},v_{j}$ are drawn independently from a continuous,
-monotonic probability distribution. Again, when one agent cares about
-her privacy enough, and/or believes that there is a high probability
-that the opponent would act as a dishonest node, then the agent will
-be better off protecting her own interests by becoming a node (again
-see \cite{palfrey-rosenthal-89}). Of course the more interesting cases
-are those when these clear-cut scenarios do not arise, which we
-consider next.
+In fact, this model might have equilibria with free-riding even
+when the other agent's type is unknown. Let's imagine that both
+agents know that the valuations $v_{a_{i}},v_{a_{j}}$ are drawn
+independently from a continuous, monotonic probability
+distribution. Again, when one agent cares about her privacy
+enough, and/or believes that there is a high probability that the
+opponent would act as a dishonest node, then the agent will be
+better off protecting her own interests by becoming a node (again
+see \cite{palfrey-rosenthal-89}). Of course the more interesting
+cases are those in which these clear-cut scenarios do not arise, which
+we consider next.
 
 \subsubsection{Strategic Agents: Multi-player Case.}
 
-Each player now considers the strategic decisions of a vast number of
-other players. Fudenberg and Levine \cite{fudenberg88} propose
-a model where each player plays a large set of identical players, each of which is
-``infinitesimal'', i.e. its actions cannot affect the payoff of the first
-player. We define the payoff of each player as
-the average of his payoffs against the distribution of strategies played by
-the continuum of the other players. In other words, for each type, we will
-have: $u_{i}=\sum_{n_{s}}u_{i}\left( a_{i},a_{-i}\right) $ where the
-notation represents the comparison between one specific agent $i$ and all the
-others. Cooperative solutions with a finite horizon are often not
-sustainable when the actions of other agents are not observable because,
-by backward induction, each agent will have an incentive to deviate. As
-compared to the analysis above with only two agents, now a defection
-of one 
-agent might affect only infinitesimally the payoff of the other agents, so
-the agents might tend not to punish the defector. But then, more agents
-will tend to deviate and the cooperative equilibrium might collapse.
-``Defection'', in fact, could be acting only as a user and refusing to
-be a node when the agent starts realizing that there is enough anonymity in the
-system and she no longer needs to be a node. But if too many agents
-act this way, the system might break down for lack of nodes, after which
+Each player now considers the strategic decisions of a vast number
+of other players. Fudenberg and Levine \cite{fudenberg88} propose
+a model where each player plays a large set of identical players,
+each of which is ``infinitesimal'', i.e. its actions cannot affect
+the payoff of the first player. We define the payoff of each
+player as the average of his payoffs against the distribution of
+strategies played by the continuum of the other players. In other
+words, for each agent, we will have:
+$u_{i}=\sum_{n_{s}}u_{i}\left( a_{i},a_{-i}\right) $ where the
+notation represents the comparison between one specific agent $i$
+and all the others. Cooperative solutions with a finite horizon
+are often not sustainable when the actions of other agents are not
+observable because, by backward induction, each agent will have an
+incentive to deviate from the cooperative strategy. As compared to
+the analysis above with only two agents, now a defection of one
+agent might affect only infinitesimally the payoff of the other
+agents, so the agents might tend not to punish the defector. But
+then, more agents will tend to deviate and the cooperative
+equilibrium might collapse. ``Defection'', in fact, could be
+acting only as a user and refusing to be a node when the agent
+starts realizing that there is enough anonymity in the system and
+she no longer needs to be a node. But if too many agents act this
+way, the system might break down for lack of nodes, after which
 everybody would have to resort to non-anonymous channels.
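The average-payoff definition above can be read as matching agent $i$ against the empirical distribution of strategies in the continuum of other players. A hedged sketch of that computation (the two-action payoff function and all names are illustrative assumptions, not the paper's exact utility function):

```python
def expected_payoff(a_i, strategy_dist, payoff):
    """Average payoff of agent i against the population's strategy
    distribution, in the spirit of Fudenberg and Levine: each
    opponent is 'infinitesimal', so only the aggregate fractions
    matter. strategy_dist maps an action to the fraction of the
    continuum playing it; payoff(a_i, a_j) is i's payoff when
    matched against action a_j."""
    return sum(frac * payoff(a_i, a_j)
               for a_j, frac in strategy_dist.items())

# One defector shifts the mass by only ~1/N, so i's payoff changes
# infinitesimally -- which is why punishment incentives are weak.
u = expected_payoff("user",
                    {"node": 0.7, "user": 0.3},
                    lambda a, b: 1.0 if b == "node" else 0.0)
```

This makes the collapse argument concrete: since a single defection barely moves `strategy_dist`, no individual has an incentive to punish it, yet many simultaneous defections change the distribution enough to destroy the cooperative payoff.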
 
 We can consider this to be a ``public good with free-riding'' type of
@@ -757,46 +805,48 @@
 has a cost for all agents. In addition, coordination costs might be
 prohibitive. This is not a viable strategy.
 
-Second, we must remember that highly sensitive agents, at parity of traffic,
-prefer to be nodes (because anonymity and reliability will increase) and
-prefer to work in systems with fewer nodes (else traffic gets too
-dispersed and the anonymity sets get too small). So, if $-v_{i}-c_{n}$ is
-particularly high, i.e. if the cost of not having anonymity is very high for
-the most sensitive agents, then the latter will decide to act as nodes
-regardless of what the others do. Also, if there are enough agents with
-lower $v_{i}$, again a ``high'' type
-might have an interest in acting alone if its costs of not having anonymity
-would be too high compared to the costs of handling the traffic of the less
-sensitive types. 
+Second, we must remember that highly sensitive agents, for a given
+amount of traffic, prefer to be nodes (because anonymity and
+reliability will increase) and prefer to work in systems with
+fewer nodes (else traffic gets too dispersed and the anonymity
+sets get too small). So, if $-v_{a_{i}}-c_{n}$ is particularly
+high, i.e. if the cost of not having anonymity is very high for
+the most sensitive agents, then the latter will decide to act as
+nodes regardless of what the others do. Also, if there are enough
+agents with lower $v_{a_{i}}$, again a ``high'' type might have an
+interest in acting alone if its costs of not having anonymity
+would be too high compared to the costs of handling the traffic of
+the less sensitive types.
 
-In fact, when the valuations are continously distributed this
+In fact, when the valuations are continuously distributed this
 \emph{might} generate equilibria where the agents with the highest
-valuations $v_{i}$ become nodes, and the others, starting with the
-``marginal'' type (the agent indifferent between the
-benefits she would get from acting as node and the added costs of doing
-so) provide traffic.\footnote{Writing down specific equilibria, again,
+valuations $v_{a_{i}}$ become nodes, and the others, starting with
+the ``marginal'' type (the agent indifferent between the benefits
+she would get from acting as node and the added costs of doing so)
+provide traffic.\footnote{Writing down specific equilibria, again,
 will first involve choosing appropriate anonymity metrics, which
-might be system-dependent.} This problem can be mapped to the solutions in
-\cite {bergstrom-blume--varian-86} or \cite{mackiemason-varian-95}. At
-that point an equilibrium level of free-riding might be reached. This
-condition can be also compared to \cite
-{grossman-stiglitz-80}, where the paradox of informationally efficient
+might be system-dependent.} This problem can be mapped to the
+solutions in \cite{bergstrom-blume--varian-86} or
+\cite{mackiemason-varian-95}. At that point an equilibrium level
+of free-riding might be reached. This condition can also be
+compared to \cite{grossman-stiglitz-80}, where the paradox of
+informationally efficient
 markets is described.\footnote{%
 The equilibrium in \cite{grossman-stiglitz-80} relies on the ``marginal''
 agent who is indifferent between getting more information about the market
 and not getting it. We are grateful to Hal Varian for highlighting this for
 us.}
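The threshold structure of such an equilibrium can be illustrated with a toy computation: agents above the "marginal" valuation become nodes, agents below it only provide traffic. The linear anonymity-gain model and both parameters below are assumptions for illustration, not part of the paper's model:

```python
def split_by_marginal_type(valuations, extra_node_cost, anonymity_gain):
    """Toy version of the 'marginal type' argument: agent i becomes a
    node when her valuation times the extra anonymity gained by acting
    as a node exceeds the extra cost of handling traffic. The
    indifferent agent sits at v* = extra_node_cost / anonymity_gain.
    (Hypothetical linear model, for illustration only.)"""
    v_star = extra_node_cost / anonymity_gain
    nodes = [v for v in valuations if v > v_star]
    senders = [v for v in valuations if v <= v_star]
    return v_star, nodes, senders

# Agents above the marginal valuation v* = 6 become nodes;
# the rest only send traffic through the system.
v_star, nodes, senders = split_by_marginal_type(
    [1.0, 5.0, 7.0, 9.0], extra_node_cost=6.0, anonymity_gain=1.0)
```

In a richer model $v^{*}$ would itself depend on how many agents choose to be nodes, which is exactly why an interior equilibrium level of free-riding can emerge.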
 
-The problems start if we consider now a different situation. Rather than
-having a continuous distribution of valuations $v_{i}$, we consider two
-types of agents: the agent with a high valuation, $v_{H}$, and the agent
-with a low valuations, $v_{L}$. We assume that the $v_{L}$ agents will simply participate
-sending traffic if the system is cheap enough for them to use (but see
-Section \ref{sec:bootstrapping}),
-and we also assume this will not pose any problem to the $v_{H}$
-type, which in fact has an interest in having more traffic. Thus
-we can focus on the interaction between a subset of users: the identical
-high-types. 
+The problems start if we now consider a different situation.
+Rather than having a continuous distribution of valuations
+$v_{a_{i}}$, we consider two types of agents: the agent with a
+high valuation, $v_{H}$, and the agent with a low valuation,
+$v_{L}$. We assume that the $v_{L}$ agents will simply participate
+by sending traffic if the system is cheap enough for them to use (but
+see Section \ref{sec:bootstrapping}), and we also assume this will
+not pose any problem to the $v_{H}$ type, which in fact has an
+interest in having more traffic. Thus we can focus on the
+interaction between a subset of users: the identical high-types.
 
 Here the marginal argument discussed above might not work, and coordination
 might be costly. In order to have a scenario where the system is
@@ -830,7 +880,7 @@
 limited services for free, because they need their traffic as noise.
 %(The revelation principle \cite{fudenberg-tirole-91} indicates
 %that the agent can concentrate on mechanisms where all the agents truthfully
-%reveal their sensitivities.) 
+%reveal their sensitivities.)
 %\footnote{%
 %A ``mechanism'' is a game where agents send messages and a certain
 %allocation that depends on the realized messages (for a textbook
@@ -866,15 +916,15 @@
 acting as nodes.
 
 \item  \emph{Public rankings and reputation}. A higher reputation
-not only attracts more cover traffic but is also a reward in itself.
-Just as the statistics pages for seti@home \cite{seti-stats} encourage
-participation, publically ranking generosity creates an incentive to
-participate. Although the incentives of public recognition and public
-good don't fit in our model very well, we emphasize them because they
-explain most actual current
-node operators. As discussed above, reputation can enter the utility
-function indirectly or directly (when agents value their reputation as
-a good itself).
+not only attracts more cover traffic but is also a reward in
+itself. Just as the statistics pages for seti@home
+\cite{seti-stats} encourage participation, publicly ranking
+generosity creates an incentive to participate. Although the
+incentives of public recognition and public good don't fit in our
+model very well, we emphasize them because they explain the
+participation of most current node operators. As discussed above, reputation can
+enter the utility function indirectly or directly (when agents
+value their reputation as a good itself).
 
 If we publish a list of nodes ordered by safety (based on number of messages
 passing through the node), the high-sensitivity agents will
@@ -1042,14 +1092,15 @@
 %evidence as well as surveys and experimental results have shown how even
 %those individuals who claim to care about their privacy are unwilling to pay
 %even small amounts to defend it - or, vice versa, are ready to trade it for
-%small rewards. 
+%small rewards.
 %In light of the comments by jbash about comparing different systems, would you like to keep the previous comment in the text or not?
 %
 
-Difficulties in bootstrapping the system and the myopic behavior \cite
-{acquisti-varian-02} of some users might make the additional incentive
-mechanisms discussed in Section \ref{sec:alternate-incentives} preferrable
-to a market-only solution.
+Difficulties in bootstrapping the system and the myopic behavior
+\cite {acquisti-varian-02} of some users might make the additional
+incentive mechanisms discussed in Section
+\ref{sec:alternate-incentives} preferable to a market-only
+solution.
 
 \subsection{Customization And Preferential Service Are Risky Too}
 
@@ -1156,8 +1207,9 @@
 
 \section*{Acknowledgments}
 
-Work on this paper was supported by ONR.\@ Thanks to John Bashinski,
-Nick Mathewson, and the anonymous referees for helpful comments.
+Work on this paper was supported by ONR\@. Thanks to John
+Bashinski, Nick Mathewson, Hal Varian, and the anonymous referees
+for helpful comments.
 
 \bibliographystyle{plain}
 \bibliography{econymics}
