[tor-commits] [torspec/master] Update Proposal #254 with latest circuit padding plans.
commit 3fed83a38d9d85cab6d0437184f3b8909ca0266b
Author: Mike Perry <mikeperry-git@xxxxxxxxxxxxxx>
Date: Mon Oct 29 19:45:58 2018 +0000
Update Proposal #254 with latest circuit padding plans.
---
proposals/254-padding-negotiation.txt | 620 ++++++++++------------------------
1 file changed, 181 insertions(+), 439 deletions(-)
diff --git a/proposals/254-padding-negotiation.txt b/proposals/254-padding-negotiation.txt
index ca5ad14..3b2c883 100644
--- a/proposals/254-padding-negotiation.txt
+++ b/proposals/254-padding-negotiation.txt
@@ -71,331 +71,65 @@ the circuit.
3. End-to-end circuit padding
-For circuit-level padding, we need two types of additional features: the
-ability to schedule additional incoming cells at one or more fixed
-points in the future, and the ability to schedule a statistical
+For circuit-level padding, we need the ability to schedule a statistical
distribution of arbitrary padding to overlay on top of non-padding
traffic (aka "Adaptive Padding").
-In both cases, these messages will be sent from clients to middle nodes
-using the "leaky pipe" property of the 'recognized' field of RELAY
-cells, allowing padding to originate from middle nodes on a circuit in a
-way that is not detectable from the Guard node.
+The statistical mechanisms that define padding are known as padding
+machines. Padding machines can be hardcoded in Tor or specified in the
+consensus, and custom research machines can be listed in Torrc.
-This same mechanism can also be used to request padding from the Guard
-node itself, to achieve link-level padding without the additional
-overhead requirements on middle nodes.
+3.1. Padding Machines
-3.1. Fixed-schedule padding message (RELAY_COMMAND_PADDING_SCHEDULE)
+Circuits can have either one or two state machines at both the origin and at a
+specified middle hop.
-The fixed schedule padding will be encoded in a
-RELAY_COMMAND_PADDING_SCHEDULE cell. It specifies a set of up to 80
-fixed time points in the future to send cells.
+Each state machine can contain up to three states ("Start", "Burst" and
+"Gap") governing its behavior. Not all states need to be used.
-XXX: 80 timers is a lot to allow every client to create. We may want to
-have something that checks this structure to ensure it actually
-schedules no more than N in practice, until we figure out how to
-optimize either libevent or timer scheduling/packet delivery. See also
-Section 4.3.
+Each state of a padding machine specifies either:
+ * A histogram describing inter-arrival cell delays; OR
+ * A parameterized distribution for inter-arrival cell delays
-The RELAY_COMMAND_PADDING_SCHEDULE body is specified in Trunnel as
-follows:
+In either case, the lower bound of the delay distribution can be specified as
+a parameter, or it can be learned by measuring the RTT of the circuit.
- struct relay_padding_schedule {
- u8 schedule_length IN [1..80];
+If the histogram is used, it has an additional special "infinity" bin that
+means "infinite delay".
- /* Number of microseconds before sending cells (cumulative) */
- u32 when_send[schedule_length];
-
- /* Number of cells to send at time point sum(when_send[0..i]) */
- u16 num_cells[schedule_length];
-
- /* Adaptivity: If 1, and server-originating cells arrive before the
- next when_send time, then decrement the next non-zero when_send
- index, so we don't send a padding cell then, too */
- u8 adaptive IN [0,1];
- };
-
-To allow both high-resolution time values, and the ability to specify
-timeout values far in the future, the time values are cumulative. In
-other words, sending a cell with when_send = [MAX_INT, MAX_INT, MAX_INT,
-0...] and num_cells = [0, 0, 100, 0...] would cause the relay to reply
-with 100 cells in 3*MAX_INT microseconds from the receipt of this cell.
-
-This scheduled padding is non-periodic. For any forms of periodic
-padding, implementations should use the RELAY_COMMAND_PADDING_ADAPTIVE
-cell from Section 3.2 instead.
-
-3.2. Adaptive Padding message (RELAY_COMMAND_PADDING_ADAPTIVE)
-
-The following message is a generalization of the Adaptive Padding
-defense specified in "Timing Attacks and Defenses"[2].
-
-The message encodes either one or two state machines, each of which can
-contain one or two histograms ("Burst" and "Gap") governing their
-behavior.
-
-The "Burst" histogram specifies the delay probabilities for sending a
-padding packet after the arrival of a non-padding data packet.
-
-The "Gap" histogram specifies the delay probabilities for sending
-another padding packet after a padding packet was just sent from this
-node. This self-triggering property of the "Gap" histogram allows the
-construction of multi-packet padding trains using a simple statistical
-distribution.
-
-Both "Gap" and "Burst" histograms each have a special "Infinity" bin,
-which means "We have decided not to send a packet".
-
-Each histogram is combined with state transition information, which
-allows a client to specify the types of incoming packets that cause the
-state machine to decide to schedule padding cells (and/or when to cease
-scheduling them).
-
-The client also maintains its own local histogram state machine(s), for
-reacting to traffic on its end.
-
-Note that our generalization of the Adaptive Padding state machine also
-gives clients full control over the state transition events, even
-allowing them to specify a single-state Burst-only state machine if
-desired. See Sections 3.2.1 and 3.2.2 for details.
-
-The histograms and the associated state machine packet layout is
-specified in Trunnel as follows:
-
- /* These constants form a bitfield to specify the types of events
- * that can cause transitions between state machine states.
- *
- * Note that SENT and RECV are relative to this endpoint. For
- * relays, SENT means packets destined towards the client and
- * RECV means packets destined towards the relay. On the client,
- * SENT means packets destined towards the relay, where as RECV
- * means packets destined towards the client.
- */
- const RELAY_PADDING_TRANSITION_EVENT_NONPADDING_RECV = 1;
- const RELAY_PADDING_TRANSITION_EVENT_NONPADDING_SENT = 2;
- const RELAY_PADDING_TRANSITION_EVENT_PADDING_SENT = 4;
- const RELAY_PADDING_TRANSITION_EVENT_PADDING_RECV = 8;
- const RELAY_PADDING_TRANSITION_EVENT_INFINITY = 16;
- const RELAY_PADDING_TRANSITION_EVENT_BINS_EMPTY = 32;
-
- /* Token Removal rules. Enum, not bitfield. */
- const RELAY_PADDING_REMOVE_NO_TOKENS = 0;
- const RELAY_PADDING_REMOVE_LOWER_TOKENS = 1;
- const RELAY_PADDING_REMOVE_HIGHER_TOKENS = 2;
- const RELAY_PADDING_REMOVE_CLOSEST_TOKENS = 3;
-
- /* This payload encodes a histogram delay distribution representing
- * the probability of sending a single RELAY_DROP cell after a
- * given delay in response to a non-padding cell.
- *
- * Payload max size: 113 bytes
- */
- struct burst_state {
- u8 histogram_len IN [2..51];
- u16 histogram[histogram_len];
- u32 start_usec;
- u16 max_sec;
-
- /* This is a bitfield that specifies which direction and types
- * of traffic that cause us to abort our scheduled packet and
- * return to waiting for another event from transition_burst_events.
- */
- u8 transition_start_events;
-
- /* This is a bitfield that specifies which direction and types
- * of traffic that cause us to remain in the burst state: Cancel the
- * pending padding packet (if any), and schedule another padding
- * packet from our histogram.
- */
- u8 transition_reschedule_events;
-
- /* This is a bitfield that specifies which direction and types
- * of traffic that cause us to transition to the Gap state. */
- u8 transition_gap_events;
-
- /* If true, remove tokens from the histogram upon padding and
- * non-padding activity. */
- u8 remove_tokens IN [0..3];
- };
-
- /* This histogram encodes a delay distribution representing the
- * probability of sending a single additional padding packet after
- * sending a padding packet that originated at this hop.
- *
- * Payload max size: 113 bytes
- */
- struct gap_state {
- u8 histogram_len IN [2..51];
- u16 histogram[histogram_len];
- u32 start_usec;
- u16 max_sec;
-
- /* This is a bitfield which specifies which direction and types
- * of traffic should cause us to transition back to the start
- * state (ie: abort scheduling packets completely). */
- u8 transition_start_events;
-
- /* This is a bitfield which specifies which direction and types
- * of traffic should cause us to transition back to the burst
- * state (and schedule a packet from the burst histogram). */
- u8 transition_burst_events;
-
- /* This is a bitfield that specifies which direction and types
- * of traffic that cause us to remain in the gap state: Cancel the
- * pending padding packet (if any), and schedule another padding
- * packet from our histogram.
- */
- u8 transition_reschedule_events;
-
- /* If true, remove tokens from the histogram upon padding and
- non-padding activity. */
- u8 remove_tokens IN [0..3];
- };
-
- /* Payload max size: 227 bytes */
- struct adaptive_padding_machine {
- /* This is a bitfield which specifies which direction and types
- * of traffic should cause us to transition to the burst
- * state (and schedule a packet from the burst histogram). */
- u8 transition_burst_events;
-
- struct burst_state burst;
- struct gap_state gap;
- };
-
- /* This is the full payload of a RELAY_COMMAND_PADDING_ADAPTIVE
- * cell.
- *
- * Payload max size: 455 bytes
- */
- struct relay_command_padding_adaptive {
- /* Technically, we could allow more than 2 state machines here,
- but only two are sure to fit. More than 2 seems excessive
- anyway. */
- u8 num_machines IN [1,2];
-
- struct adaptive_padding_machine machines[num_machines];
- };
-
-3.2.1. Histogram state machine operation
-
-Each of pair of histograms ("Burst" and "Gap") together form a state
-machine whose transitions are governed by incoming traffic and/or
-locally generated padding traffic.
-
-Each state machine has a Start state S, a Burst state B, and a Gap state
-G.
-
-The state machine starts idle (state S) until it receives a packet of a
-type that matches the bitmask in machines[i].transition_burst_events. If
-machines[i].transition_burst_events is 0, transition to the burst state
-happens immediately.
-
-This causes it to enter burst mode (state B), in which a delay t is
-sampled from the Burst histogram, and a timer is scheduled to count down
-until either another matching packet arrives, or t expires. If the
-"Infinity" time is sampled from this histogram, the machine returns to
-the lowest state with the INFINITY event bit set.
-
-If a packet that matches machines[i].burst.transition_start_events
-arrives before t expires, the machine transitions back to the Start
+The state can also provide an optional parameterized distribution that
+specifies how many total cells (or how many padding cells) can be sent on the
+circuit while the machine is in this state, before it transitions to a new
state.
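
Purely as an illustration of the state specification described so far (not
part of this proposal or of Tor's implementation; every name below is
hypothetical), such a state could be modeled roughly as follows:

  # Illustrative sketch only: a possible in-memory shape for a state
  # specification.  These names are hypothetical and are not part of the
  # proposal or of Tor's data structures.
  from dataclasses import dataclass, field
  from typing import List, Optional, Tuple

  @dataclass
  class DelaySpec:
      # Either a histogram (token counts per bin, final bin = "infinity") ...
      histogram: Optional[List[int]] = None
      # ... or a parameterized distribution, e.g. ("uniform", (low, high)).
      distribution: Optional[Tuple[str, tuple]] = None
      # Lower bound of the delays: a fixed parameter, or learned from the
      # circuit RTT when use_rtt_estimate is set.
      start_usec: int = 0
      use_rtt_estimate: bool = False

  @dataclass
  class PaddingStateSpec:
      delay: DelaySpec = field(default_factory=DelaySpec)
      # Optional distribution bounding how many (padding) cells may be sent
      # while the machine is in this state before it must transition.
      length_dist: Optional[Tuple[str, tuple]] = None
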
-If a packet that matches machines[i].burst.transition_reschedule_events
-arrives before t expires, a new delay is sampled and the process is
-repeated again, i.e. it remains in burst mode.
-
-Otherwise, if t expires, a padding message is sent to the other end.
-
-If a packet that matches machines[i].burst.transition_gap_events
-arrives (or is sent), the machine transitions to the Gap state G.
-
-In state G, the machine samples from the Gap histogram and sends padding
-messages when the time it samples expires. If an infinite delay is
-sampled while being in state G we jump back to state B or S,
-depending upon the usage of the infinity event bitmask.
-
-If a packet arrives that matches gap.transition_start_events, the
-machine transitions back to the Start state.
-
-If a packet arrives that matches gap.transition_burst_events, the
-machine transitions back to the Burst state.
-
-If a packet arrives that matches
-machines[i].gap.transition_reschedule_events, the machine remains in G
-but schedules a new padding time from its Gap histogram.
-
-In the event that a malicious or buggy client specifies conflicting
-state transition rules with the same bits in multiple transition
-bitmasks, the transition rules of a state that specify transition to
-earlier states take priority. So burst.transition_start_events
-takes priority over burst.transition_reschedule_events, and both of
-these take priority over burst.transition_gap_events.
-
-Similarly, gap.transition_start_events takes priority over
-gap.transition_burst_events, and gap.transition_burst_events takes
-priority over gap.transition_reschedule_events.
-
-In our generalization of Adaptive Padding, either histogram may actually
-be self-scheduling (by setting the bit
-RELAY_PADDING_TRANSITION_EVENT_PADDING_SENT in their
-transition_reschedule_events). This allows the client to create a
-single-state machine if desired.
-
-Clients are expected to maintain their own local version of the state
-machines, for reacting to their own locally generated traffic, in
-addition to sending one or more state machines to the middle relay. The
-histograms that the client uses locally will differ from the ones it
-sends to the upstream relay.
-
-On the client, the "SENT" direction means packets destined towards the
-relay, where as "RECV" means packets destined towards the client.
-However, on the relay, the "SENT" direction means packets destined
-towards the client, where as "RECV" means packets destined towards the
-relay.
-
-3.2.2. The original Adaptive Padding algorithm
-
-As we have noted, the state machines above represent a generalization of
-the original Adaptive Padding algorithm. To implement the original
-behavior, the following flags should be set in both the client and
-the relay state machines:
-
- num_machines = 1;
-
- machines[0].transition_burst_events =
- RELAY_PADDING_TRANSITION_EVENT_NONPADDING_SENT;
+Each state of a padding machine can react to the following cell events:
+ * Non-padding cell received
+ * Padding cell received
+ * Non-padding cell sent
+ * Padding cell sent
- machines[0].burst.transition_reschedule_events =
- RELAY_PADDING_TRANSITION_EVENT_NONPADDING_SENT;
+Additionally, padding machines emit the following internal events to themselves:
+ * Infinity bin was selected
+ * The histogram bins are empty
+ * The length count for this state was exceeded
- machines[0].burst.transition_gap_events =
- RELAY_PADDING_TRANSITION_EVENT_PADDING_SENT;
+Each state of the padding machine specifies a set of these events that cause
+it to cancel any pending padding, and a set of events that cause it to
+transition to another state, or transition back to itself.
- machines[0].burst.transition_start_events =
- RELAY_PADDING_TRANSITION_EVENT_INFINITY;
+When an event causes a transition to a state (or back to the same state), a
+delay is sampled from the histogram or delay distribution, and a padding cell
+is scheduled to be sent after that delay.
- machines[0].gap.transition_reschedule_events =
- RELAY_PADDING_TRANSITION_EVENT_PADDING_SENT;
+If a non-padding cell is sent before the timer expires, the timer is
+cancelled and a new padding delay is chosen.
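
As a minimal sketch of this per-state event handling (illustrative only; all
names are hypothetical and this is not Tor's circuit padding implementation):

  # Illustrative sketch of the per-state event handling described above.
  # All names are hypothetical; this is not Tor's implementation.
  class MachineState:
      def __init__(self, sample_delay_usec, cancel_events, transitions):
          self.sample_delay_usec = sample_delay_usec  # samples histogram/dist
          self.cancel_events = cancel_events          # events cancelling padding
          self.transitions = transitions              # event -> next state name

  class PaddingMachine:
      def __init__(self, states, start_state):
          self.states = states
          self.current = start_state
          self.scheduled_delay_usec = None            # pending padding, if any

      def handle_event(self, event):
          state = self.states[self.current]
          if event in state.cancel_events:
              self.scheduled_delay_usec = None        # cancel pending padding
          next_state = state.transitions.get(event)
          if next_state is not None:
              # Transition (possibly back to the same state): sample a fresh
              # delay and schedule a padding cell after it.
              self.current = next_state
              self.scheduled_delay_usec = \
                  self.states[next_state].sample_delay_usec()

  # A non-padding cell sent before the timer fires is delivered as the
  # "nonpadding_sent" event: if the state lists it, the pending padding is
  # cancelled and (on a self-transition) a new delay is sampled.
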
- machines[0].gap.transition_burst_events =
- RELAY_PADDING_TRANSITION_EVENT_NONPADDING_SENT |
- RELAY_PADDING_TRANSITION_EVENT_INFINITY;
+3.1.1. Histogram Specification
-The rest of the transition fields would be 0.
-
-Adding additional transition flags will either increase or decrease the
-amount of padding sent, depending on their placement.
-
-The second machine slot is provided in the event that it proves useful
-to have separate state machines reacting to both sent and received
-traffic.
-
-3.2.3. Histogram decoding/representation
-
-Each of the histograms' fields represent a probability distribution that
-is expanded into bins representing time periods a[i]..b[i] as follows:
+If a histogram is used by a state (as opposed to a fixed parameterized
+distribution), then each of the histogram's fields represents a probability
+distribution that is expanded into bins representing time periods a[i]..b[i]
+as follows:
start_usec,max_sec,histogram_len initialized from appropriate histogram
body.
@@ -436,42 +170,100 @@ increasing bin width allows the histograms to most accurately represent
small interpacket delay (where accuracy is needed), and devote less
accuracy to larger timescales (where accuracy is not as important).
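
The exact bin-expansion rules appear in the unchanged proposal text elided
from this hunk. Purely as a rough illustration of bin widths that increase
toward max_sec (an assumption about that elided text, not a quotation of it):

  # Rough illustration only.  The proposal's exact expansion rules are in the
  # unchanged text elided from this hunk; this sketch merely assumes bin
  # widths that grow toward max_sec, per the description above.
  USEC_PER_SEC = 1_000_000

  def expand_bins(start_usec, max_sec, histogram_len):
      """Return [a[i], b[i]) ranges for the finite bins; the final histogram
      slot (index histogram_len - 1) is the special "infinity" bin."""
      n = histogram_len - 1                 # number of finite bins
      span = max_sec * USEC_PER_SEC
      bins, lo = [], start_usec
      for i in range(n):
          hi = start_usec + span // (2 ** (n - 1 - i))
          bins.append((lo, hi))
          lo = hi
      return bins

  # expand_bins(1000, 10, 5) -> four bins of increasing width covering
  # roughly 1ms..10s, plus the infinity bin.
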
-3.2.4. Token removal and refill
+3.1.2. Histogram Token Removal
-If the remove_tokens field is set to a non-zero value for a given
-state's histogram, then whenever a padding packet is sent, the
-corresponding histogram bin's token count is decremented by one.
+Tokens can be optionally removed from histogram bins whenever a padding or
+non-padding packet is sent. With this token removal, the histogram functions
+as an overall target delay distribution for the machine while it is in that
+state.
-If a packet matching the current state's transition_reschedule_events
-bitmask arrives from the server before the chosen padding timer expires,
-then a token is removed from a non-empty bin corresponding to
-the delay since the last packet was sent, and the padding packet timer
-is re-sampled from the histogram.
+If token removal is enabled, when a padding packet is sent, a token is removed
+from the bin corresponding to the target delay. When a non-padding packet is
+sent, the actual delay from the previous packet is calculated, and the
+histogram bin corresponding to that delay is inspected. If that bin has
+tokens remaining, it is decremented.
+
+If the bin has no tokens left, the state removes a token from a different bin,
+as specified in its token removal rule. The following token removal options
+are defined:
+ * None -- Never remove any tokens
+ * Exact -- Only remove from the target bin; if it is empty, ignore it.
+ * Higher -- Remove from the next higher non-empty bin
+ * Lower -- Remove from the next lower non-empty bin
+ * Closest -- Remove from the closest non-empty bin by index
+ * Closest_time -- Remove from the closest non-empty bin by time
+
+When all bins are empty in a histogram, the padding machine emits the internal
+"bins empty" event to itself.
+
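A minimal sketch of these removal rules (illustrative only; the helper below
is hypothetical and not part of Tor):

  # Illustrative sketch of the token removal rules listed above; names are
  # hypothetical and this is not Tor's implementation.
  def remove_token(histogram, target_bin, rule,
                   bin_midpoints_usec=None, target_usec=None):
      """histogram: mutable list of token counts for the finite bins."""
      if rule == "none":
          return
      if histogram[target_bin] > 0:
          histogram[target_bin] -= 1      # normal case: target bin has tokens
          return
      if rule == "exact":
          return                          # target bin empty: ignore it
      nonempty = [i for i, tokens in enumerate(histogram) if tokens > 0]
      if not nonempty:
          return                          # caller emits the "bins empty" event
      if rule == "higher":
          pick = min((i for i in nonempty if i > target_bin), default=None)
      elif rule == "lower":
          pick = max((i for i in nonempty if i < target_bin), default=None)
      elif rule == "closest":
          pick = min(nonempty, key=lambda i: abs(i - target_bin))
      else:                               # "closest_time"
          pick = min(nonempty,
                     key=lambda i: abs(bin_midpoints_usec[i] - target_usec))
      if pick is not None:
          histogram[pick] -= 1
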
+3.2. Machine Selection
+
+Clients will select which of the available padding machines to use based on
+the conditions that these machines specify. These conditions include:
+ * How many hops the circuit must be in order for the machine to apply
+ * If the machine requires vanguards to be enabled to apply
+ * The state the circuit must be in for machines to apply (building,
+ relay early cells remaining, opened, streams currently attached).
+ * If the circuit purpose matches a set of purposes for the machine.
+ * If the target hop of the machine supports circuit padding.
+
+Clients will only select machines whose conditions fully match the given
+circuit.
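
As an illustrative sketch of this condition matching (hypothetical field
names only, not Tor's structures):

  # Illustrative sketch only; the condition and circuit fields named here
  # are hypothetical, not Tor's actual structures.
  def machine_applies(machine, circ):
      return (circ.num_hops >= machine.min_hops and
              (not machine.requires_vanguards or circ.vanguards_enabled) and
              circ.state in machine.allowed_circ_states and
              circ.purpose in machine.allowed_purposes and
              circ.target_hop_supports_padding)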
+
+3.3. Machine Negotiation
+
+When a machine is selected, the client uses leaky-pipe delivery to send a
+RELAY_COMMAND_PADDING_NEGOTIATE cell to the target hop of the machine, using
+the following trunnel relay cell payload format:
+
+ /**
+ * This command tells the target middle node to start or stop the
+ * specified padding machine on this circuit. */
+ struct circpad_negotiate {
+ u8 version IN [0];
+ u8 command IN [CIRCPAD_COMMAND_START, CIRCPAD_COMMAND_STOP];
+
+ /** Machine type is left unbounded because we can specify
+ * new machines in the consensus */
+ u8 machine_type;
+ };
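
Purely for illustration, the payload above is three u8 fields; a sketch of
packing it follows (the numeric command values are assumptions, since they
are not given in this diff):

  # Illustrative packing sketch.  Only the field layout (three u8 fields in
  # this order) comes from the struct above; the numeric command values are
  # assumptions, since they are not given in this diff.
  import struct

  CIRCPAD_COMMAND_STOP = 1    # assumed value
  CIRCPAD_COMMAND_START = 2   # assumed value

  def pack_circpad_negotiate(command, machine_type, version=0):
      return struct.pack("!BBB", version, command, machine_type)

  # pack_circpad_negotiate(CIRCPAD_COMMAND_START, machine_type=0)
  #   -> b'\x00\x02\x00'
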
-The three enums for the remove_tokens field govern if we take the token
-out of the nearest lower non-empty bin, the nearest higher non-empty
-bin, or simply the closest non-empty bin.
+Upon receipt of a RELAY_COMMAND_PADDING_NEGOTIATE cell, the middle node sends
+a RELAY_COMMAND_PADDING_NEGOTIATED cell with the following format:
+
+ /**
+ * This cell acknowledges a padding negotiation request, reporting
+ * whether the specified padding machine was started or stopped. */
+ struct circpad_negotiated {
+ u8 version IN [0];
+ u8 command IN [CIRCPAD_COMMAND_START, CIRCPAD_COMMAND_STOP];
+ u8 response IN [CIRCPAD_RESPONSE_OK, CIRCPAD_RESPONSE_ERR];
+
+ /** Machine type is left unbounded because we can specify
+ * new machines in the consensus */
+ u8 machine_type;
+ };
-If the entire histogram becomes empty, it is then refilled to the
-original values. This refill happens prior to any state transitions due
-to RELAY_PADDING_TRANSITION_EVENT_BINS_EMPTY (but obviously does not
-prevent the transition from happening).
+If the response field is CIRCPAD_RESPONSE_OK, padding was successfully
+negotiated. If it is CIRCPAD_RESPONSE_ERR, the machine is torn down and we do
+not pad.
-3.2.5. Constructing the histograms
+4. Examples of Padding Machines
-Care must be taken when constructing the histograms themselves, since
-their non-uniform widths means that the actual underlying probability
-distribution needs to be both normalized for total number of tokens, as
-well as the non-uniform histogram bin widths.
+In the original WTF-PAD design[2], the state machines are used as follows:
-Care should also be taken with interaction with the token removal rules
-from Section 3.2.4. Obviously using a large number of tokens will cause
-token removal to have much less of an impact upon the adaptive nature of
-the padding in the face of existing traffic.
+The "Burst" histogram specifies the delay probabilities for sending a
+padding packet after the arrival of a non-padding data packet.
-Actual optimal histogram and state transition construction for different
-traffic types is expected to be a topic for further research.
+The "Gap" histogram specifies the delay probabilities for sending
+another padding packet after a padding packet was just sent from this
+node. This self-triggering property of the "Gap" histogram allows the
+construction of multi-packet padding trains using a simple statistical
+distribution.
+
+Both "Gap" and "Burst" histograms each have a special "Infinity" bin,
+which means "We have decided not to send a packet".
Intuitively, the burst state is used to detect when the line is idle
(and should therefore have few or no tokens in low histogram bins). The
@@ -481,76 +273,71 @@ stalls, or has a gap.
The gap state is used to fill in otherwise idle periods with artificial
payloads from the server (and should have many tokens in low bins, and
-possibly some also at higher bins).
+possibly some also at higher bins). In this way, the gap state either
+generates entirely fake streams of cells, or extends real streams with
+additional cells.
+
+The Adaptive Padding Early implementation[3] uses parameterized distributions
+instead of histograms, but otherwise uses the states in the same way.
+
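As an illustrative sketch (not part of this proposal), a WTF-PAD-style
machine can be written in the event/state vocabulary of Section 3.1; the
histogram contents below are invented, and only the transition structure
follows the burst/gap behavior described above:

  # Illustrative sketch of a WTF-PAD-style machine in the event/state
  # vocabulary of Section 3.1.  Histogram contents are made up.
  WTF_PAD_STYLE_MACHINE = {
      "start": {
          # Idle until real traffic is sent.
          "transitions": {"nonpadding_sent": "burst"},
      },
      "burst": {
          # Few or no tokens in low bins: padding fires only if the line
          # goes quiet.  Entries are (delay_usec, tokens); "inf" = infinity.
          "histogram": [(100_000, 1), (500_000, 4), (2_000_000, 6), ("inf", 5)],
          "transitions": {"nonpadding_sent": "burst",  # re-sample, stay here
                          "padding_sent": "gap",       # we padded: fill the gap
                          "infinity": "start"},        # chose not to pad
      },
      "gap": {
          # Many tokens in low bins: emit trains of padding while idle.
          "histogram": [(1_000, 10), (10_000, 6), (100_000, 2), ("inf", 1)],
          "transitions": {"padding_sent": "gap",       # keep the train going
                          "nonpadding_sent": "burst",
                          "infinity": "burst"},
      },
  }
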
+It should be noted that due to our generalization of these states and their
+transition possibilities, more complicated interactions are also possible. For
+example, it is possible to simulate circuit extension, so that all circuits
+appear to continue to extend up until the RELAY_EARLY cell count is depleted.
-It should be noted that due to our generalization of these states and
-their transition possibilities, more complicated interactions are also
-possible.
+It is also possible to create machines that simulate traffic on unused
+circuits, or mimic onion service activity on clients that aren't otherwise
+using onion services.
-4. Security considerations and mitigations
+5. Security considerations and mitigations
The risks from this proposal are primarily DoS/resource exhaustion, and
side channels.
-4.1. Rate limiting and accounting
+5.1. Rate limiting
-Fully client-requested padding introduces a vector for resource
-amplification attacks and general network overload due to
-overly-aggressive client implementations requesting too much padding.
-
-Current research indicates that this form of statistical padding should
-be effective at overhead rates of 50-60%. This suggests that clients
-that use more padding than this are likely to be overly aggressive in
-their behavior.
+Current research[2,3] indicates that padding should be effective against
+website traffic fingerprinting at overhead rates of 50-60%. Circuit setup
+behavior can be concealed with far less overhead.
We recommend that three consensus parameters be used in the event that
the network is being overloaded from padding to such a degree that
padding requests should be ignored:
- * CircuitPaddingMaxRatio
- - The maximum ratio of padding traffic to non-padding traffic
- (expressed as a percent) to allow on a circuit before ceasing
- to pad. Ex: 75 means 75 padding packets for every 100 non-padding
- packets.
- - Default: 120
- * CircuitPaddingLimitCount
+ * circpad_max_machine_padding_pct
+ - The maximum percentage of sent padding traffic, relative to total
+ sent traffic (padding plus non-padding), to allow on a padding
+ machine before ceasing to pad. Ex: 75 means 75 padding packets for
+ every 100 non-padding+padding packets. This definition is consistent
+ with the overhead values in Proposal #265.
+ * circpad_machine_allowed_cells
- The number of padding cells that must be transmitted before the
- ratio limit is applied.
- - Default: 5000
- * CircuitPaddingLimitTime
- - The time period in seconds over which to count padding cells for
- application of the ratio limit (ie: reset the limit count this
- often).
- - Default: 60
+ per-machine ratio limit is applied.
+ * circpad_max_global_padding_pct
+ - The maximum percentage of sent padding traffic, relative to total
+ sent traffic (padding plus non-padding), to allow globally at a
+ client or relay before ceasing to pad. Ex: 75 means 75 padding
+ packets for every 100 non-padding+padding packets. This definition
+ is consistent with the overhead values in Proposal #265.
+
+Additionally, each machine can specify its own per-machine limits for
+the allowed cell counters and padding overhead percentages.
-XXX: Should we cap padding at these rates, or fully disable it once
-they're crossed? Probably cap?
+When either global or machine limits are reached, padding is no longer
+scheduled. The machine simply becomes idle until the overhead drops below
+the threshold.
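
A minimal sketch of the per-machine check implied by these definitions
(hypothetical names; the padding percentage is computed against padding plus
non-padding cells):

  # Illustrative sketch of the per-machine overhead check implied by the
  # definitions above; names are hypothetical.
  def machine_may_pad(padding_sent, nonpadding_sent,
                      allowed_cells, max_padding_pct):
      if padding_sent <= allowed_cells:
          return True       # ratio limit not applied until this many cells
      total = padding_sent + nonpadding_sent
      return 100 * padding_sent <= max_padding_pct * total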
+
+5.2. Overhead accounting
In order to monitor the quantity of padding to decide if we should alter
these limits in the consensus, every node will publish the following
values in a padding-counts line in its extra-info descriptor:
- * write-drop-multihop
- - The number of RELAY_DROP cells sent by this relay to a next hop
- that is listed in the consensus.
- * write-drop-onehop
- - The number of RELAY_DROP cells sent by this relay to a next hop
- that is not listed in the consensus.
- * write-pad
- - The number of CELL_PADDING cells sent by this relay.
- * write-total
- - The total number of cells sent by this relay.
- * read-drop-multihop
- - The number of RELAY_DROP cells read by this relay from a hop
- that is listed in the consensus.
- * read-drop-onehop
- - The number of RELAY_DROP cells read by this relay from a hop
- that is not listed in the consensus.
- * read-pad
- - The number of CELL_PADDING cells read by this relay.
- * read-total
- - The total number of cells read by this relay.
+ * read_drop_cell_count
+ - The number of RELAY_DROP cells read by this relay.
+ * write_drop_cell_count
+ - The number of RELAY_DROP cells sent by this relay.
Each of these counters will be rounded to the nearest 10,000 cells. This
rounding parameter will also be listed in the extra-info descriptor line, in
@@ -560,65 +347,20 @@ In the future, we may decide to introduce Laplace Noise in a similar
manner to the hidden service statistics, to further obscure padding
quantities.
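
Purely as an illustration of the nearest-10,000 rounding described above (the
helper name is hypothetical):

  # Illustrative sketch of rounding a counter to the nearest 10,000 cells,
  # as described above; the helper name is hypothetical.
  def round_cell_count(cells, granularity=10_000):
      return ((cells + granularity // 2) // granularity) * granularity

  # round_cell_count(14_999) == 10_000; round_cell_count(15_000) == 20_000
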
-4.2. Malicious state machines
-
-The state machine capabilities of RELAY_COMMAND_PADDING_ADAPTIVE are
-very flexible, and as a result may specify conflicting or
-non-deterministic state transitions.
-
-We believe that the rules in Section 3.2.1 for prioritizing transitions
-towards lower states remove any possibility of non-deterministic
-transitions.
-
-However, because of self-triggering property that allows the state
-machines to schedule more padding packets after sending their own
-locally generated padding packets, care must be taken with the
-interaction with the rate limiting rules in Section 4.1. If the limits
-in section 4.1 are exceeded, the state machines should stop, rather than
-continually poll themselves trying to transmit packets and being blocked
-by the rate limiter at another layer.
-
-4.3. Libevent timer exhaustion
-
-As mentioned in section 3.1, scheduled padding may create an excessive
-number of libevent timers. Care should be taken in the implementation to
-devise a way to prevent clients from sending padding requests
-specifically designed to impact the ability of relays to function by
-causing too many timers to be scheduled at once.
-
-XXX: Can we suggest any specifics here? I can imagine a few ways of
-lazily scheduling timers only when they are close to their expiry time,
-and other ways of minimizing the number of pending timer callbacks at a
-given time, but I am not sure which would be best for libevent.
-
-4.4. Side channels
+5.3. Side channels
In order to prevent relays from introducing side channels by requesting
-padding from clients, all of these commands should only be valid in the
-outgoing (from the client/OP) direction.
-
-Clients should perform accounting on the amount of padding that they
-receive, and if it exceeds the amount that they have requested, they
-alert the user of a potentially misbehaving node, and/or close the
-circuit.
-
-Similarly, if RELAY_DROP cells arrive from the last hop of a circuit,
-rather than from the expected interior node, clients should alert the
-user of the possibility of that circuit endpoint introducing a
-side-channel attack, and/or close the circuit.
-
-4.5. Memory exhaustion
-
-Because interior nodes do not have information on the current circuits
-SENDME windows, it is possible for malicious clients to consume the
-buffers of relays by specifying padding, and then not reading from the
-associated circuits.
+padding from clients, all of the padding negotiation commands are only
+valid in the outgoing (from the client/OP) direction.
-XXX: Tor already had a few flow-control related DoS's in the past[3]. Is
-that defense sufficient here without any mods? It seems like it may be!
+Similarly, to prevent relays from sending malicious padding from arbitrary
+circuit positions, if a RELAY_DROP cell arrives from a hop other than the one
+with which padding was negotiated, that cell is counted as invalid for the
+purposes of the CIRC_BW control port fields, allowing the vanguards addon to
+close the circuit upon detecting this activity.
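
As an illustrative sketch of this client-side check (hypothetical field
names; this is neither Tor nor the vanguards addon):

  # Illustrative sketch of the client-side check described above; the field
  # names are hypothetical.
  def drop_cell_is_valid(circ, from_hop):
      # Padding is acceptable only from the hop padding was negotiated with.
      return (circ.padding_negotiated_hop is not None and
              from_hop == circ.padding_negotiated_hop)
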
-------------------
1. https://gitweb.torproject.org/torspec.git/tree/proposals/251-netflow-padding.txt
-2. http://freehaven.net/anonbib/cache/ShWa-Timing06.pdf
-3. https://blog.torproject.org/blog/new-tor-denial-service-attacks-and-defenses
+2. https://www.cs.kau.se/pulls/hot/thebasketcase-wtfpad/
+3. https://www.cs.kau.se/pulls/hot/thebasketcase-ape/
_______________________________________________
tor-commits mailing list
tor-commits@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-commits