
Re: [tor-bugs] #2536 [Tor Relay]: Disable outgoing token bucket and reduce token bucket refill interval



#2536: Disable outgoing token bucket and reduce token bucket refill interval
-------------------------+--------------------------------------------------
 Reporter:  karsten      |          Owner:  karsten           
     Type:  enhancement  |         Status:  assigned          
 Priority:  normal       |      Milestone:  Tor: 0.2.3.x-final
Component:  Tor Relay    |        Version:                    
 Keywords:               |         Parent:                    
   Points:               |   Actualpoints:                    
-------------------------+--------------------------------------------------

Comment(by Flo):

 We can certainly understand your concerns, and Nick has always had a
 point there. We are simply excited about the performance improvements
 obtained with our initial proposal, which is why we tried to find a way
 to move it forward. Nevertheless, you are right that a modification
 along the lines you suggest might be a good first step, even though we
 expect that it will not resolve all of the performance issues under all
 circumstances.

 To address your comments, we drafted an alternative design that
 strictly maintains the configured bandwidth limits.

 Let us start from our original patch (cf. the tokenbucket proposal):
 there is a regular token bucket on the reading side with a certain rate
 and a certain burst size. Let x denote the current amount of tokens in
 the bucket. On the outgoing side we need something that monitors and
 constrains the outgoing rate, but at the same time avoids holding back
 cells (cf. double-door effects) whenever possible.

 Here we propose something that adopts the role of a token bucket, but
 realizes this functionality in a slightly different way. We call it a
 "credit bucket". Like a token bucket, the credit bucket has a current
 fill level, denoted by y. However, the credit bucket is refilled in a
 different way.
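
 To fix notation, here is a minimal sketch in C of the state involved.
 The struct and field names are ours, purely for illustration; they are
 not identifiers from the Tor source:

   #include <stdint.h>

   /* Hypothetical state for the proposed scheme; the struct and field
    * names are illustrative, not taken from the Tor code base. */
   typedef struct credit_bucket_state_t {
     int64_t x; /* incoming token bucket fill level, in bytes; may drop
                 * below zero (but never below -M) when tokens are
                 * "stolen" by the outgoing side, as described below */
     int64_t y; /* outgoing credit bucket fill level, in bytes; y >= 0 */
     int64_t M; /* outgoing burst allowance, in bytes; positive constant */
   } credit_bucket_state_t;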

 To understand how it works, let us look at the possible operations; a
 combined sketch in C follows the list:

 * As noted above, x is the fill level of a regular token bucket on the
 incoming side and thus gets incremented periodically according to the
 configured rate. No changes here.

 * If x<=0, we are obviously not allowed to read.

 * If x>0, we are allowed to read up to x bytes of incoming data. If k
 bytes are read (k<=x), then we update x and y as follows:

   x = x - k     (1)
   y = y + k     (2)

 (1) is the standard token bucket operation on the incoming side.
 Whenever data is admitted, though, an additional operation is performed:
 (2) allocates the same number of bytes on the outgoing side, which will
 later allow the same number of bytes to leave the onion router without
 delay.

 * If y + x > -M, we are allowed to write up to y + x + M bytes on the
 outgoing side, where M is a positive constant. M specifies a burst size
 for the outgoing side. M should be larger than the number of tokens
 that are refilled during one refill interval; we suggest choosing M on
 the order of a few seconds' "worth" of data. Now if k bytes are written
 on the outgoing side, we proceed as follows:

  1) If k <= y then y = y - k
 In this case we use "saved" credits that were previously allocated on
 the incoming side when incoming data was processed.

  2) If k > y then x = x - (k - y), and then y = 0
 (note the order: x is updated using the old value of y). We generated
 additional traffic in the onion router, so that more data is to be sent
 than has been read (the credit is not sufficient). We therefore "steal"
 tokens from the token bucket on the incoming side to compensate for the
 additionally generated data. This results in correspondingly less data
 being read on the incoming side subsequently. As a result of such an
 operation, the token bucket fill level x on the incoming side may
 become negative (but it can never fall below -M).

 * If y + x <= -M, then outgoing data is held back. This may lead to
 double-door effects, but only in extreme cases where the outgoing
 traffic largely exceeds the incoming traffic, so that the outgoing burst
 size M is exceeded.
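
 Taken together, the operations above could look as follows in C. This
 is only a sketch of the scheme as described, not a patch against Tor's
 connection code; the function names are hypothetical and build on the
 credit_bucket_state_t sketch above:

   /* Periodic refill of the incoming bucket: unchanged from today. */
   static void credit_bucket_refill(credit_bucket_state_t *b,
                                    int64_t new_tokens, int64_t burst)
   {
     b->x += new_tokens;
     if (b->x > burst)
       b->x = burst;
   }

   /* Bytes we may read now: the standard token bucket rule. */
   static int64_t credit_bucket_read_limit(const credit_bucket_state_t *b)
   {
     return b->x > 0 ? b->x : 0;
   }

   /* Record that k bytes were read (k <= read limit): consume tokens (1)
    * and grant the same amount of outgoing credit (2). */
   static void credit_bucket_note_read(credit_bucket_state_t *b, int64_t k)
   {
     b->x -= k;  /* (1) */
     b->y += k;  /* (2) */
   }

   /* Bytes we may write now: y + x + M if y + x > -M, else 0. */
   static int64_t credit_bucket_write_limit(const credit_bucket_state_t *b)
   {
     int64_t lim = b->y + b->x + b->M;
     return lim > 0 ? lim : 0;
   }

   /* Record that k bytes were written (k <= write limit). */
   static void credit_bucket_note_write(credit_bucket_state_t *b, int64_t k)
   {
     if (k <= b->y) {
       b->y -= k;          /* case 1: spend saved credit */
     } else {
       b->x -= k - b->y;   /* case 2: steal the shortfall from the
                            * incoming bucket; since k <= y + x + M,
                            * x can drop below zero but never below -M */
       b->y = 0;
     }
   }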

 Aside from short-term bursts of configurable size (as with every token
 bucket), this procedure guarantees that the configured rate is never
 exceeded (on the application layer, that is; as with the current
 implementation, an attacker may easily cause the onion router to exceed
 the limits on the lower layers arbitrarily). Over time, we never send
 more data than the configured rate allows: every sent byte needs a
 corresponding token on the incoming side, and this token must either
 have been consumed by an incoming byte before (it then became a
 "credit"), or it is "stolen" from the incoming bucket to compensate for
 data generated within the onion router.
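
 To illustrate the accounting with made-up numbers (a refill of 100 KB
 and M = 300 KB, say), using the sketch above:

   credit_bucket_state_t b = { .x = 100*1024, .y = 0, .M = 300*1024 };

   credit_bucket_note_read(&b, 50*1024);   /* x = 50 KB, y = 50 KB */
   credit_bucket_note_write(&b, 80*1024);  /* 50 KB of saved credit is
                                            * spent, the remaining 30 KB
                                            * are stolen:
                                            * x = 20 KB, y = 0 */

   /* The 30 KB generated inside the router reduced x, so correspondingly
    * less data will be read before the next refill. */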

 This modification can be implemented with moderate effort and requires
 changes only at the points where the token bucket operations are
 currently performed.

 We feel that this is not the be-all and end-all solution, because it
 again introduces a feedback loop between the incoming and the outgoing
 side. We therefore still hope to arrive at a simpler and more effective
 design in the future. However, we believe that what we proposed above is
 a good compromise between avoiding double-door effects as far as
 possible, strictly enforcing an application-layer data rate, and keeping
 the extent of code changes small.

 Comments are, as always, highly welcome.

-- 
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/2536#comment:11>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online