
descriptor bandwidth field questions



     When a server publishes its descriptor, it includes a "bandwidth" line
that has three fields.  The first two can be set by lines in torrc, and the
third is set by the server's recorded traffic levels in the preceding 24
hours or since the server started relaying traffic.  My questions are as
follows.
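     For concreteness, here is a minimal sketch of the line in question as
it appears in a published descriptor: the keyword "bandwidth" followed by
three decimal fields in bytes per second (the sample numbers below are
invented for illustration, and the parser is my own throwaway, not tor
code):

```python
# Sketch: split a descriptor "bandwidth" line into its three fields
# (average, burst, observed), all in bytes per second.  The sample
# values are made up; parse_bandwidth_line() is a hypothetical helper.
def parse_bandwidth_line(line):
    keyword, avg, burst, observed = line.split()
    assert keyword == "bandwidth"
    return int(avg), int(burst), int(observed)

avg, burst, observed = parse_bandwidth_line("bandwidth 5242880 10485760 4194304")
```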

	1)  Are the first two set by BandwidthRate and BandwidthBurst?  Or
	by RelayBandwidthRate and RelayBandwidthBurst?

	2)  For the recorded (i.e., "observed") rate, which bytes count?
	Relayed bytes only?  Directory service bytes via DirListenAddress?
	Tunneled directory service bytes?  Hidden service invitation and
	rendezvous setup and teardown bytes?
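     For reference regarding 1), the four torrc options in question would
look something like the following (the values are examples only; which
pair actually feeds the descriptor is exactly what I am asking):

```
BandwidthRate 2 MB
BandwidthBurst 4 MB
RelayBandwidthRate 1 MB
RelayBandwidthBurst 2 MB
```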

     My third item takes a bit of explanation.  The third bandwidth field
in the descriptor, which holds the high-watermark rate sustained for at
least 10 seconds during the preceding 24 hours, can vary upward toward the
actual capacity of the server and its network connection, or downward away
from that actual limiting data rate.  If a server goes down and comes back
up again, it seems quite reasonable that the published "observed" (read:
actually used) bandwidth should start from a reasonable initial value,
probably based upon the initial bandwidth self-test results, and adjust
upward from there asymptotically toward the real limit.  However, it is
unclear to me why successive updates ought ever to diverge from that limit
if the server has not gone down and been restarted.
     A matter that should be considered here is the feedback effect that
publishing a server's actually used bandwidth at a time when the tor
network happens to be carrying reduced traffic has upon the allocation of
circuits and streams to that server during the 18+ hours following
publication.  In other words, publishing a reduced bandwidth due to
temporarily lower demand may tend to discourage full use of actually
available bandwidth in the future, leading to publication of a lower
"observed" bandwidth again, leading to reduced traffic being sent through
that server, resulting in publication of a low value yet again, and so on.
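     A toy illustration of that feedback loop, under the deliberately
simplified assumption that clients offer traffic proportional to the
published value and that demand sits persistently below it (all numbers
invented):

```python
# Toy model of the feedback loop: the next published value is whatever
# traffic was actually seen, so a published value that dips never
# recovers -- it decays geometrically instead.
published = 1000            # starting published "observed" rate
for _ in range(5):
    offered = 0.8 * published   # demand temporarily 20% below published
    published = offered         # current behavior: publish what was used
# after 5 update cycles, published == 1000 * 0.8**5, far below capacity
```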

	3) Publishing an "observed" rate that is the greater of the two
	values (i.e., actually used since the last update vs. the last
	published) would eliminate a source of feedback that artificially
	and unnecessarily gives clients a false picture of reduced
	available bandwidth, potentially leading to long-term underusage
	of actual server capacities.  Eliminating the feedback would
	allow servers to publish successively more accurate values for
	their capacities.  Is there a legitimate reason for ever publishing
	an "observed" bandwidth rate that is less than the previously
	published rate when the server has been up and running continuously
	since the last rate was published?
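     The rule proposed in 3) can be sketched as follows (the function and
argument names are my own hypothetical labels, not anything in the tor
source):

```python
# Sketch of proposal 3): while the server stays up, never publish an
# "observed" value lower than the previously published one; only a
# restart resets the value to a fresh measurement (e.g. the initial
# bandwidth self-test result).
def next_observed(measured_since_last_update, last_published, restarted):
    if restarted:
        return measured_since_last_update
    return max(measured_since_last_update, last_published)
```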

     If I missed something for 3), please point it out to me.  Thanks much!


                                  Scott Bennett, Comm. ASMELG, CFIAG
**********************************************************************
* Internet:       bennett at cs.niu.edu                              *
*--------------------------------------------------------------------*
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."                                               *
*    -- Gov. John Hancock, New York Journal, 28 January 1790         *
**********************************************************************