
Re: Proposal 161: Computing Bandwidth Adjustments



On Wed, May 13, 2009 at 01:30:06AM -0400, Nick Mathewson wrote:
> > Each slice takes around 3-6 hours. The entire network can take as long
> > as a week. Hence the desire to parallelize.
> 
> My intuition says that unless we scan the whole network every two days
> (or ideally less), our data will likely be useless.
> 
> What if, instead of running in a batch mode, we were constantly
> scanning and iteratively refining our guesses for ratios, discounting
> old data as newer data arrived?  We'd still want to parallelize some,
> but we wouldn't run into the problem of having some nodes' estimated
> values be far more accurate than others.

I think this is a key idea that we should try to do. Doing a run for
several hours before getting numbers means that our numbers will always
be several hours out of date. If our plan is to do continuous runs
anyway, then we should be able to have a "most recent estimate" after
each measurement, that combines that result with the previous results
in some smart way.
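
To put "some smart way" into a rough sketch: keep one running estimate
per relay and fold each new measurement into it with an exponential
decay, so older data fades out on its own. Something like the Python
below -- the alpha value and the names are just illustrative, not part
of the proposal:

    # Rough sketch: exponentially weighted running estimate per relay.
    # ALPHA is illustrative; the real decay parameter would need tuning.
    ALPHA = 0.3

    estimates = {}  # relay fingerprint -> current bandwidth estimate

    def update_estimate(fingerprint, measured_bw):
        """Fold one new measurement into the running estimate."""
        old = estimates.get(fingerprint)
        if old is None:
            estimates[fingerprint] = measured_bw
        else:
            estimates[fingerprint] = (1 - ALPHA) * old + ALPHA * measured_bw
        return estimates[fingerprint]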

But that said, I'm fine with phase one being "do it the way Mike already
has in mind", in the interest of getting something up and working. I
think the formats we've defined so far won't have to change if we switch
to a better estimation method for attempt #2.

> > I'm not sure what to do with this second problem as it is
> > non-deterministic. Using the alpha smoothing factor from your reply to
> > 160 seems like it should help. I do actually track bandwidth changes
> > during the scan, I and I can write some code to abstain from
> > publishing updates to nodes whose bandwidths change drastically during
> > the scan. But what is "drastically", and is this edge case worth the
> > complexity, or is the alpha smoothing good enough?
> 
> I don't know, alas.  Is there a way we can measure how this is working
> in practice?  I think that if we refresh our bandwidth estimates only
> (say) every day or so, we will really hammer nodes that want to change
> their bandwidth downward, which seems poor.  Lagging on nodes that
> raise their bandwidth isn't quite as bad.
> 
> Roger -- any intuition on this one?  It is too hard for 01:30 EDT Nick.

It remains to be seen how much oscillation we'll actually have. I would
think it's better to oscillate more than to never quite get where we
should actually be. And the more up-to-date our measurement can be,
the less oscillation we should see.

I think if we figure that most relays have about the same actual capacity
from one hour to the next, then we ought to be able to make it work. The
actual parameters for damping will have to be determined experimentally.
But if we keep track of votes we can look afterwards to see how often we
overshoot. It will be a fun exercise to add to Karsten's (already full)
plate. :) Really, even if we do it horribly, I think we're still going
to be doing better than the current load balancing approach.
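
(If we want to be concrete about "how often we overshoot", a strawman
check over archived votes could look like the sketch below: compare
each voted value against the next measurement of the same relay and
count how often the vote was too high by more than some factor. The
threshold and data layout are made up for illustration.)

    # Strawman overshoot check over archived votes.
    # history: list of (voted_bw, next_measured_bw) pairs for one relay.
    # The 1.5x threshold is made up; it would be chosen experimentally.
    def overshoot_fraction(history, threshold=1.5):
        if not history:
            return 0.0
        overshoots = sum(1 for voted, measured in history
                         if voted > threshold * measured)
        return overshoots / float(len(history))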

> > > Slightly less crazy idea 2: if the values change too fast, we could do
> > > a standard time-weighted average, where we compute the new declared
> > > bandwidth BW' as a weighted average of the old declared bandwidth and
> > > the new measured bandwidth.
> >
> > Yeah, I like idea 2, basically what you mentioned in 160. This would
> > be done Tor-end, not scanner-end, right?
> 
> I don't have a strong feeling about that.  To me, it seems easier to
> do it at the scanner end, or as a processing step between the scanner
> end and the authorities... but that's just because I'm thinking of the
> scanner stuff as easier to change if we think of a better algorithm.

Seems smartest to do it in the set of steps before Tor reads in the
file. Whether that's "in" the scanner, or in some intermediate phase,
doesn't much matter to Tor. But I think Tor's job should be to take the
number from the file and put it in its next vote (and then know how to
turn votes into a consensus bandwidth, including clipping to some cap,
modifying by some deterministic rounding-off process, etc).
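
For that last step, the kind of deterministic rule I mean would look
roughly like the sketch below. The low-median rule, the cap value, and
the two-significant-figure rounding are all placeholders here, not
decisions:

    # Rough sketch of turning per-relay bandwidth votes into one
    # consensus value.  Median rule, cap, and rounding precision are
    # placeholders, not settled choices.
    import math

    def consensus_bandwidth(votes, cap=10000):
        """votes: bandwidth values for one relay, one per authority vote."""
        ordered = sorted(votes)
        value = ordered[(len(ordered) - 1) // 2]  # low median: deterministic
        value = min(value, cap)                   # clip to some cap
        if value == 0:
            return 0
        # Deterministic rounding-off: keep two significant figures so
        # small jitter between votes doesn't churn the consensus.
        scale = 10 ** (int(math.floor(math.log10(value))) - 1)
        return int(round(value / float(scale)) * scale)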

--Roger