
Re: [or-cvs] Implemented link padding and receiver token buckets



> > But then surely, if you want to find out the bandwidth of real traffic on
> > the network, you just attack the network and then watch the traffic in the
> > next few seconds and it will all be real traffic. I think you have to
> > implement another one of these random back-off schemes I proposed: as the
> > network is congested, put in fewer and fewer dummies and then you may
> > rearrange the order of the dummies and real traffic in a fairly random
> > fashion as well. Should I work out the details or are we not going to
> > bother with such things? What I said above is a fairly weak attack, so I
> > don't mind either way.
>
> Ok, here's my first pass at a fix, in terms of ease of current
> implementation.
>
> Whenever you queue a data cell onto an outbuf, with some probability
> (say, 20%) you also queue a padding cell (either in front or behind the
> data cell). If you prefer, we can flip a biased coin, and as long as it
> comes up "padding" we add a padding cell and flip the coin again.

Wow, that's a very different solution... You are no longer padding to a
constant bandwidth as before, and you are sending a lot less dummy traffic.
Let me just observe that you did not *need* to do the above to solve the
problem you outlined, which I summarise as: if the network is congested and
we keep queuing dummies regardless, latency increases and our buffers grow.
I have a solution to that problem below. As to the properties of the above
dummy policy, a little thought is required!
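Just to check I am reading the proposal correctly, here it is as a rough
sketch in C. None of this is the real or-cvs code; the cell type,
outbuf_queue() and the 20% figure are just placeholders standing in for
whatever you actually have:

/* Sketch of the quoted policy; the cell type, outbuf_queue() and the
 * 20% figure are placeholders, not the real or-cvs code. */
#include <stdio.h>
#include <stdlib.h>

#define PADDING_PROB 0.20              /* the "say, 20%" above */

typedef enum { CELL_DATA, CELL_PADDING } cell_type_t;

/* stand-in for queueing a cell onto a connection's outbuf */
static void outbuf_queue(cell_type_t type) {
  printf("queued %s cell\n", type == CELL_DATA ? "data" : "padding");
}

static int coin_says_padding(void) {
  return ((double)rand() / RAND_MAX) < PADDING_PROB;
}

/* Queue a data cell; then, as long as the biased coin keeps coming up
 * "padding", queue a padding cell behind it.  (The simpler variant is
 * a single flip rather than the loop.) */
void queue_data_cell_with_padding(void) {
  outbuf_queue(CELL_DATA);
  while (coin_says_padding())
    outbuf_queue(CELL_PADDING);
}

Either way the expected number of dummies is a fixed fraction of the data
cells (0.2 per data cell for the single flip, 0.25 for the loop), which is
what my next comment is about.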

Thought: the above policy does not hide the real bandwidth, since it only
puts out dummies amounting to some fixed fraction (about 20%) of the real
traffic. That does not seem to achieve what we want (what do we want? I
assume it is to hide the real traffic bandwidth)... Also, if this is the
policy, then you do not pad links on which no data is travelling at all.
Serious discussion is required on this; it all comes back to the threat
model... Paul?

> Would that address your issue? But yes, more broadly, I think Paul's
> right that we shouldn't be solving some of the timing issues while
> leaving others wide open. :)

That is not how I would do it. Here is my solution:

Have a buffer which stores everything that is ready to be sent on the
network, and run the old algorithm (i.e. every tick queue a data cell if
one is available, otherwise a padding cell), except that you watch the
buffer: if it holds more than, say, 3 seconds' worth of cells and is
growing, then every time you are about to insert a dummy cell you first
flip a 1/2, 1/2 coin. If the buffer has grown and is still growing,
increase the bias of the coin. A rough sketch is below. I would be happy
to implement this, but given my current commitments it might take a few
weeks, which I expect is too slow.
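In (again, made-up) C it would look something like the following; the link
rate, the 5% bias step and the outbuf stub are all numbers and names I have
invented for illustration:

/* Sketch of the back-off scheme; CELLS_PER_SECOND, the bias step and
 * the outbuf stub are invented for illustration only. */
#include <stdio.h>
#include <stdlib.h>

#define CELLS_PER_SECOND  100                      /* placeholder link rate */
#define BACKOFF_THRESHOLD (3 * CELLS_PER_SECOND)   /* ~3 seconds' worth */

typedef enum { CELL_DATA, CELL_PADDING } cell_type_t;

static size_t queued = 0;          /* stand-in for the outbuf length */

static void outbuf_queue(cell_type_t type) {
  queued++;
  printf("queued %s cell\n", type == CELL_DATA ? "data" : "padding");
}

/* probability of suppressing a dummy: starts at 1/2 once the buffer
 * passes the threshold, and increases while the buffer keeps growing */
static double drop_prob = 0.5;

/* Call once per tick: the old algorithm (data cell if available, else
 * padding), plus the coin flip when the buffer is large and growing. */
void tick(int data_cell_ready) {
  static size_t prev_len = 0;
  size_t len = queued;

  if (data_cell_ready) {
    outbuf_queue(CELL_DATA);
  } else if (len > BACKOFF_THRESHOLD && len > prev_len) {
    if (((double)rand() / RAND_MAX) >= drop_prob)
      outbuf_queue(CELL_PADDING);            /* coin came up "send" */
    if (drop_prob < 0.95)
      drop_prob += 0.05;                     /* increase the bias of the coin */
  } else {
    outbuf_queue(CELL_PADDING);
    drop_prob = 0.5;                         /* buffer is fine again; reset */
  }
  prev_len = len;
}

(In the real thing the length would come from the actual outbuf, and cells
would of course be drained as the network accepts them; the stub above only
illustrates the decision logic.)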

A.
------------------
Andrei Serjantov
Queens' College
Cambridge CB3 9ET
http://www.cl.cam.ac.uk/~aas23/