Re: Timing attacks from a user's point of view
On Wed, 25 Nov 2009 16:30 -0500, "Xinwen Fu" <xinwenfu@xxxxxxxxx> wrote:
> I guess the approach will not be quite useful.
> 1. Delay is a big enemy of Tor. Read
> http://www.cs.uml.edu/%7Exinwenfu/paper/IPDPS08_Fu.pdf. How much delay is
> tolerable is a problem too.
> 2. An attack can be dynamic against your mechanism by varying the parameters
> of the attack. We already tested the impact of using various batching and
> reordering strategies on attacks. Read
> http://www.cs.uml.edu/%7Exinwenfu/paper/SP07_Fu.pdf. Basically, it is not of
> much use.
> 3. Many people still talk about reordering. Reordering cannot be used for
> TCP at all. It kills the performance. Read
> Xinwen Fu
1, 3. I realize that delays are no good for TCP and, in particular,
common packet delaying techniques cause massive packet reordering. I am
also ready to believe that reordering at the OR level is no better.
Moreover, the latter is completely out of a user's scope, as it needs a
router's help. However, the idea was to introduce some random delay
*before* the entry node (e.g. at the client) to obtain some resistance
to the Evans, Dingledine et al. congestion attack while sacrificing some
latency.
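To make the idea concrete, here is a minimal sketch of what I mean by client-side delay (this is only an illustration of the principle, not Tor code; the function and parameter names are made up):

```python
import random
import time

def send_with_jitter(send_fn, cell, max_delay=0.05):
    """Hold each outgoing cell for a random interval (up to max_delay
    seconds) before handing it to the real transport.  Because the delay
    is applied synchronously on the sending path, cells on a stream keep
    their order; only the inter-cell timing gains noise, so there is no
    TCP-killing reordering involved."""
    time.sleep(random.uniform(0.0, max_delay))
    send_fn(cell)
```

The point is that such noise is added before the entry node ever sees the traffic, so it needs no cooperation from any router.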
2. As I understand it, that paper suggests a kind of traffic
watermarking and requires the attacker to observe both the src--entry
and exit--dst links. Doesn't this mean that the attacker should be a
global observer? The congestion attack, however, can be mounted by a
comparatively weak adversary observing traffic for her own nodes only.
Which parameters of the congestion attack should be varied to defeat the
proposed delays?
Our hope was to make an adversary spend more time identifying the
circuit reliably than the circuit exists for. According to , in 2008 it
took only 3 minutes to identify the nodes (provided that the attacker can
initiate client connections every second). However, since the entry
nodes are taken from a very small set, it seems that the circuit's time
to live is not a big obstacle for the attacker. Surely, *any* reliable
estimate of how long the adversary will be delayed needs a more careful
analysis.
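For a rough sense of the numbers (taking the 3-minute figure above and Tor's default 10-minute circuit lifetime, MaxCircuitDirtiness, as givens; this is back-of-the-envelope only):

```python
# Illustrative numbers: 3 minutes is the identification time cited above;
# 10 minutes is Tor's default circuit lifetime (MaxCircuitDirtiness).
BASE_ATTACK_MIN = 3
CIRCUIT_TTL_MIN = 10

def attack_outlives_circuit(slowdown_factor):
    """True if stretching the attacker's measurement time by
    slowdown_factor pushes identification past one circuit's lifetime."""
    return BASE_ATTACK_MIN * slowdown_factor > CIRCUIT_TTL_MIN
```

So the delay would have to slow the attacker down by more than a factor of 10/3 before a single circuit expires under it; and since guards are reused across circuits, the attacker can simply resume on the next circuit, which is exactly why the small entry set is the real problem.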
My question was whether it is (at least) plausible that delays with
not-too-high variance on the client's side can sufficiently mitigate the
issue.
I suppose that from a user's point of view, the only fruitful way is
to try to disable any protocol-specific features facilitating the attack
(such as JavaScript, refresh tags, etc. for HTTP). Nevertheless, a malicious
HTTP server or exit still has lots of opportunities to force an unaware
client application to make plenty of connections with predictable
timing.
It is also a good question how the recent changes in the alpha influence
the attack in question. If clients choose the circuit with the minimal
build time, does that make delay variances less noisy and the congestion
attack's results more reliable? Or would it be better if clients
dynamically discarded circuits that had become much "slower" than they
were (i.e. that might be under attack)?
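The latter idea could be sketched as a simple client-side heuristic (a toy illustration, not anything in the Tor codebase; the class name and thresholds are made up):

```python
from collections import deque

class CircuitLatencyMonitor:
    """Toy heuristic: track a circuit's recent round-trip times and flag
    it for teardown if the median latency grows well beyond the baseline
    measured at build time, which might indicate a congestion attack."""

    def __init__(self, baseline_rtt, window=10, factor=3.0):
        self.baseline = baseline_rtt
        self.samples = deque(maxlen=window)
        self.factor = factor

    def record(self, rtt):
        self.samples.append(rtt)

    def should_discard(self):
        # Wait for a full window before judging, then compare the median
        # (robust to a few outliers) against a multiple of the baseline.
        if len(self.samples) < self.samples.maxlen:
            return False
        ordered = sorted(self.samples)
        median = ordered[len(ordered) // 2]
        return median > self.factor * self.baseline
```

The obvious danger is that ordinary congestion would also trip such a heuristic, so discarding circuits too eagerly might just churn through the guard set faster.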