Re: Padding, multiplexing and more
On Thu, Dec 19, 2002 at 02:37:05PM +0100, Marc Rennhard wrote:
> Using the padding scheme above and
> using fixed bit-rates on the user-first hop links, it performed well
> enough and it is probably very resistant against a global eavesdropper
> (assuming the users never leave and communicate with a proxy
> all the time to defeat long-term intersection attacks). Against
> active attackers and compromised proxies, this is no longer the
> case, I guess...
Right. My attack obviously works under very low traffic: you compromise
(or simply run) one of the ORs, and then watch who sends you dummies. Each
node in the path, one after another, will send you a dummy. Now you know
the path.
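To make the bookkeeping concrete, here's a toy sketch of what the
compromised OR does (hypothetical names, not actual code; it assumes,
per the description above, that under very low traffic the only
dummies you receive come from nodes on a path through you):

from collections import OrderedDict

class CompromisedOR:
    """Log who sends us dummy cells. Under very low traffic, the
    distinct senders, in order of first appearance, are exactly the
    nodes on the path through us."""

    def __init__(self):
        self.first_seen = OrderedDict()      # sender -> arrival time

    def on_dummy_cell(self, sender, now):
        self.first_seen.setdefault(sender, now)

    def inferred_path(self):
        return list(self.first_seen)

# Usage: feed it the observed dummies and read off the path.
watcher = CompromisedOR()
for t, sender in [(0.1, "OR3"), (0.4, "OR7"), (0.9, "OR1")]:
    watcher.on_dummy_cell(sender, t)
print(watcher.inferred_path())               # ['OR3', 'OR7', 'OR1']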
Now, it's not obvious to me that the attack still works when you have
"lots of traffic". Unfortunately, it's not obvious that it stops working,
either. Somebody want to do some analysis here? It seems like it ought
to be within reach.
While I think it might be quite hard to get "many" of the ORs under
your control, I think we have to assume it's possible to get one. So
this is a real issue (to go by Andrei's "how hard is it to be that
adversary?" metric). My intuition is to be scared of notifying other
ORs that something is happening -- because when I inform other ORs about
my actions, they get the effect of partial monitoring without having to
compromise my network.
So it would seem that if we want to get any benefit from padding, the
padding levels should be uncorrelated with the real traffic coming into
or out of any of the links. But I'd love for somebody to show me that
I'm wrong. :)
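Here's one way to arrange that, as a sketch (a hypothetical design, not
something the current code does): clock cells out of each link at a
constant rate, substituting a dummy whenever the real queue is empty,
so an eavesdropper sees the same cell rate no matter what is flowing
inside.

import queue
import time

def constant_rate_sender(send, real_cells, interval=0.05):
    # Emit exactly one cell every `interval` seconds, forever: a real
    # cell if one is queued, otherwise a dummy. The link's observable
    # volume is thus uncorrelated with the traffic it carries.
    while True:
        time.sleep(interval)
        try:
            cell = real_cells.get_nowait()
        except queue.Empty:
            cell = b"DUMMY"    # must be indistinguishable on the wire
        send(cell)

The obvious cost is that every link burns bandwidth at its peak rate
all the time, which is part of why this is an open question rather
than a plan.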
> The main concern here is about linkability, and I fully agree. Why
> not use a compromise: As long as I'm on www.cnn.com, I can happily
> use the same connection with a second level of multiplexing. Yes,
> the adversary learns (a bit) more than with a new onion per
> connection, but only if we assume he can observe the last OR and
> not the web server. In the latter case, using new connections (and
> new exit ORs) for each web object won't help much as users usually
> navigate using links and the adversary should be able to link all
> objects to the same user by analyzing which page at cnn.com links to
> what other pages. So the compromise is to use a connection as long as we
> are in the same domain, but build a new onion and use a new connection
> (with a new exit OR) when we move to another domain. Comments?
Hm. I like it. But there are all sorts of little complexities that
creep in.
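Concretely, I read the proposal as something like this sketch
(hypothetical names -- build_circuit stands in for the full onion
construction and exit-OR choice):

class PerDomainCircuits:
    def __init__(self, build_circuit):
        self.build_circuit = build_circuit
        self.circuits = {}               # domain -> live circuit

    def circuit_for(self, domain):
        # Reuse the same circuit (and thus the same exit OR) while we
        # stay within one domain; moving to a new domain builds a new
        # onion. Open question, as below: when do entries expire?
        if domain not in self.circuits:
            self.circuits[domain] = self.build_circuit(domain)
        return self.circuits[domain]

The expiry question is where the complexities start.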
For example, right now the client closes connections that have been
idle for 5 minutes and have no circuits on them. Since most people use
short-term circuits (eg for web browsing, rather than an all-night ssh
session), we close the connection a little while after the last circuit
closes, so the ORs don't get overloaded with idle sockets. (If they've
been idle for 5 minutes but still have a circuit in use, then we send a
padding cell. Otherwise a firewall may expire their connection and the
client will never know --- and then future cells on that connection will
simply disappear.)
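In sketch form (not the actual code, and the names are made up), that
current behavior is:

import time
from dataclasses import dataclass

IDLE_TIMEOUT = 5 * 60                    # seconds

@dataclass
class Connection:                        # stand-in for a real OR conn
    last_activity: float
    num_circuits: int = 0
    def close(self): ...
    def send_padding_cell(self): ...

def check_idle(conn, now=None):
    # After 5 idle minutes: close the connection if it carries no
    # circuits, otherwise send a padding cell so a firewall doesn't
    # silently expire a connection that still has a circuit on it.
    now = time.time() if now is None else now
    if now - conn.last_activity < IDLE_TIMEOUT:
        return
    if conn.num_circuits == 0:
        conn.close()
    else:
        conn.send_padding_cell()
        conn.last_activity = now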
But now all circuits will be long-term rather than short-term circuits,
since we can't predict when the user will want to talk to cnn.com
again. We could time out the circuit after 30 minutes of inactivity
(either judged because nothing has gone through that circuit in 30
minutes, or because the user has made no requests of any kind in 30
minutes). I'm worried that that really won't save us much -- a user
pulling up a website that draws from lots of different places still
uses lots of onions. Further, what about sites like doubleclick? Their
job is to correlate your movement across various domains. If you have
separate onions for a variety of sites but each of them causes you to
talk to doubleclick when you use them... are you allowing doubleclick
(or somebody observing doubleclick) to link all those exit nodes back
to the same user?
If this doubleclick attack does work, does it not matter in practice,
because our users will all be using privoxy, which blocks sites like
that?
I need to think about this more. It might be ok, or might be scary. Let
me know what you think.
And while I'm at it, here's a counter-compromise which might help draw out
the issues: rather than one circuit per site, how about one circuit per
5 minutes? You use that circuit for all new connections in that period;
once the period ends, the next new connection triggers a fresh circuit
(which again is used for at most 5 minutes). There are also profiling
opportunities here, but they seem different.
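A sketch of that rotation policy (hypothetical names again;
build_circuit is the same stand-in as above):

import time

ROTATE_AFTER = 5 * 60                    # seconds

class RotatingCircuit:
    def __init__(self, build_circuit):
        self.build_circuit = build_circuit
        self.circuit = None
        self.born = 0.0

    def circuit_for_new_connection(self, now=None):
        # All new connections in a 5-minute window share one circuit;
        # after the window closes, the next new connection gets a
        # fresh one. Connections already open stay where they are.
        now = time.time() if now is None else now
        if self.circuit is None or now - self.born > ROTATE_AFTER:
            self.circuit = self.build_circuit()
            self.born = now
        return self.circuit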
It sounds like maybe it's time to go find those "pseudonymous profiles"
papers I keep seeing, and actually read them.
--Roger