[All proposals trimmed; see original email for full details]

On Tue, May 16, 2006 at 01:51:56AM +1000, glymr wrote:

Hi, Glymr. Some of your ideas are good, and almost none are flatly
impossible, but they nearly all are less simple than they initially
appear. I'm not trying to shoot any of them down (except #4), but they
need more thought before they could be implemented as-is.

I'd also like to ask you to read the specs and papers, and search for
discussion surrounding these issues in the list archives. Many of
these have come up before.

> 1. Lower the barrier of entry for server nodes by permitting servers
> to go at any speed down to 1.5kb/s.

That'd be neat, but:
  1) We'd need to solve directory scaling first.
  2) We'd need to re-do routing so that non-dialup users don't go
     through these connections.

> 2. Make all clients run a 1.5kb/s server as a minimum by default.

Again, see the "Challenges" paper.

> 3. Make all nodes into exit nodes.

It's true that we need more exit capacity, but many people aren't
willing to run exit nodes. We can use 2 Mb/s of middleman capacity
for every 1 Mb/s of exit capacity -- why disallow people from
providing it?

> 4. [...]
> Exit nodes send unencrypted data. If the data were logged and
> periodically sent to the local law enforcement they would
> definitely be more likely to protect our interests.

This is a non-starter. If you want to spy on users, please go away.
Wiretapping innocuous information and sending it to the police en
masse is not a good way to protect privacy. Some problems are:

  1. Privacy-conscious users don't want their info given to anybody,
     especially to law enforcement without a warrant.
  2. In many decent jurisdictions, you can't simply give away
     people's personal information willy-nilly.
  3. Keeping logs creates a record that is itself a vulnerability.
     It can get stolen from the server op, or from the people he's
     sending it to, or in transit.
  4. "Law enforcement" is a very broad term.
     Does it include the Saudi morals police? The Chinese censors?
     The IRS?
  5. It is completely unethical. Even the most privacy-ignorant ISPs
     (the ones who give away users' information without asking for a
     warrant) wait to be *asked* before doing so, after all.

Something like this will never get built into Tor so long as I'm
working on it.

> 5. Implement a peer-review type system of protecting against bogus
> and modified tor servers. Servers (in my suggestion, everyone) can
> check to ensure that any node they connect to actually transmits
> traffic. Any node which does not transmit traffic can be assumed
> to be a modified server intended to exploit the network in some
> way.

But most of the good attacks on Tor are either passive timing
attacks, or attacks that introduce timing signatures. A server that
doesn't transmit traffic is probably broken, not attacking the
network. BTW, directory authorities already check whether they can
build circuits through servers.

> The node could also report this to the directory server, and an
> exploiting node would be identifiable by the sheer mass of other
> nodes reporting its lack of carriage.

This allows a sybil attack to take down good nodes. That's
unacceptable.

[...]

> 6. Load balancing - Instead of only using locality as a criteria
> for node selection when generating a circuit, also include a flag
> that servers can raise in their directory profile which indicates
> they are underutilised

  1. Locality *isn't* a criterion for node selection right now.
  2. Directory information doesn't propagate much faster than
     15-minute intervals at best.
  3. If load info is too fine-grained, that's probably an attack
     vector. But if load info is coarse-grained, we might as well
     use the current approach.

> And finally, three, eliminating the constant talk of 'cover
> traffic' which would be unneccessary if the client side were
> selecting nodes without activity, thus maintaining a constant
> stream of packets through every node.
This would do nothing to address end-to-end correlation attacks.

> 7. The directory server could have a diff server operating so that
> it keeps a progressive history of updates to the directory
> information, so that when a node requests an update of the
> directory,

This is about what happens now, except it happens on the client
side: the client learns a summary view, and compares the summary to
its last network view in order to tell which servers it needs to
know more about.

> This may already be implemented, I'm not sure how this part of the
> architecture exactly works.

Then read dir-spec.txt.

> 8. Prioritise the traffic of a user originating connections above
> those it relays.

Again, see the "Challenges" paper. This has anonymity implications.
It may be doable, but those anonymity implications need to be solved
first.

yrs,
-- 
Nick Mathewson
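P.S. To make the client-side comparison under #7 concrete, here's a
minimal sketch in Python. The names, digests, and helper function are
all made up for illustration -- this is not Tor's actual code; see
dir-spec.txt for the real protocol.

```python
# Sketch: the client keeps a map of server name -> descriptor digest,
# downloads a summary map from a directory, and fetches only the
# descriptors that are new or have changed. All names/digests below
# are hypothetical.

def descriptors_to_fetch(local_view, summary):
    """Return server names whose descriptors are missing or stale."""
    return sorted(
        name for name, digest in summary.items()
        if local_view.get(name) != digest
    )

local_view = {"moria1": "a1b2", "tor26": "c3d4", "lefkada": "e5f6"}
summary    = {"moria1": "a1b2", "tor26": "ffff", "peacetime": "0000"}

print(descriptors_to_fetch(local_view, summary))
# -> ['peacetime', 'tor26']
```

The point is that the diff computation lives on the client, so the
directory never has to track per-client history.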