On Thu, Mar 03, 2005 at 08:59:56AM +0100, Martin Balvers wrote:
[...]
> My server uses about a 100 kb/s but i guess a big part of the bandwidth is
> used for middleman or entry node activities.
> Since there are an increasing number of server ops that change their
> server from an exit node to a middleman node, the stress on the remaining
> exit nodes increases.
>
> Maybe the latency of the network can improve if we have exclusive exit nodes.
> I have no idea if this will work. I don't know where or what the current
> bottleneck of the network is.

Hm. This is not a bad idea. (Or at least, if it is a bad idea, it is not
obviously a bad idea.) Roger and I have discussed this in the past, but
haven't yet gotten around to it.

One issue is that we would need to change the client code so that clients
could recognize exit-only servers: as it stands, clients would try to use
them as middleman servers and get confused.

But if we're changing client code anyway, we could just have *clients* do
the right thing, and preferentially avoid using exits at intermediate
positions on their circuits. Clients would have an incentive to do this
anyway, since middleman nodes (as you say) are less likely to be
overloaded, as things now stand on the network.

Of course, there could be some anonymity issues here. None leap right to
mind, but hey.

Anyway, in the long term, we need better solutions to incentive problems.
I'll drop another message about that in a second...

Yours,
--
Nick Mathewson
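
(For concreteness, here is a minimal sketch of the kind of client-side
preference described above: choose non-exit relays for the middle position
of a circuit, and fall back to exit relays only when nothing else is
available. The Router record and choose_middle_node helper are hypothetical
illustrations, not Tor's actual path-selection code.)

    import random
    from dataclasses import dataclass

    @dataclass
    class Router:
        nickname: str
        is_exit: bool    # relay advertises a usable exit policy
        bandwidth: int   # advertised bandwidth, arbitrary units

    def choose_middle_node(routers, exclude=()):
        """Pick a middle-position relay, preferring non-exit relays so that
        scarce exit bandwidth is saved for the exit position."""
        candidates = [r for r in routers if r.nickname not in exclude]
        non_exits = [r for r in candidates if not r.is_exit]
        pool = non_exits or candidates   # fall back to exit relays if we must
        # Weight choices by advertised bandwidth as a crude load-balancing proxy.
        weights = [r.bandwidth for r in pool]
        return random.choices(pool, weights=weights, k=1)[0]

    directory = [
        Router("exitA", True, 300),
        Router("exitB", True, 100),
        Router("midA", False, 200),
        Router("midB", False, 50),
    ]
    print(choose_middle_node(directory, exclude={"exitA"}).nickname)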