I'm not sure if Tor is looking for alternative transport protocols like QUIC.
What if it's a lot faster than TCP on Tor?
One of the issues is that any modified client is easy to fingerprint.
So, as with IPv6, we'd need relays to run QUIC and TCP in parallel for some time; then clients could optionally use QUIC once enough relays supported it. Perhaps relays could open a QUIC UDP port on the same port number as their TCP ORPort, and then advertise support in their descriptors. But TCP would remain the default for the foreseeable future.
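As an illustration (a minimal sketch, not anything Tor implements today, with a hypothetical port number), binding both transports on one port number is straightforward, since TCP and UDP port namespaces are independent:

    /* Minimal sketch: bind a TCP listener and a UDP socket on the same
     * port number (hypothetical ORPort 9001). TCP and UDP ports live in
     * separate namespaces, so the two binds do not conflict. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
      struct sockaddr_in addr;
      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(9001);                   /* hypothetical ORPort */

      int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);  /* existing TCP ORPort */
      int udp_fd = socket(AF_INET, SOCK_DGRAM, 0);   /* QUIC runs over UDP  */
      if (tcp_fd < 0 || udp_fd < 0 ||
          bind(tcp_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
          bind(udp_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
          listen(tcp_fd, SOMAXCONN) < 0) {
        perror("setup");
        return 1;
      }
      printf("listening for TCP and UDP on port 9001\n");
      close(tcp_fd);
      close(udp_fd);
      return 0;
    }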
For example, our IPv6 adoption is still at the stage where clients need to be explicitly configured to use it.
(And parts of it are only coming out in 0.2.8.)
If your modifications don't work like this, then it would be very hard for us to adopt them.
It does work like this. Our testing version has a "parallel codepath" and supports both QUIC and TCP, and we devised our QUIC API to look almost exactly like the traditional UNIX socket API, so the code changes are minimal.
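As a rough illustration (hypothetical names only, not the actual API), the shape is something like:

    /* Illustrative declarations only: hypothetical names sketching a QUIC
     * API that mirrors the UNIX socket API, so call sites in the Tor code
     * would change minimally. */
    #include <sys/socket.h>
    #include <sys/types.h>

    int quic_socket(void);                                /* cf. socket(2)  */
    int quic_connect(int qd, const struct sockaddr *addr,
                     socklen_t addrlen);                  /* cf. connect(2) */
    ssize_t quic_send(int qd, const void *buf, size_t len,
                      int flags);                         /* cf. send(2)    */
    ssize_t quic_recv(int qd, void *buf, size_t len,
                      int flags);                         /* cf. recv(2)    */
    int quic_close(int qd);                               /* cf. close(2)   */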
Even if they did, I don't know if they solve any pressing issues for us.
What about the head-of-line blocking issue and the congestion control issue raised in 2009? (Head-of-line blocking here means that one lost segment on the shared TCP connection stalls every circuit multiplexed over it.) From this paper, it seems they haven't been completely solved.
(And we'd need both a theoretical security analysis, and a code review. And new features come with new risks and new bugs.)
Of course! We don't expect Tor to suddenly start using QUIC because of a couple of emails. But I believe we do have a case for QUIC based on both theory and experimental results, and we will probably make a formal, published argument soon.
I've given you credit for reporting this issue; please feel free to provide your preferred name (or decline) on the ticket.
Thanks!
About the issue: I've checked out the 0.2.8 commit and tested on that. The problem is still there, so I looked deeper into it. I've run it many times, and it seems that once I start restricting paths, whether the bootstrap succeeds becomes nondeterministic. I think it might have something to do with the cache-microdesc-consensus file fetched by that client. To recap, I'm running a network with 11 nodes (2 relays) and 2 clients that have path restrictions. My observations are:
- Each client ends up with a cache-microdesc-consensus file containing 4 relays. Relays 0, 1, and 2 are always there, and the last one changes each time I start the network.
- When all 3 nodes on the restricted path are in the cache-microdesc-consensus file, the bootstrap succeeds quickly. For example, if my path is restricted to R2->R3->R1, then since relays 0, 1, and 2 are always present in the consensus, the bootstrap works whenever R3 is there too.
- When one of the nodes is not in the consensus, the bootstrap gets stuck and never reaches 100%. The error message varies depending on which node of the path is missing. In the above example, if R3 is not in the consensus, we fail to connect to hop 1 (assuming 0-based logging).
- I waited a long time (~30 min) and nothing improved: the consensus did not gain more nodes, and the bootstrap stayed stuck.
I think the root of the problem might be the consensus having too few nodes. Is it normal for a cache-microdesc-consensus file to contain only 4 nodes in an 11-node network? Should I look into the code that generates the consensus?
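For reference, one quick way I check which relays a client actually has is to count the "r " lines in the file, since each relay in the consensus gets exactly one. A throwaway sketch (the DataDirectory path is a placeholder):

    /* Throwaway sketch: count relay ("r ") entries in a
     * cache-microdesc-consensus file and print their nicknames.
     * Replace the path with your client's DataDirectory. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
      const char *path = "datadir/cache-microdesc-consensus"; /* placeholder */
      FILE *f = fopen(path, "r");
      if (!f) { perror(path); return 1; }

      char line[1024];
      int relays = 0;
      while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "r ", 2) == 0) {   /* one "r " line per relay */
          char nickname[64];
          if (sscanf(line, "r %63s", nickname) == 1)
            printf("relay: %s\n", nickname);
          relays++;
        }
      }
      fclose(f);
      printf("total relays in consensus: %d\n", relays);
      return 0;
    }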
The routerlist_t I mentioned is in routerlist.c, line 124.
But now I think this probably just stores the same info as the cache-microdesc-consensus file, right?
Hmm, then it's likely a configuration issue with your network.
Shouldn't chutney also fail if it is a configuration issue? Or are you saying it's a configuration issue with my underlying network topology?
The only things different in the torrc files between the chutney run and the Emulab run are "Sandbox 1" and "RunAsDaemon 1", but I don't think those would cause any issues?
Thanks!
Li.