Re: [tor-bugs] #33519 [Circumvention/Snowflake]: Support multiple simultaneous SOCKS connections
#33519: Support multiple simultaneous SOCKS connections
-------------------------------------+------------------------
Reporter: dcf | Owner: (none)
Type: defect | Status: new
Priority: Medium | Milestone:
Component: Circumvention/Snowflake | Version:
Severity: Normal | Resolution:
Keywords: turbotunnel | Actual Points:
Parent ID: | Points:
Reviewer: | Sponsor:
-------------------------------------+------------------------
Comment (by dcf):
Here's a candidate patch, for the QUIC branch at least.
https://gitweb.torproject.org/user/dcf/snowflake.git/commit/?h=turbotunnel-quic&id=c07666c1408bcf1204f3318aa67c7623e16188db
It's basically option (3) from the ticket description. It sets up one
`PacketConn` in main, and starts a new QUIC connection over it for each
new SOCKS connection. It turns out that it doesn't even require the
overhead of attaching a ClientID to every packet. We can just prefix each
WebRTC connection with the ClientID as before, and the QUIC connection ID
takes care of disambiguating the multiple virtual connections.
The easiest way to test it is to start two tor clients, one that manages a
snowflake-client, and one that uses the same snowflake-client as the first
one. Edit client/snowflake.go and make it listen on a static port:
{{{
ln, err := pt.ListenSocks("tcp", "127.0.0.1:5555")
}}}
Create files torrc.1 and torrc.2:
{{{
UseBridges 1
DataDirectory datadir.1
ClientTransportPlugin snowflake exec client/client -url http://127.0.0.1:8000/ -ice stun:stun.l.google.com:19302 -log snowflake.log -max 1
Bridge snowflake 0.0.3.0:1
}}}
{{{
UseBridges 1
DataDirectory datadir.2
ClientTransportPlugin snowflake socks5 127.0.0.1:5555
Bridge snowflake 0.0.3.0:1
}}}
Fire everything up:
{{{
broker/broker --disable-tls --addr 127.0.0.1:8000
proxy-go/proxy-go -broker http://127.0.0.1:8000/
tor -f torrc.1
tor -f torrc.2
}}}
If you run this test before the changes I'm talking about, the second tor
will be starved of a proxy. If you run it with the changes, both tors will
share one proxy.
----
Unfortunately the same idea doesn't carry over directly into the KCP
branch. There are at least two impediments:
* https://github.com/xtaci/kcp-go/issues/165
kcp-go assumes that you will use a `PacketConn` for at most one KCP
connection: it closes the underlying `PacketConn` when the KCP connection
is closed, so you can't reuse it for more connections. This one is easy to
work around.
* https://github.com/xtaci/kcp-go/issues/166
The kcp-go server only supports one KCP connection per client address
(which would ordinarily be an IP:port but in our case is a 64-bit
ClientID). This one requires a change to kcp-go to fix.
The alternative for KCP is option (4): use one global `PacketConn` and one
global KCP connection. Each new SOCKS connection gets a new smux stream on
the same KCP connection. I was reluctant to do this in the QUIC branch
because the quic-go connection type ([https://godoc.org/github.com/lucas-clemente/quic-go#Session quic.Session]) is a stateful, failure-prone
entity. It has its own timeout and can fail (conceptually) at any time,
and if it's a global object, what then? The analogous type in kcp-go,
[https://godoc.org/github.com/xtaci/kcp-go#UDPSession kcp.UDPSession], is
simpler, but I think we cannot guarantee that a single one will survive
for the lifetime of the process. So we'd have to introduce an abstraction
to manage the global shared `kcp.UDPSession` and restart it if it dies.
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/33519#comment:3>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
_______________________________________________
tor-bugs mailing list
tor-bugs@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-bugs