
Re: [tor-bugs] #29206 [Circumvention/Snowflake]: New design for client -- server protocol for Snowflake



#29206: New design for client -- server protocol for Snowflake
-----------------------------------------------+---------------------------
 Reporter:  cohosh                             |          Owner:  cohosh
     Type:  task                               |         Status:  needs_review
 Priority:  Medium                             |      Milestone:
Component:  Circumvention/Snowflake            |        Version:
 Severity:  Normal                             |     Resolution:
 Keywords:  anti-censorship-roadmap-september  |  Actual Points:
Parent ID:                                     |         Points:  6
 Reviewer:  dcf                                |        Sponsor:  Sponsor28-must
-----------------------------------------------+---------------------------

Comment (by dcf):

 I would like there to be a comment, about a paragraph long, at the top of
 snowflake-proto/proto.go that briefly explains how things work.
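
 Something like the sketch below, as a starting point. The wording is only
 my guess at a summary of the patch, and I'm assuming the package is named
 `proto`:
 {{{
 // Package proto implements a lightweight sequencing layer for Snowflake.
 // Each write is wrapped in a header carrying a sequence number, an
 // acknowledgement number, and a session ID. Sent bytes are buffered until
 // they are acknowledged, so that when the underlying connection (e.g. a
 // WebRTC data channel) goes away, the unacknowledged bytes can be resent
 // under the same session ID over a new connection.
 package proto
 }}}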

 `windowSize = 1500` seems small; it would only allow for ≈1 full-sized
 packet in flight at a time. (I realize `windowSize` is not used yet.)
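
 For comparison, a window sized for the bandwidth-delay product would be
 much larger. A throwaway sketch with made-up numbers (1 MiB/s and 500 ms
 are assumptions, not measurements):
 {{{
 const (
         assumedThroughputBps = 1 << 20 // 1 MiB/s, made up
         assumedRTTSeconds    = 0.5     // 500 ms, made up
 )

 // Bandwidth-delay product: how many bytes can usefully be in flight at once.
 const windowSize = int(assumedThroughputBps * assumedRTTSeconds) // 524288, i.e. hundreds of packets
 }}}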

 Does `SnowflakeConn.Conn` need to be exposed publicly? Maybe
 `NewSnowflakeConn` and `SnowflakeConn.NewSnowflake` should be the only
 ways to modify it externally.
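
 For example (a sketch only; apart from `NewSnowflake`, the names and the
 rest of the struct are guesses):
 {{{
 type SnowflakeConn struct {
         conn net.Conn // unexported: callers can no longer swap it directly
         // ... session ID, sequence/ack state, buffer of unacknowledged bytes
 }

 // The only way to replace the underlying connection from outside.
 func (s *SnowflakeConn) NewSnowflake(conn net.Conn) {
         s.conn = conn
         // ... resend unacknowledged bytes under the same session ID
 }
 }}}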

 `LocalAddr`, `RemoteAddr` should return non-nil. They could pass through
 to `s.Conn.LocalAddr` and `s.Conn.RemoteAddr`, but maybe better is to have
 them return an `Addr` type that reflects the `sessionID`, because that is
 our persistent addressing abstraction over ephemeral network connections.
 Like:
 {{{
 type sessionAddr []byte
 func (addr sessionAddr) Network() string {
         return "session"
 }
 func (addr sessionAddr) String() string {
         return string(addr)
 }
 }}}
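
 and then, assuming a `sessionID` field on `SnowflakeConn`:
 {{{
 func (s *SnowflakeConn) LocalAddr() net.Addr {
         return sessionAddr(s.sessionID)
 }
 func (s *SnowflakeConn) RemoteAddr() net.Addr {
         return sessionAddr(s.sessionID)
 }
 }}}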

 The `header.ack > s.acked` comparison looks like it will have problems
 with overflow when the sequence numbers wrap around. It probably ought to
 be something like `int32(header.ack - s.acked) > 0` instead. Likewise
 inside the block, it is probably important to use `int32` rather than
 `int`. There should be a test for it, too; a sketch follows after the
 excerpt below.
 {{{
                 if header.ack > s.acked {
                         // remove newly acknowledged bytes from buffer
                         s.buf.Next(int(header.ack - s.acked))
                         s.acked = header.ack
                 }
 }}}
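
 A sketch of such a test, assuming the fields are `uint32` (it could go in
 proto_test.go):
 {{{
 func TestAckWraparound(t *testing.T) {
         // acked sits just below the uint32 wraparound point; the new ack has
         // wrapped past zero, but it is still "newer".
         var acked uint32 = 0xffffff00
         var ack uint32 = 0x00000010
         if int32(ack-acked) <= 0 {
                 t.Errorf("ack %#x should be treated as newer than %#x", ack, acked)
         }
         // The reverse comparison must not treat the older value as newer.
         if int32(acked-ack) > 0 {
                 t.Errorf("ack %#x should not be treated as newer than %#x", acked, ack)
         }
 }
 }}}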

 You don't need an extra check here because
 [https://golang.org/pkg/io/#Writer io.Writer says] "Write must return a
 non-nil error if it returns n < len(p)."
 {{{
                 //TODO: check to make sure we wrote out all of the bytes
 }}}
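
 So checking the error alone is enough, something like this (`packet`
 stands in for whatever buffer the real code writes):
 {{{
         n, err := s.Conn.Write(packet)
         if err != nil {
                 return n, err
         }
 }}}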

 > - SnowflakeConn now has a new method to set and reset the underlying
 connection. If there is buffered data, SnowflakeConn will resend that data
 under the same session ID whenever a new underlying connection has been
 specified

 The `NewSnowflake` function writes the locally buffered bytes directly to
 the underlying `Conn`, without prepending new headers as
 `SnowflakeConn.Write` does. I expected it to chop up the buffer into new
 packets and send them with headers (and up-to-date ack numbers).
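
 Something along these lines, maybe (a sketch; `maxPayloadLen` and
 `writeFrame` are hypothetical stand-ins for whatever the real Write path
 uses to frame a packet):
 {{{
 func (s *SnowflakeConn) NewSnowflake(conn net.Conn) error {
         s.Conn = conn
         unacked := s.buf.Bytes() // sent but not yet acknowledged
         seq := s.acked           // the buffer starts at the last acked sequence number
         for len(unacked) > 0 {
                 n := len(unacked)
                 if n > maxPayloadLen {
                         n = maxPayloadLen
                 }
                 // writeFrame (hypothetical) prepends a header carrying seq, an
                 // up-to-date ack, and the session ID before writing to the new Conn.
                 if err := s.writeFrame(seq, unacked[:n]); err != nil {
                         return err
                 }
                 seq += uint32(n)
                 unacked = unacked[n:]
         }
         return nil
 }
 }}}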

 `io.ReadFull(r, b)` would be more clear than `io.ReadAtLeast(r, b,
 len(b))`.
 {{{
         b := make([]byte, snowflakeHeaderLen, snowflakeHeaderLen)
         _, err := io.ReadAtLeast(r, b, snowflakeHeaderLen)
 }}}
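
 That is, something like:
 {{{
         b := make([]byte, snowflakeHeaderLen)
         _, err := io.ReadFull(r, b)
 }}}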

 I don't think this is worth checking for in `SnowflakeConn.Read`. I would
 just let it crash. Or else, panic instead of returning an error. In any
 case, a check on `s.Conn` would be better in `readLoop` than `Read`,
 because `Read` does not even access `s.Conn`. A better overall design may
 be to have `NewSnowflakeConn` take a `Conn` as a parameter, so that it's
 impossible to create one that's uninitialized.
 {{{
         if s.Conn == nil {
                 return 0, fmt.Errorf("No network connection to read from")
         }
 }}}
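
 For instance (sketch; other fields and setup omitted):
 {{{
 // NewSnowflakeConn takes the initial Conn, so a SnowflakeConn can never
 // exist without one.
 func NewSnowflakeConn(conn net.Conn) *SnowflakeConn {
         return &SnowflakeConn{Conn: conn}
 }
 }}}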

 Replying to [comment:23 cohosh]:
 > The next step of course is to allow for resetting the underlying
 connection in SnowflakeConn and using the sessionID to correlate new
 connections with old ones. There's going to have to be some tricky
 refactoring here. Right now when the webrtc connection times out (due to
 staleness), both the webrtc connection and the socks connection are closed
 and the client waits for a new socks connection to open. The SnowflakeConn
 should be persistent across snowflakes (and the way it is currently set up
 perhaps also across SOCKS connections (??)), so the question is where
 SnowflakeConn should "live".
 >
 > I'm thinking of adding a new method to SnowflakeCollector that will set
 (and reset) the underlying connection, and then modifying the
 [https://github.com/cohosh/snowflake/blob/sequencing/client/lib/snowflake.go#L24
 Handler function] to redefine the snowflake rather than closing the SOCKS
 connection and waiting for a new one. This doesn't fit perfectly with what
 I'd assume a SnowflakeCollector does by name, but then again maybe it
 does. This would make the SnowflakeCollector responsible for both getting
 new snowflakes and also deciding which snowflake(s) to send traffic
 through.

 I think there should be a 1:1 correspondence between `SnowflakeConn`s and
 SOCKS connections. `SnowflakeConn` is analogous to the long-lived TCP
 connection to a user's guard. If the `SnowflakeConn` somehow fails
 unrecoverably, we should close the SOCKS connection to signal to tor that
 it's dead.
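
 Roughly like this (a sketch with assumed names; I'm assuming the package
 is `proto` and that `SnowflakeConn` satisfies `net.Conn`):
 {{{
 func handler(socks net.Conn, sconn *proto.SnowflakeConn) {
         // One SnowflakeConn per SOCKS connection: if either side fails
         // unrecoverably, close both, so tor sees that the connection is dead.
         defer socks.Close()
         defer sconn.Close()
         done := make(chan struct{}, 2)
         go func() { io.Copy(sconn, socks); done <- struct{}{} }()
         go func() { io.Copy(socks, sconn); done <- struct{}{} }()
         <-done
 }
 }}}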

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/29206#comment:24>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
_______________________________________________
tor-bugs mailing list
tor-bugs@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-bugs