Re: [tor-bugs] #32938 [Circumvention/Snowflake]: Have a way to test throughput of snowflake proxy
#32938: Have a way to test throughput of snowflake proxy
-------------------------------------------------+-------------------------
Reporter: cohosh | Owner: cohosh
Type: enhancement | Status:
| needs_revision
Priority: Medium | Milestone:
Component: Circumvention/Snowflake | Version:
Severity: Normal | Resolution:
Keywords: snowflake-webextension, ux-team, | Actual Points: 3
anti-censorship-roadmap-2020Q1 |
Parent ID: #31109 | Points: 5
Reviewer: | Sponsor:
-------------------------------------------------+-------------------------
Changes (by dcf):
* status: needs_review => needs_revision
Comment:
The refactoring in proxy-go, allowing it to treat the broker and bridgestrap
mostly equivalently, looks reasonable. I don't see this as something meant to
be secure against adversarial proxies, only as psychological reassurance for
honest proxy operators. I really don't think this functionality should be
rolled into the broker; if anything, I think the broker should be more
compartmentalized overall. Even if it runs on the same host, I feel it should
be a separate process.
I'm having trouble understanding the control flow between `Snowflake` and
`ThroughputLogger` in bridgestrap. `Snowflake.runTest` calls
`Snowflake.MarkWritten` to store a timestamp in `ThroughputLogger`, then
does `Snowflake.Write` which results in calls to `Snowflake.dc.OnMessage`,
which calls `ThroughputLogger.AddBytes`, which writes into a channel read
by `Snowflake.Log`, which then looks at the previously stored timestamp. I
wonder whether there's a more straight-line way to write it.
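For instance, something like this rough sketch (the function and channel
names are invented for illustration, not bridgestrap's actual API): the loop
that sends a message also times it, so the timestamp never has to travel
through `ThroughputLogger` or a separate logging goroutine.
{{{
import (
	"fmt"
	"time"
)

// Hypothetical shape only: send one message, wait for its echo, measure,
// repeat, and return the mean round-trip latency.
func runTestDirect(write func([]byte) error, echoes <-chan []byte, iterations int) (time.Duration, error) {
	if iterations <= 0 {
		return 0, fmt.Errorf("need at least one iteration")
	}
	payload := make([]byte, 1024)
	var total time.Duration
	for i := 0; i < iterations; i++ {
		start := time.Now()
		if err := write(payload); err != nil {
			return 0, err
		}
		<-echoes // block until this message has been echoed back
		total += time.Since(start)
	}
	return total / time.Duration(iterations), nil
}
}}}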
The high latency you mentioned in comment:9 seems to be a bug. Even in my
localhost test, I get a latency of around 5 seconds. In the
`time.Since(start)` computation, `time.Now()` is increasing faster than
`start` is. In my local test, the difference increased smoothly and
monotonically from 0.04 to 9.66 seconds over 940 iterations in `runTest`.
Maybe there is some kind of buffering happening where packets are "sent"
much faster than they are received; perhaps the sender can finish 10,000
iterations almost instantly while the data is actually being buffered by the
OS and not sent right away.
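As an illustration of that effect only (a toy program, not bridgestrap code):
if each `start` is stamped when the message is queued rather than when it is
actually delivered, `time.Since(start)` grows with the backlog, much like the
0.04 to 9.66 second drift above.
{{{
package main

import (
	"fmt"
	"time"
)

func main() {
	// A large channel buffer stands in for OS/SCTP send buffering.
	sent := make(chan time.Time, 10000)
	done := make(chan struct{})

	// The consumer drains at a fixed rate, slower than the producer.
	go func() {
		for start := range sent {
			time.Sleep(10 * time.Millisecond) // simulated delivery delay
			fmt.Println("apparent latency:", time.Since(start))
		}
		close(done)
	}()

	// The producer "sends" 1000 messages almost instantly; each send only
	// buffers, it doesn't wait for delivery.
	for i := 0; i < 1000; i++ {
		sent <- time.Now()
	}
	close(sent)
	<-done // the reported latency climbs from about 10 ms toward 10 s
}
}}}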
https://gitlab.torproject.org/cohosh/bridgestrap/blob/c76fb1c24eacdeefddab699aa7ac2bf111c5e63f/snowflake.go#L153-160
`AverageLatency` will panic with a division by zero if for whatever reason
`count` is 0 (i.e., if no messages were received). And why round the average
latency to whole seconds?
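A guard along these lines would avoid the panic; the field names below are
only a guess at what the logger actually accumulates:
{{{
import "time"

type ThroughputLogger struct {
	count        int           // messages received (guessed field name)
	totalLatency time.Duration // sum of per-message latencies (guessed field name)
}

func (l *ThroughputLogger) AverageLatency() time.Duration {
	if l.count == 0 {
		return 0 // nothing received; report zero rather than dividing by zero
	}
	// Keep full precision; rounding to whole seconds discards most of the signal.
	return l.totalLatency / time.Duration(l.count)
}
}}}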
https://gitlab.torproject.org/cohosh/bridgestrap/blob/c76fb1c24eacdeefddab699aa7ac2bf111c5e63f/snowflake.go#L59
The `OnMessage` callback assumes it has at least 4 bytes to work with and
will panic if it does not. The design relies on the blocks sent by
`runTest` retaining message boundaries when they come back into
`OnMessage`, which isn't guaranteed. It's worth thinking about what a
malicious proxy could do by falsifying the `count` value at the beginning
of each buffer. It may be better to do something simpler like: send 10 KB,
receive 10 KB, and only do another iteration once you've received the same
number of bytes that you sent.
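A sketch of that lockstep idea, counting raw bytes so that nothing in the
payload has to be trusted and it doesn't matter how the proxy re-frames the
data (the `recvBytes` channel is an assumption about how `OnMessage` could
report what arrived):
{{{
import (
	"fmt"
	"time"
)

const blockSize = 10 * 1024

// lockstepTest sends one block at a time and waits until the same number of
// bytes has been echoed back before sending the next block.
func lockstepTest(write func([]byte) error, recvBytes <-chan int, rounds int) error {
	block := make([]byte, blockSize)
	for i := 0; i < rounds; i++ {
		if err := write(block); err != nil {
			return err
		}
		outstanding := blockSize
		for outstanding > 0 {
			select {
			case n := <-recvBytes:
				outstanding -= n
			case <-time.After(30 * time.Second):
				return fmt.Errorf("round %d: timed out with %d bytes outstanding", i, outstanding)
			}
		}
	}
	return nil
}
}}}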
I think there's a memory leak in `idToSnowflake` if the test never
completes. A proxy could hit the /api/snowflake-poll route to add entries
to the map and never hit /api/snowflake-test to remove them.
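One way to bound the map, sketched with invented names: stamp each entry when
/api/snowflake-poll creates it and periodically evict anything older than a
deadline, so a proxy that polls but never finishes a test can't grow the map
without limit.
{{{
import (
	"sync"
	"time"
)

type pendingTest struct {
	snowflake *Snowflake // the in-progress test from snowflake.go
	created   time.Time
}

type testState struct {
	mu            sync.Mutex
	idToSnowflake map[string]*pendingTest
}

// reapStale runs forever, deleting entries that have sat unclaimed too long.
func (s *testState) reapStale(maxAge time.Duration) {
	for {
		time.Sleep(maxAge)
		s.mu.Lock()
		for id, p := range s.idToSnowflake {
			if time.Since(p.created) > maxAge {
				delete(s.idToSnowflake, id)
			}
		}
		s.mu.Unlock()
	}
}
}}}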
https://gitlab.torproject.org/cohosh/bridgestrap/blob/c76fb1c24eacdeefddab699aa7ac2bf111c5e63f/snowflake.go#L192
[https://golang.org/pkg/math/rand/#Read Rand.Read] is documented never to
return an error, so I would prefer a panic rather than an error return
here.
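That is, roughly this shape:
{{{
import "math/rand"

// randomPayload fills a buffer with pseudorandom bytes. math/rand.Read is
// documented to always return len(p), nil, so a non-nil error can only mean
// a programming mistake and is worth crashing on.
func randomPayload(n int) []byte {
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	return buf
}
}}}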
`APISnowflakeRequest` and `APISnowflakeTest` need a byte limit to prevent
someone sending an infinite JSON object and using up all memory. A read
deadline would make sense, too.
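For instance (the 1 MB cap and 10 second timeout are arbitrary placeholders,
and the helper below is not bridgestrap's actual handler signature):
`http.MaxBytesReader` caps the body, and the server's `ReadTimeout` bounds
how long a client can take to send it.
{{{
import (
	"encoding/json"
	"net/http"
	"time"
)

// decodeRequest reads a JSON request body with a hard size cap.
func decodeRequest(w http.ResponseWriter, r *http.Request, v interface{}) error {
	r.Body = http.MaxBytesReader(w, r.Body, 1<<20) // reject bodies over ~1 MB
	return json.NewDecoder(r.Body).Decode(v)
}

// Bounding read time is done on the server itself.
var srv = &http.Server{
	Addr:        ":8080",
	ReadTimeout: 10 * time.Second, // covers headers and body
}
}}}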
An alternative design would be to reverse the direction of traffic flow.
Let the proxy send data and bridgestrap reflect it. The proxy can compute
its own throughput and latency locally. The bridgestrap part could then be
made stateless except for the offer–answer matching.
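On the bridgestrap side that could be little more than an echo on the data
channel. The sketch below assumes the pion/webrtc API that the existing code
appears to use (the import path and version are a guess):
{{{
import (
	"log"

	"github.com/pion/webrtc/v2"
)

// attachEcho reflects every received message back to the proxy unchanged.
// All throughput/latency bookkeeping happens on the proxy side, so no
// per-test state has to live in bridgestrap.
func attachEcho(dc *webrtc.DataChannel) {
	dc.OnMessage(func(msg webrtc.DataChannelMessage) {
		if err := dc.Send(msg.Data); err != nil {
			log.Printf("echo failed: %v", err)
		}
	})
}
}}}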
Like you, I'm not sure of the long-term utility of the throughput test
feature. Maybe we'll soon see enough organic client use to cause proxies
to actually be used, but that's hard to predict. "Give me a fake client on
demand" could be a useful diagnostic feature to have. Conceivably we could
do the same thing probabilistically in the normal course of operation of
proxies: sometimes you get a real client, sometimes you get a "canary"
client whose only purpose is to allow the proxy to assess its own health.
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/32938#comment:12>