
[tor-bugs] #12428 [meek]: Make it possible to have multiple requests and responses in flight



#12428: Make it possible to have multiple requests and responses in flight
-------------------------+---------------------
 Reporter:  dcf          |          Owner:  dcf
     Type:  enhancement  |         Status:  new
 Priority:  normal       |      Milestone:
Component:  meek         |        Version:
 Keywords:               |  Actual Points:
Parent ID:               |         Points:
-------------------------+---------------------
 meek segments a data stream into multiple HTTP request–response pairs. In
 order to keep the segments in order, meek-client strictly serializes
 requests: it won't issue a second request until after it receives the
 response to its first request, even if there is buffered data waiting to
 be sent.
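 The strictly serialized behavior described above can be sketched as a
 simple loop (a minimal illustration, not meek-client's actual code, which
 is written in Go; send_request and recv_response are hypothetical
 stand-ins for the real HTTP round trip):

```python
# Sketch of meek-client's strictly serialized polling loop: at most
# one HTTP transaction is ever in flight.

def serialized_loop(send_request, recv_response, outgoing_buffer):
    while True:
        chunk = outgoing_buffer.read()  # may be empty (a bare poll)
        send_request(chunk)
        # Block until the response arrives; no second request is
        # issued in the meantime, even if more data is buffered.
        payload = recv_response()
        yield payload
```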

 The limit of one outstanding request–response at a time restricts
 possible throughput. For instance, if a user is located 200 ms from App
 Engine, and receives up to 64 KB per request, then their downstream
 throughput can be no greater than 64 KB/200 ms = 320 KB/s, even if
 everything after App Engine were instantaneous. Longer delays lead to
 even lower throughput.
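 The arithmetic behind that bound, as a quick check (one response body per
 round trip):

```python
# Downstream throughput bound with a single request-response in flight:
# you can receive at most one response body per round-trip time.
rtt_s = 0.200           # 200 ms round trip to App Engine
body_bytes = 64 * 1024  # up to 64 KB of data per response
throughput = body_bytes / rtt_s
print(throughput / 1024, "KB/s")  # 320.0 KB/s
```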

 The problem is how to deal with out-of-order arrivals, and retransmissions
 when an HTTP transaction fails. My plan is to add sequence numbers and
 acknowledgements to upstream and downstream HTTP headers, similar to what
 we did in OSS (https://www.bamsoftware.com/papers/oss.pdf section 4). The
 seq number is the index of the first byte of a payload in the overall
 stream. The ack number is the index of the next byte we're expecting from
 the other side. We can implement this idea in a backward-compatible way
 by having the server infer the seq and ack values when the headers are
 missing; old clients that strictly serialize will continue to work.
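 A sketch of that seq/ack bookkeeping, in the style of OSS section 4. The
 header names X-Seq and X-Ack are placeholders of my own, not names the
 ticket specifies:

```python
# Sketch of per-connection seq/ack state. seq is the stream offset of a
# payload's first byte; ack is the next byte expected from the peer.
# Header names X-Seq/X-Ack are hypothetical placeholders.

class Endpoint:
    def __init__(self):
        self.sent = 0      # bytes of payload sent so far (next seq to use)
        self.received = 0  # bytes received in order so far (next ack)
        self.pending = {}  # out-of-order segments, keyed by seq

    def headers_for(self, payload):
        """Headers to attach to an outgoing payload."""
        h = {"X-Seq": self.sent, "X-Ack": self.received}
        self.sent += len(payload)
        return h

    def accept(self, seq, payload):
        """Buffer a segment; return any bytes that are now in order."""
        if seq > self.received:
            self.pending[seq] = payload  # arrived early; hold it
            return b""
        # Drop overlap with bytes we already have (a retransmission).
        payload = payload[self.received - seq:]
        out = payload
        self.received += len(payload)
        # Drain any buffered segments that are now in order.
        while self.received in self.pending:
            nxt = self.pending.pop(self.received)
            out += nxt
            self.received += len(nxt)
        return out
```

 The receiver side is where out-of-order arrivals and retransmissions get
 resolved: early segments are held until the gap before them fills, and
 duplicated bytes are silently discarded.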

 There's a complication related to the protocol's polling nature. During a
 big download, we want multiple downstream responses to be in flight. In
 order to get that, we need to speculatively send a bunch of requests and
 see if they get responses that have data. My thinking is to do something
 like [https://en.wikipedia.org/wiki/TCP_congestion-avoidance_algorithm TCP
 congestion avoidance], where we increment the number of speculative
 probes we send by 1 every time we get a response back with data (perhaps
 only when we get a full-sized response), and reset the number to 1 when
 there is a loss event.
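 The probe-counting idea might look something like this (a sketch under my
 stated assumptions: additive increase only on full-sized responses, reset
 to 1 on loss; the cap is arbitrary and not from the ticket):

```python
# Sketch of TCP-style additive increase / reset-on-loss for the number
# of speculative polling requests kept in flight.

MAX_PROBES = 16  # arbitrary safety cap, not specified in the ticket

class ProbeWindow:
    def __init__(self):
        self.probes = 1  # start with a single request in flight

    def on_response(self, payload_len, full_size):
        # Increase only for full-sized responses: a short response
        # means the server had no more data queued anyway.
        if payload_len >= full_size:
            self.probes = min(self.probes + 1, MAX_PROBES)

    def on_loss(self):
        # A failed or timed-out transaction: fall back to one probe,
        # i.e. the old serialized behavior.
        self.probes = 1
```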

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/12428>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online