
Re: [tor-bugs] #21534 [Core Tor/Tor]: "Client asked me to extend back to the previous hop" in small networks



#21534: "Client asked me to extend back to the previous hop" in small networks
-------------------------------------------------+-------------------------
 Reporter:  teor                                 |          Owner:  (none)
     Type:  defect                               |         Status:  new
 Priority:  Very High                            |      Milestone:  Tor:
                                                 |  0.3.2.x-final
Component:  Core Tor/Tor                         |        Version:
 Severity:  Normal                               |     Resolution:
 Keywords:  regression?, guard-selection,        |  Actual Points:
  dirauth                                        |
Parent ID:  #21573                               |         Points:  1
 Reviewer:                                       |        Sponsor:
-------------------------------------------------+-------------------------
Changes (by dgoulet):

 * priority:  Medium => Very High
 * keywords:  regression?, guard-selection => regression?, guard-selection,
     dirauth


Comment:

 I can still hit this on master. It seems to be caused only by authorities
 that can't pick a node for a circuit, and this shows up:

 {{{
 Nov 16 09:27:50.380 [info] router_choose_random_node(): We couldn't find
 any live, fast, stable, guard routers; falling back to list of all
 routers.
 }}}
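
 For what it's worth, the fallback behaviour that log line describes can be
 sketched roughly like this (a simplified reconstruction, not Tor's actual
 code; the names and structure here are illustrative only):

```c
/* Hypothetical sketch of the fallback in router_choose_random_node():
 * first try to find nodes satisfying all requested flags (live, fast,
 * stable, guard); if none match, fall back to the list of all routers.
 * This is NOT Tor's real code, just an illustration of the behaviour. */
#include <assert.h>
#include <stddef.h>

typedef struct {
  int is_fast;
  int is_stable;
  int is_possible_guard;
} node_t;

/* Count nodes matching the requested properties. */
static size_t
count_candidates(const node_t *nodes, size_t n,
                 int need_capacity, int need_uptime, int need_guard)
{
  size_t count = 0;
  for (size_t i = 0; i < n; i++) {
    if (need_capacity && !nodes[i].is_fast) continue;
    if (need_uptime && !nodes[i].is_stable) continue;
    if (need_guard && !nodes[i].is_possible_guard) continue;
    count++;
  }
  return count;
}

/* Try the strict filter first; on an empty result, "fall back to the
 * list of all routers" as the info log says. */
static size_t
choose_pool_size(const node_t *nodes, size_t n)
{
  size_t strict = count_candidates(nodes, n, 1, 1, 1);
  if (strict > 0)
    return strict;
  return n;  /* fallback: every node is a candidate again */
}
```

 The point being: once the fallback kicks in, the candidate pool no longer
 excludes the nodes already in the circuit's path.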

 I've added more logging: when a Guard is needed, no nodes are considered
 Guards by any of the authorities, so tor falls back to the list of all
 nodes, which makes it possible to pick the same node twice during path
 selection. Below are logs I've added within `node_is_unreliable()`, called
 by `router_choose_random_node()->router_add_running_nodes_to_smartlist()`:

 {{{
 Nov 16 10:07:24.018 [warn] Node
 $B6813ACD5E30C9560CB8F3CAAE08EB1A9643FFE7~test002a at 127.0.0.1 is stable:
 1, is fast: 1, is possible guard: 0. We needed: Uptime, Capacity, Guard
 Nov 16 10:07:24.018 [warn] node
 $B6813ACD5E30C9560CB8F3CAAE08EB1A9643FFE7~test002a at 127.0.0.1 is
 unreliable
 Nov 16 10:07:24.018 [warn] Node
 $24F57B943178DA7D1351F9566FDBD0620B2921CF~test000a at 127.0.0.1 is stable:
 1, is fast: 1, is possible guard: 0. We needed: Uptime, Capacity, Guard
 Nov 16 10:07:24.018 [warn] node
 $24F57B943178DA7D1351F9566FDBD0620B2921CF~test000a at 127.0.0.1 is
 unreliable
 Nov 16 10:07:24.018 [warn] Node
 $A183E34C4F3465994DC8D69378A05F1B43141AF3~test003ba at 127.0.0.1 is
 stable: 1, is fast: 1, is possible guard: 0. We needed: Uptime, Capacity,
 Guard
 Nov 16 10:07:24.018 [warn] node
 $A183E34C4F3465994DC8D69378A05F1B43141AF3~test003ba at 127.0.0.1 is
 unreliable
 Nov 16 10:07:24.018 [warn] node
 $492A22ABAD8203EA2B6A10076F251AC50AB1EFE0~test001a at 127.0.0.1
 (is_runnig: 0, is_valid: 1)
 Nov 16 10:07:24.018 [warn] Node
 $01D04CBA14565AA7EFC4612F5B388B07802475AC~test004r at 127.0.0.1 is stable:
 0, is fast: 0, is possible guard: 0. We needed: Uptime, Capacity, Guard
 Nov 16 10:07:24.018 [warn] node
 $01D04CBA14565AA7EFC4612F5B388B07802475AC~test004r at 127.0.0.1 is
 unreliable
 Nov 16 10:07:24.018 [warn] Node
 $70CAC7E209998F7B073F3C13950BDE2787231D18~test005r at 127.0.0.1 is stable:
 0, is fast: 0, is possible guard: 0. We needed: Uptime, Capacity, Guard
 Nov 16 10:07:24.018 [warn] node
 $70CAC7E209998F7B073F3C13950BDE2787231D18~test005r at 127.0.0.1 is
 unreliable
 Nov 16 10:07:24.018 [warn] Node
 $2836767EF7120AF306B8A1AE87E364073334B247~test006r at 127.0.0.1 is stable:
 0, is fast: 0, is possible guard: 0. We needed: Uptime, Capacity, Guard
 Nov 16 10:07:24.018 [warn] node
 $2836767EF7120AF306B8A1AE87E364073334B247~test006r at 127.0.0.1 is
 unreliable
 Nov 16 10:07:24.018 [warn] Node
 $D1657CB2A8D479F2D5D617819326C49FBB6D1133~test007r at 127.0.0.1 is stable:
 0, is fast: 0, is possible guard: 0. We needed: Uptime, Capacity, Guard
 Nov 16 10:07:24.018 [warn] node
 $D1657CB2A8D479F2D5D617819326C49FBB6D1133~test007r at 127.0.0.1 is
 unreliable
 }}}
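
 The check producing those "is unreliable" lines boils down to something
 like the following (again a hedged sketch, not the real
 `node_is_unreliable()` from Tor's routerlist code): a node is rejected if
 any requested property (Uptime, Capacity, Guard) is missing.

```c
/* Simplified reconstruction of the reliability check behind the logs
 * above; field and function names mirror the log output but this is an
 * illustration, not Tor's actual implementation. */
#include <assert.h>

typedef struct {
  int is_stable;          /* has the Stable flag */
  int is_fast;            /* has the Fast flag */
  int is_possible_guard;  /* has the Guard flag */
} node_status_t;

/* Return 1 ("unreliable") if the node lacks any needed property. */
static int
node_is_unreliable(const node_status_t *node,
                   int need_uptime, int need_capacity, int need_guard)
{
  if (need_uptime && !node->is_stable)
    return 1;
  if (need_capacity && !node->is_fast)
    return 1;
  if (need_guard && !node->is_possible_guard)
    return 1;
  return 0;
}
```

 With `need_guard = 1` and `is_possible_guard = 0` on every node, as in the
 logs, every node fails this check and the fallback to all routers kicks in.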

 You can see that `is_possible_guard` is 0 for every node when "Guard" was
 needed (that is, `need_guard = 1`). Incidentally, when this happens, it
 leads to this warning on the other relays:

 {{{
 Nov 16 10:19:16.664 [warn] connection_edge_process_relay_cell (away from
 origin) failed.
 Nov 16 10:19:16.664 [warn] circuit_receive_relay_cell (forward) failed.
 Closing.
 }}}
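
 For context, the relay-side refusal that gives this ticket its title can
 be sketched as below (illustrative only, not Tor's actual implementation):
 when handling an EXTEND request, a relay rejects a target whose identity
 matches the previous hop it received the circuit from, and that rejection
 is what surfaces as the `circuit_receive_relay_cell()` failure above.

```c
/* Hypothetical sketch of the "extend back to the previous hop" check.
 * DIGEST_LEN matches Tor's 20-byte SHA-1 identity digests; the function
 * name is invented for illustration. */
#include <assert.h>
#include <string.h>

#define DIGEST_LEN 20

/* Return -1 (refuse the extend) if the client asks us to extend the
 * circuit back to the node we received it from; 0 otherwise. */
static int
check_extend_target(const char prev_hop_id[DIGEST_LEN],
                    const char target_id[DIGEST_LEN])
{
  if (memcmp(prev_hop_id, target_id, DIGEST_LEN) == 0)
    return -1;  /* "Client asked me to extend back to the previous hop" */
  return 0;
}
```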

 I've been seeing that quite a bit recently on my test relay on the real
 network, so I think this might also affect authorities outside a
 `TestingTorNetwork`.

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/21534#comment:6>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
_______________________________________________
tor-bugs mailing list
tor-bugs@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-bugs