
Re: [tor-bugs] #23817 [Core Tor/Tor]: Tor re-tries directory mirrors that it knows are missing microdescriptors



#23817: Tor re-tries directory mirrors that it knows are missing microdescriptors
----------------------------------------+----------------------------------
 Reporter:  teor                        |          Owner:  (none)
     Type:  defect                      |         Status:  new
 Priority:  Medium                      |      Milestone:  Tor:
                                        |  0.3.3.x-final
Component:  Core Tor/Tor                |        Version:
 Severity:  Normal                      |     Resolution:
 Keywords:  tor-guard, tor-hs, prop224  |  Actual Points:
Parent ID:  #21969                      |         Points:
 Reviewer:                              |        Sponsor:
----------------------------------------+----------------------------------

Comment (by asn):

 Here is an implementation plan for the failure-cache idea from comment:4.

 First of all, the interface of the failure cache:

   We introduce a `digest256map_t *md_fetch_fail_cache` which maps the
 256-bit md hash to a smartlist of dirguards through which we failed to
 fetch the md.

 Now the code logic:

 1) We populate `md_fetch_fail_cache` with dirguards in
 `dir_microdesc_download_failed()`. We remove them from the failure cache
 in `microdescs_add_to_cache()` when we successfully add an md to the cache.

 2) We add another `entry_guard_restriction_t` restriction type in
 `guards_choose_dirguard()`. We currently have one restriction type, which
 restricts guard choice based on the chosen exit node and its family. We
 want another type which carries a smartlist and restricts dirguards based
 on whether we have previously failed to fetch an md from that dirguard. We
 will use this in step 3.

 3) In `directory_get_from_dirserver()` we query the md failure cache and
 pass any results to `directory_pick_generic_dirserver()` and then to
 `guards_choose_dirguard()` which uses the new restriction type to block
 previously failed dirguards from being selected.

 How does this sound to you?

 There are two more steps we might want to do:

 * When we find that we are missing descs for our primary guards, we
 trigger an immediate download of the missing descs so that the algorithm
 above takes effect.

 * If we fail to fetch the mds from all of our primary guards, we retry
 using fallback directories instead of going deeper down our guard list.
 Teor suggested this, but it seems far from trivial to implement given the
 interface of our guard subsystem. If it provides a big enough benefit, we
 should consider it.

--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/23817#comment:6>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
_______________________________________________
tor-bugs mailing list
tor-bugs@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-bugs