
[tor-commits] [tor/master] Remove fallback scripts and whitelist



commit 2dd23086f13ecfc9843a9612e208a3592ac46141
Author: Nick Mathewson <nickm@xxxxxxxxxxxxxx>
Date:   Tue Jan 15 19:18:00 2019 -0500

    Remove fallback scripts and whitelist
    
    They have been extracted to a new fallback-scripts.git repository.
    
    Closes ticket 27914.
---
 changes/ticket27914                       |    4 +
 scripts/maint/fallback.whitelist          | 1064 -------------
 scripts/maint/generateFallbackDirLine.py  |   38 -
 scripts/maint/lookupFallbackDirContact.py |   28 -
 scripts/maint/updateFallbackDirs.py       | 2383 -----------------------------
 5 files changed, 4 insertions(+), 3513 deletions(-)

diff --git a/changes/ticket27914 b/changes/ticket27914
new file mode 100644
index 000000000..433e9657a
--- /dev/null
+++ b/changes/ticket27914
@@ -0,0 +1,4 @@
+  o Removed features:
+    - The scripts used to generate and maintain the list of fallback
+      directories have been extracted into a new "fallback-scripts"
+      repository. Closes ticket 27914.
diff --git a/scripts/maint/fallback.whitelist b/scripts/maint/fallback.whitelist
deleted file mode 100644
index 60d3e7bb8..000000000
--- a/scripts/maint/fallback.whitelist
+++ /dev/null
@@ -1,1064 +0,0 @@
-# updateFallbackDirs.py directory mirror whitelist
-#
-# At least one of these keys must match for a directory mirror to be included
-# in the fallback list:
-#   id
-#   ipv4
-#   ipv6
-# The ports and nickname are ignored. Missing or extra ipv6 addresses
-# are ignored.
-#
-# The latest relay details from Onionoo are included in the generated list.
-#
-# To check the hard-coded fallback list (for testing), use:
-# $ updateFallbackDirs.py check_existing
-#
-# If a relay operator wants their relay to be a FallbackDir,
-# enter the following information here:
-# <IPv4>:<DirPort> orport=<ORPort> id=<ID> ( ipv6=[<IPv6>]:<IPv6 ORPort> )?
-# or use:
-# scripts/maint/generateFallbackDirLine.py fingerprint ...
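-#
-# For example, a hypothetical entry (using documentation addresses and a
-# made-up fingerprint and nickname, for illustration only) would look like:
-# 203.0.113.1:80 orport=443 id=0123456789ABCDEF0123456789ABCDEF01234567 ipv6=[2001:db8::1]:443 # ExampleRelay
-# or could be generated from that fingerprint with:
-# $ scripts/maint/generateFallbackDirLine.py 0123456789ABCDEF0123456789ABCDEF01234567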
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008362.html
-# https://trac.torproject.org/projects/tor/ticket/22321#comment:22
-78.47.18.110:443 orport=80 id=F8D27B163B9247B232A2EEE68DD8B698695C28DE ipv6=[2a01:4f8:120:4023::110]:80 # fluxe3
-131.188.40.188:1443 orport=80 id=EBE718E1A49EE229071702964F8DB1F318075FF8 ipv6=[2001:638:a000:4140::ffff:188]:80 # fluxe4
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008366.html
-5.39.88.19:9030 orport=9001 id=7CB8C31432A796731EA7B6BF4025548DFEB25E0C ipv6=[2001:41d0:8:9a13::1]:9050
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008370.html
-# https://lists.torproject.org/pipermail/tor-relays/2016-January/008517.html
-# https://lists.torproject.org/pipermail/tor-relays/2016-January/008555.html
-212.47.237.95:9030 orport=9001 id=3F5D8A879C58961BB45A3D26AC41B543B40236D6
-212.47.237.95:9130 orport=9101 id=6FB38EB22E57EF7ED5EF00238F6A48E553735D88
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008372.html
-# IPv6 tunnel available on request (is this a good idea?)
-108.53.208.157:80 orport=443 id=4F0DB7E687FC7C0AE55C8F243DA8B0EB27FBF1F2
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008373.html
-167.114.35.28:9030 orport=9001 id=E65D300F11E1DB12C534B0146BDAB6972F1A8A48
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008378.html
-144.76.14.145:110 orport=143 id=14419131033443AE6E21DA82B0D307F7CAE42BDB ipv6=[2a01:4f8:190:9490::dead]:443
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008379.html
-# Email sent directly to teor, verified using relay contact info
-91.121.84.137:4951 orport=4051 id=6DE61A6F72C1E5418A66BFED80DFB63E4C77668F
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008381.html
-# Sent additional emails to teor with updated relays
-81.7.11.96:9030 orport=9001 id=8FA37B93397015B2BC5A525C908485260BE9F422 # Doedel22
-# 9F5068310818ED7C70B0BC4087AB55CB12CB4377 not found in current consensus
-178.254.19.101:80 orport=443 id=F9246DEF2B653807236DA134F2AEAB103D58ABFE # Freebird31
-178.254.19.101:9030 orport=9001 id=0C475BA4D3AA3C289B716F95954CAD616E50C4E5 # Freebird32
-81.7.14.253:9001 orport=443 id=1AE039EE0B11DB79E4B4B29CBA9F752864A0259E # Ichotolot60
-81.7.11.186:1080 orport=443 id=B86137AE9681701901C6720E55C16805B46BD8E3 # BeastieJoy60
-85.25.213.211:465 orport=80 id=CE47F0356D86CF0A1A2008D97623216D560FB0A8 # BeastieJoy61
-85.25.159.65:995 orport=80 id=52BFADA8BEAA01BA46C8F767F83C18E2FE50C1B9 # BeastieJoy63
-81.7.3.67:993 orport=443 id=A2E6BB5C391CD46B38C55B4329C35304540771F1 # BeastieJoy62
-81.7.14.31:9001 orport=443 id=7600680249A22080ECC6173FBBF64D6FCF330A61 # Ichotolot62
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008382.html
-51.255.33.237:9091 orport=9001 id=A360C21FA87FFA2046D92C17086A6B47E5C68109
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008383.html
-81.7.14.246:80 orport=443 id=CE75BF0972ADD52AF8807602374E495C815DB304 ipv6=[2a02:180:a:51::dead]:443
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008384.html
-# Sent additional email to teor with fingerprint change
-149.202.98.161:80 orport=443 id=FC64CD763F8C1A319BFBBF62551684F4E1E42332 ipv6=[2001:41d0:8:4528::161]:443
-193.111.136.162:80 orport=443 id=C79552275DFCD486B942510EF663ED36ACA1A84B ipv6=[2001:4ba0:cafe:10d0::1]:443
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008416.html
-185.100.84.212:80 orport=443 id=330CD3DB6AD266DC70CDB512B036957D03D9BC59 ipv6=[2a06:1700:0:7::1]:443
-
-# https://lists.torproject.org/pipermail/tor-relays/2015-December/008417.html
-178.16.208.56:80 orport=443 id=2CDCFED0142B28B002E89D305CBA2E26063FADE2 ipv6=[2a00:1c20:4089:1234:cd49:b58a:9ebe:67ec]:443
-178.16.208.57:80 orport=443 id=92CFD9565B24646CAC2D172D3DB503D69E777B8A ipv6=[2a00:1c20:4089:1234:7825:2c5d:1ecd:c66f]:443
-
-# https://lists.torproject.org/pipermail/tor-relays/2016-January/008513.html
-178.62.173.203:9030 orport=9001 id=DD85503F2D1F52EF9EAD621E942298F46CD2FC10 ipv6=[2a03:b0c0:0:1010::a4:b001]:9001
-
-# https://lists.torproject.org/pipermail/tor-relays/2016-January/008534.html
-5.9.110.236:9030 orport=9001 id=0756B7CD4DFC8182BE23143FAC0642F515182CEB ipv6=[2a01:4f8:162:51e2::2]:9001
-
-# https://lists.torproject.org/pipermail/tor-relays/2016-January/008542.html
-178.62.199.226:80 orport=443 id=CBEFF7BA4A4062045133C053F2D70524D8BBE5BE ipv6=[2a03:b0c0:2:d0::b7:5001]:443
-
-# Email sent directly to teor, verified using relay contact info
-94.23.204.175:9030 orport=9001 id=5665A3904C89E22E971305EE8C1997BCA4123C69
-
-# Email sent directly to teor, verified using relay contact info
-171.25.193.77:80 orport=443 id=A10C4F666D27364036B562823E5830BC448E046A ipv6=[2001:67c:289c:3::77]:443
-171.25.193.78:80 orport=443 id=A478E421F83194C114F41E94F95999672AED51FE ipv6=[2001:67c:289c:3::78]:443
-171.25.193.20:80 orport=443 id=DD8BD7307017407FCC36F8D04A688F74A0774C02 ipv6=[2001:67c:289c::20]:443
-# same machine as DD8BD7307017407FCC36F8D04A688F74A0774C02
-171.25.193.25:80 orport=443 id=185663B7C12777F052B2C2D23D7A239D8DA88A0F ipv6=[2001:67c:289c::25]:443
-
-# Email sent directly to teor, verified using relay contact info
-212.47.229.2:9030 orport=9001 id=20462CBA5DA4C2D963567D17D0B7249718114A68 ipv6=[2001:bc8:4400:2100::f03]:9001
-93.115.97.242:9030 orport=9001 id=B5212DB685A2A0FCFBAE425738E478D12361710D
-46.28.109.231:9030 orport=9001 id=F70B7C5CD72D74C7F9F2DC84FA9D20D51BA13610 ipv6=[2a02:2b88:2:1::4205:1]:9001
-
-# Email sent directly to teor, verified using relay contact info
-85.235.250.88:80 orport=443 id=72B2B12A3F60408BDBC98C6DF53988D3A0B3F0EE # TykRelay01
-185.96.88.29:80 orport=443 id=86C281AD135058238D7A337D546C902BE8505DDE # TykRelay051
-# This fallback opted in during previous releases, then changed its details,
-# so we blacklisted it. Now we want to whitelist its updated details.
-# Assume details update is permanent
-185.96.180.29:80 orport=443 id=F93D8F37E35C390BCAD9F9069E13085B745EC216 # TykRelay06
-
-# Email sent directly to teor, verified using relay contact info
-185.11.180.67:80 orport=9001 id=794D8EA8343A4E820320265D05D4FA83AB6D1778
-
-# Email sent directly to teor, verified using relay contact info
-178.16.208.62:80 orport=443 id=5CF8AFA5E4B0BB88942A44A3F3AAE08C3BDFD60B ipv6=[2a00:1c20:4089:1234:a6a4:2926:d0af:dfee]:443
-46.165.221.166:80 orport=443 id=EE5F897C752D46BCFF531641B853FC6BC78DD4A7
-178.16.208.60:80 orport=443 id=B44FBE5366AD98B46D829754FA4AC599BAE41A6A ipv6=[2a00:1c20:4089:1234:67bc:79f3:61c0:6e49]:443
-178.16.208.55:80 orport=443 id=C4AEA05CF380BAD2230F193E083B8869B4A29937 ipv6=[2a00:1c20:4089:1234:7b2c:11c5:5221:903e]:443
-178.16.208.61:80 orport=443 id=3B52392E2256C35CDCF7801FF898FC88CE6D431A ipv6=[2a00:1c20:4089:1234:2712:a3d0:666b:88a6]:443
-81.89.96.88:80 orport=443 id=55ED4BB49F6D3F36D8D9499BE43500E017A5EF82 ipv6=[2a02:180:1:1:14c5:b0b7:2d7d:5f3a]:443
-209.222.8.196:80 orport=443 id=C86D2F3DEFE287A0EEB28D4887AF14E35C172733 ipv6=[2001:19f0:1620:41c1:426c:5adf:2ed5:4e88]:443
-81.89.96.89:80 orport=443 id=28651F419F5A1CF74511BB500C58112192DD4943 ipv6=[2a02:180:1:1:2ced:24e:32ea:a03b]:443
-46.165.221.166:9030 orport=9001 id=8C7106C880FE8AA1319DD71B59623FCB8914C9F1
-178.16.208.62:80 orport=443 id=5CF8AFA5E4B0BB88942A44A3F3AAE08C3BDFD60B ipv6=[2a00:1c20:4089:1234:a6a4:2926:d0af:dfee]:443
-46.165.221.166:80 orport=443 id=EE5F897C752D46BCFF531641B853FC6BC78DD4A7
-178.16.208.60:80 orport=443 id=B44FBE5366AD98B46D829754FA4AC599BAE41A6A ipv6=[2a00:1c20:4089:1234:67bc:79f3:61c0:6e49]:443
-178.16.208.55:80 orport=443 id=C4AEA05CF380BAD2230F193E083B8869B4A29937 ipv6=[2a00:1c20:4089:1234:7b2c:11c5:5221:903e]:443
-178.16.208.61:80 orport=443 id=3B52392E2256C35CDCF7801FF898FC88CE6D431A ipv6=[2a00:1c20:4089:1234:2712:a3d0:666b:88a6]:443
-81.89.96.88:80 orport=443 id=55ED4BB49F6D3F36D8D9499BE43500E017A5EF82 ipv6=[2a02:180:1:1:14c5:b0b7:2d7d:5f3a]:443
-209.222.8.196:80 orport=443 id=C86D2F3DEFE287A0EEB28D4887AF14E35C172733 ipv6=[2001:19f0:1620:41c1:426c:5adf:2ed5:4e88]:443
-81.89.96.89:80 orport=443 id=28651F419F5A1CF74511BB500C58112192DD4943 ipv6=[2a02:180:1:1:2ced:24e:32ea:a03b]:443
-46.165.221.166:9030 orport=9001 id=8C7106C880FE8AA1319DD71B59623FCB8914C9F1
-178.16.208.56:80 orport=443 id=2CDCFED0142B28B002E89D305CBA2E26063FADE2 ipv6=[2a00:1c20:4089:1234:cd49:b58a:9ebe:67ec]:443
-178.16.208.58:80 orport=443 id=A4C98CEA3F34E05299417E9F885A642C88EF6029 ipv6=[2a00:1c20:4089:1234:cdae:1b3e:cc38:3d45]:443
-178.16.208.57:80 orport=443 id=92CFD9565B24646CAC2D172D3DB503D69E777B8A ipv6=[2a00:1c20:4089:1234:7825:2c5d:1ecd:c66f]:443
-178.16.208.59:80 orport=443 id=136F9299A5009A4E0E96494E723BDB556FB0A26B ipv6=[2a00:1c20:4089:1234:bff6:e1bb:1ce3:8dc6]:443
-
-# Email sent directly to teor, verified using relay contact info
-5.39.76.158:80 orport=443 id=C41F60F8B00E7FEF5CCC5BC6BB514CA1B8AAB651
-
-# Email sent directly to teor, verified using relay contact info
-109.163.234.2:80 orport=443 id=14F92FF956105932E9DEC5B82A7778A0B1BD9A52
-109.163.234.4:80 orport=443 id=4888770464F0E900EFEF1BA181EA873D13F7713C
-109.163.234.5:80 orport=443 id=5EB8D862E70981B8690DEDEF546789E26AB2BD24
-109.163.234.7:80 orport=443 id=23038A7F2845EBA2234ECD6651BD4A7762F51B18
-109.163.234.8:80 orport=443 id=0818DAE0E2DDF795AEDEAC60B15E71901084F281
-109.163.234.9:80 orport=443 id=ABF7FBF389C9A747938B639B20E80620B460B2A9
-62.102.148.67:80 orport=443 id=4A0C3E177AF684581EF780981AEAF51A98A6B5CF
-# Assume details update is permanent
-77.247.181.166:80 orport=443 id=77131D7E2EC1CA9B8D737502256DA9103599CE51 # CriticalMass
-77.247.181.164:80 orport=443 id=204DFD2A2C6A0DC1FA0EACB495218E0B661704FD # HaveHeart
-77.247.181.162:80 orport=443 id=7BFB908A3AA5B491DA4CA72CCBEE0E1F2A939B55 # sofia
-
-# https://twitter.com/biotimylated/status/718994247500718080
-212.47.252.149:9030 orport=9001 id=2CAC39BAA996791CEFAADC9D4754D65AF5EB77C0
-
-# Email sent directly to teor, verified using relay contact info
-46.165.230.5:80 orport=443 id=A0F06C2FADF88D3A39AA3072B406F09D7095AC9E
-
-# Email sent directly to teor, verified using relay contact info
-94.242.246.24:23 orport=8080 id=EC116BCB80565A408CE67F8EC3FE3B0B02C3A065 ipv6=[2a01:608:ffff:ff07::1:24]:9004
-94.242.246.23:443 orport=9001 id=F65E0196C94DFFF48AFBF2F5F9E3E19AAE583FD0 ipv6=[2a01:608:ffff:ff07::1:23]:9003
-85.248.227.164:444 orport=9002 id=B84F248233FEA90CAD439F292556A3139F6E1B82 ipv6=[2a00:1298:8011:212::164]:9004
-85.248.227.163:443 orport=9001 id=C793AB88565DDD3C9E4C6F15CCB9D8C7EF964CE9 ipv6=[2a00:1298:8011:212::163]:9003
-
-# Email sent directly to teor, verified using relay contact info
-148.251.190.229:9030 orport=9010 id=BF0FB582E37F738CD33C3651125F2772705BB8E8 ipv6=[2a01:4f8:211:c68::2]:9010
-
-# Email sent directly to teor, verified using relay contact info
-5.79.68.161:81 orport=443 id=9030DCF419F6E2FBF84F63CBACBA0097B06F557E ipv6=[2001:1af8:4700:a012:1::1]:443
-5.79.68.161:9030 orport=9001 id=B7EC0C02D7D9F1E31B0C251A6B058880778A0CD1 ipv6=[2001:1af8:4700:a012:1::1]:9001
-
-# Email sent directly to teor, verified using relay contact info
-62.210.92.11:9030 orport=9001 id=0266B0660F3F20A7D1F3D8335931C95EF50F6C6B ipv6=[2001:bc8:338c::1]:9001
-62.210.92.11:9130 orport=9101 id=387B065A38E4DAA16D9D41C2964ECBC4B31D30FF ipv6=[2001:bc8:338c::1]:9101
-
-# Email sent directly to teor, verified using relay contact info
-188.165.194.195:9030 orport=9001 id=49E7AD01BB96F6FE3AB8C3B15BD2470B150354DF
-
-# Message sent directly to teor, verified using relay contact info
-95.215.44.110:80 orport=443 id=D56AA4A1AA71961F5279FB70A6DCF7AD7B993EB5
-95.215.44.122:80 orport=443 id=998D8FE06B867AA3F8D257A7D28FFF16964D53E2
-95.215.44.111:80 orport=443 id=A7C7FD510B20BC8BE8F2A1D911364E1A23FBD09F
-
-# Email sent directly to teor, verified using relay contact info
-86.59.119.88:80 orport=443 id=ACD889D86E02EDDAB1AFD81F598C0936238DC6D0
-86.59.119.83:80 orport=443 id=FC9AC8EA0160D88BCCFDE066940D7DD9FA45495B
-
-# Email sent directly to teor, verified using relay contact info
-193.11.164.243:9030 orport=9001 id=FFA72BD683BC2FCF988356E6BEC1E490F313FB07 ipv6=[2001:6b0:7:125::243]:9001
-109.105.109.162:52860 orport=60784 id=32EE911D968BE3E016ECA572BB1ED0A9EE43FC2F ipv6=[2001:948:7:2::163]:5001
-
-# Email sent directly to teor, verified using relay contact info
-146.0.32.144:9030 orport=9001 id=35E8B344F661F4F2E68B17648F35798B44672D7E
-
-# Email sent directly to teor, verified using relay contact info
-46.252.26.2:45212 orport=49991 id=E589316576A399C511A9781A73DA4545640B479D
-
-# Email sent directly to teor, verified using relay contact info
-89.187.142.208:80 orport=443 id=64186650FFE4469EBBE52B644AE543864D32F43C
-
-# Email sent directly to teor
-# Assume details update is permanent
-212.51.134.123:9030 orport=9001 id=50586E25BE067FD1F739998550EDDCB1A14CA5B2 # Jans
-
-# Email sent directly to teor, verified using relay contact info
-46.101.143.173:80 orport=443 id=F960DF50F0FD4075AC9B505C1D4FFC8384C490FB
-
-# Email sent directly to teor, verified using relay contact info
-193.171.202.146:9030 orport=9001 id=01A9258A46E97FF8B2CAC7910577862C14F2C524
-
-# Email sent directly to teor, verified using relay contact info
-# Assume details update is permanent
-197.231.221.211:9030 orport=443 id=BC630CBBB518BE7E9F4E09712AB0269E9DC7D626 # IPredator
-
-# Email sent directly to teor, verified using relay contact info
-185.61.138.18:8080 orport=4443 id=2541759BEC04D37811C2209A88E863320271EC9C
-
-# Email sent directly to teor, verified using relay contact info
-193.11.114.45:9031 orport=9002 id=80AAF8D5956A43C197104CEF2550CD42D165C6FB
-193.11.114.43:9030 orport=9001 id=12AD30E5D25AA67F519780E2111E611A455FDC89 ipv6=[2001:6b0:30:1000::99]:9050
-193.11.114.46:9032 orport=9003 id=B83DC1558F0D34353BB992EF93AFEAFDB226A73E
-
-# Email sent directly to teor, verified using relay contact info
-138.201.250.33:9012 orport=9011 id=2BA2C8E96B2590E1072AECE2BDB5C48921BF8510
-
-# Email sent directly to teor, verified using relay contact info
-37.221.162.226:9030 orport=9001 id=D64366987CB39F61AD21DBCF8142FA0577B92811
-
-# Email sent directly to teor, verified using relay contact info
-91.219.237.244:80 orport=443 id=92ECC9E0E2AF81BB954719B189AC362E254AD4A5
-
-# Email sent directly to teor, verified using relay contact info
-185.21.100.50:9030 orport=9001 id=58ED9C9C35E433EE58764D62892B4FFD518A3CD0 ipv6=[2a00:1158:2:cd00:0:74:6f:72]:443
-
-# Email sent directly to teor, verified using relay contact info
-193.35.52.53:9030 orport=9001 id=DAA39FC00B196B353C2A271459C305C429AF09E4
-
-# Email sent directly to teor, verified using relay contact info
-134.119.3.164:9030 orport=9001 id=D1B8AAA98C65F3DF7D8BB3AF881CAEB84A33D8EE
-
-# Email sent directly to teor, verified using relay contact info
-173.212.254.192:31336 orport=31337 id=99E246DB480B313A3012BC3363093CC26CD209C7
-
-# Email sent directly to teor, verified using relay contact info
-178.62.22.36:80 orport=443 id=A0766C0D3A667A3232C7D569DE94A28F9922FCB1 ipv6=[2a03:b0c0:1:d0::174:1]:9050
-188.166.23.127:80 orport=443 id=8672E8A01B4D3FA4C0BBE21C740D4506302EA487 ipv6=[2a03:b0c0:2:d0::27b:7001]:9050
-198.199.64.217:80 orport=443 id=B1D81825CFD7209BD1B4520B040EF5653C204A23 ipv6=[2604:a880:400:d0::1a9:b001]:9050
-159.203.32.149:80 orport=443 id=55C7554AFCEC1062DCBAC93E67B2E03C6F330EFC ipv6=[2604:a880:cad:d0::105:f001]:9050
-
-# Email sent directly to teor, verified using relay contact info
-5.196.31.80:9030 orport=9900 id=DFB2EB472643FAFCD5E73D2E37D51DB67203A695 ipv6=[2001:41d0:52:400::a65]:9900
-
-# Email sent directly to teor, verified using relay contact info
-188.138.112.60:1433 orport=1521 id=C414F28FD2BEC1553024299B31D4E726BEB8E788
-
-# Email sent directly to teor, verified using relay contact info
-213.61.66.118:9031 orport=9001 id=30648BC64CEDB3020F4A405E4AB2A6347FB8FA22
-213.61.66.117:9032 orport=9002 id=6E44A52E3D1FF7683FE5C399C3FB5E912DE1C6B4
-213.61.66.115:9034 orport=9004 id=480CCC94CEA04D2DEABC0D7373868E245D4C2AE2
-213.61.66.116:9033 orport=9003 id=A9DEB920B42B4EC1DE6249034039B06D61F38690
-
-# Email sent directly to teor, verified using relay contact info
-136.243.187.165:9030 orport=443 id=1AC65257D7BFDE7341046625470809693A8ED83E
-
-# Email sent directly to teor, verified using relay contact info
-212.47.230.49:9030 orport=9001 id=3D6D0771E54056AEFC28BB1DE816951F11826E97
-
-# Email sent directly to teor, verified using relay contact info
-192.99.55.69:80 orport=443 id=0682DE15222A4A4A0D67DBA72A8132161992C023
-192.99.59.140:80 orport=443 id=3C9148DA49F20654730FAC83FFF693A4D49D0244
-51.254.215.13:80 orport=443 id=73C30C8ABDD6D9346C822966DE73B9F82CB6178A
-51.254.215.129:80 orport=443 id=7B4491D05144B20AE8519AE784B94F0525A8BB79
-192.99.59.139:80 orport=443 id=82EC878ADA7C205146B9F5193A7310867FAA0D7B
-51.254.215.124:80 orport=443 id=98999EBE89B5FA9AA0C58421F0B46C3D0AF51CBA
-51.254.214.208:80 orport=443 id=C3F0D1417848EAFC41277A73DEB4A9F2AEC23DDF
-192.99.59.141:80 orport=443 id=F45426551795B9DA78BEDB05CD5F2EACED8132E4
-192.99.59.14:80 orport=443 id=161A1B29A37EBF096D2F8A9B1E176D6487FE42AE
-
-# Email sent directly to teor, verified using relay contact info
-151.80.42.103:9030 orport=9001 id=9007C1D8E4F03D506A4A011B907A9E8D04E3C605 ipv6=[2001:41d0:e:f67::114]:9001
-
-# Email sent directly to teor, verified using relay contact info
-176.31.159.231:80 orport=443 id=D5DBCC0B4F029F80C7B8D33F20CF7D97F0423BB1
-176.31.159.230:80 orport=443 id=631748AFB41104D77ADBB7E5CD4F8E8AE876E683
-195.154.79.128:80 orport=443 id=C697612CA5AED06B8D829FCC6065B9287212CB2F
-195.154.9.161:80 orport=443 id=B6295A9960F89BD0C743EEBC5670450EA6A34685
-46.148.18.74:8080 orport=443 id=6CACF0B5F03C779672F3C5C295F37C8D234CA3F7
-
-# Email sent directly to teor, verified using relay contact info
-37.187.102.108:80 orport=443 id=F4263275CF54A6836EE7BD527B1328836A6F06E1 ipv6=[2001:41d0:a:266c::1]:443 # EvilMoe
-212.47.241.21:80 orport=443 id=892F941915F6A0C6E0958E52E0A9685C190CF45C # EvilMoe
-
-# Email sent directly to teor, verified using relay contact info
-212.129.38.254:9030 orport=9001 id=FDF845FC159C0020E2BDDA120C30C5C5038F74B4
-
-# Email sent directly to teor
-37.157.195.87:8030 orport=443 id=12FD624EE73CEF37137C90D38B2406A66F68FAA2 # thanatosCZ
-5.189.169.190:8030 orport=8080 id=8D79F73DCD91FC4F5017422FAC70074D6DB8DD81 # thanatosDE
-
-# Email sent directly to teor, verified using relay contact info
-37.187.7.74:80 orport=443 id=AEA43CB1E47BE5F8051711B2BF01683DB1568E05 ipv6=[2001:41d0:a:74a::1]:443
-
-# Email sent directly to teor, verified using relay contact info
-185.66.250.141:9030 orport=9001 id=B1726B94885CE3AC3910CA8B60622B97B98E2529
-
-# Email sent directly to teor, verified using relay contact info
-# Email sent directly to Phoul
-185.104.120.7:9030 orport=443 id=445F1C853966624FB3CF1E12442570DC553CC2EC ipv6=[2a06:3000::120:7]:443
-185.104.120.2:9030 orport=21 id=518FF8708698E1DA09C823C36D35DF89A2CAD956 ipv6=[2a06:3000::120:2]:443
-185.104.120.4:9030 orport=9001 id=F92B3CB9BBE0CB22409843FB1AE4DBCD5EFAC835 ipv6=[2a06:3000::120:4]:443
-185.104.120.3:9030 orport=21 id=707C1B61AC72227B34487B56D04BAA3BA1179CE8 ipv6=[2a06:3000::120:3]:443
-185.104.120.5:80 orport=443 id=3EBDF84DE3B16F0EBF7D51450F07913A02EFDA6C ipv6=[2a06:3000::120:5]:443
-185.104.120.60:80 orport=443 id=D05C9C7068EB5A45F670D5E38A14907EE6223141 ipv6=[2a06:3000::120:60]:443
-
-
-# Email sent directly to teor, verified using relay contact info
-37.187.102.186:9030 orport=9001 id=489D94333DF66D57FFE34D9D59CC2D97E2CB0053 ipv6=[2001:41d0:a:26ba::1]:9001
-
-# Email sent directly to teor, verified using relay contact info
-198.96.155.3:8080 orport=5001 id=BCEDF6C193AA687AE471B8A22EBF6BC57C2D285E
-
-# Email sent directly to teor, verified using relay contact info
-212.83.154.33:8888 orport=443 id=3C79699D4FBC37DE1A212D5033B56DAE079AC0EF
-212.83.154.33:8080 orport=8443 id=322C6E3A973BC10FC36DE3037AD27BC89F14723B
-
-# Email sent directly to teor, verified using relay contact info
-51.255.41.65:9030 orport=9001 id=9231DF741915AA1630031A93026D88726877E93A
-
-# Email sent directly to teor, verified using relay contact info
-78.142.142.246:80 orport=443 id=5A5E03355C1908EBF424CAF1F3ED70782C0D2F74
-
-# Email sent directly to teor, verified using relay contact info
-195.154.97.91:80 orport=443 id=BD33C50D50DCA2A46AAED54CA319A1EFEBF5D714
-
-# Email sent directly to teor, verified using relay contact info
-62.210.129.246:80 orport=443 id=79E169B25E4C7CE99584F6ED06F379478F23E2B8
-
-# Email sent directly to teor, verified using relay contact info
-5.196.74.215:9030 orport=9001 id=5818055DFBAF0FA7F67E8125FD63E3E7F88E28F6
-
-# Email sent directly to teor, verified using relay contact info
-212.47.233.86:9030 orport=9001 id=B4CAFD9CBFB34EC5DAAC146920DC7DFAFE91EA20
-
-# Email sent directly to teor, verified using relay contact info
-85.214.206.219:9030 orport=9001 id=98F8D5F359949E41DE8DF3DBB1975A86E96A84A0
-
-# Email sent directly to teor, verified using relay contact info
-46.166.170.4:80 orport=443 id=19F42DB047B72C7507F939F5AEA5CD1FA4656205
-46.166.170.5:80 orport=443 id=DA705AD4591E7B4708FA2CAC3D53E81962F3E6F6
-
-# Email sent directly to teor, verified using relay contact info
-5.189.157.56:80 orport=443 id=77F6D6A6B6EAFB8F5DADDC07A918BBF378ED6725
-
-# Email sent directly to teor, verified using relay contact info
-46.28.110.244:80 orport=443 id=9F7D6E6420183C2B76D3CE99624EBC98A21A967E
-185.13.39.197:80 orport=443 id=001524DD403D729F08F7E5D77813EF12756CFA8D
-95.130.12.119:80 orport=443 id=587E0A9552E4274B251F29B5B2673D38442EE4BF
-
-# Email sent directly to teor, verified using relay contact info
-212.129.62.232:80 orport=443 id=B143D439B72D239A419F8DCE07B8A8EB1B486FA7
-
-# Email sent directly to teor, verified using relay contact info
-91.219.237.229:80 orport=443 id=1ECD73B936CB6E6B3CD647CC204F108D9DF2C9F7
-
-# Email sent directly to teor, verified using relay contact info
-178.62.197.82:80 orport=443 id=0D3EBA17E1C78F1E9900BABDB23861D46FCAF163
-
-# Email sent directly to teor, verified using relay contact info
-82.223.21.74:9030 orport=9001 id=7A32C9519D80CA458FC8B034A28F5F6815649A98 ipv6=[2001:470:53e0::cafe]:9050
-
-# Email sent directly to teor, verified using relay contact info
-146.185.177.103:80 orport=9030 id=9EC5E097663862DF861A18C32B37C5F82284B27D
-
-# Email sent directly to teor, verified using relay contact info
-37.187.22.87:9030 orport=9001 id=36B9E7AC1E36B62A9D6F330ABEB6012BA7F0D400 ipv6=[2001:41d0:a:1657::1]:9001
-
-# Email sent directly to teor, verified using relay contact info
-37.59.46.159:9030 orport=9001 id=CBD0D1BD110EC52963082D839AC6A89D0AE243E7
-
-# Email sent directly to teor, verified using relay contact info
-212.47.250.243:9030 orport=9001 id=5B33EDBAEA92F446768B3753549F3B813836D477
-# Confirm with operator before adding these
-#163.172.133.36:9030 orport=9001 id=D8C2BD36F01FA86F4401848A0928C4CB7E5FDFF9
-#158.69.216.70:9030 orport=9001 id=0ACE25A978D4422C742D6BC6345896719BF6A7EB
-
-# Email sent directly to teor, verified using relay contact info
-5.199.142.236:9030 orport=9001 id=F4C0EDAA0BF0F7EC138746F8FEF1CE26C7860265
-
-# Email sent directly to teor, verified using relay contact info
-46.8.249.10:80 orport=443 id=31670150090A7C3513CB7914B9610E786391A95D
-
-# Email sent directly to teor, verified using relay contact info
-144.76.163.93:9030 orport=9001 id=22F08CF09764C4E8982640D77F71ED72FF26A9AC
-
-# Email sent directly to teor, verified using relay contact info
-46.4.24.161:9030 orport=9001 id=DB4C76A3AD7E234DA0F00D6F1405D8AFDF4D8DED
-46.4.24.161:9031 orport=9002 id=7460F3D12EBE861E4EE073F6233047AACFE46AB4
-46.38.51.132:9030 orport=9001 id=810DEFA7E90B6C6C383C063028EC397A71D7214A
-163.172.194.53:9030 orport=9001 id=8C00FA7369A7A308F6A137600F0FA07990D9D451 ipv6=[2001:bc8:225f:142:6c69:7461:7669:73]:9001
-
-# Email sent directly to teor, verified using relay contact info
-176.10.107.180:9030 orport=9001 id=3D7E274A87D9A89AF064C13D1EE4CA1F184F2600
-
-# Email sent directly to teor, verified using relay contact info
-46.28.207.19:80 orport=443 id=5B92FA5C8A49D46D235735504C72DBB3472BA321
-46.28.207.141:80 orport=443 id=F69BED36177ED727706512BA6A97755025EEA0FB
-46.28.205.170:80 orport=443 id=AF322D83A4D2048B22F7F1AF5F38AFF4D09D0B76
-95.183.48.12:80 orport=443 id=7187CED1A3871F837D0E60AC98F374AC541CB0DA
-
-# Email sent directly to teor, verified using relay contact info
-93.180.156.84:9030 orport=9001 id=8844D87E9B038BE3270938F05AF797E1D3C74C0F
-
-# Email sent directly to teor, verified using relay contact info
-37.187.115.157:9030 orport=9001 id=D5039E1EBFD96D9A3F9846BF99EC9F75EDDE902A
-
-# Email sent directly to teor, verified using relay contact info
-5.34.183.205:80 orport=443 id=DDD7871C1B7FA32CB55061E08869A236E61BDDF8
-
-# Email sent directly to teor, verified using relay contact info
-51.254.246.203:9030 orport=9001 id=47B596B81C9E6277B98623A84B7629798A16E8D5
-
-# Email sent directly to teor, verified using relay contact info
-5.9.146.203:80 orport=443 id=1F45542A24A61BF9408F1C05E0DCE4E29F2CBA11
-
-# Email sent directly to teor, verified using relay contact info
-# Updated details from atlas based on ticket #20010
-163.172.176.167:80 orport=443 id=230A8B2A8BA861210D9B4BA97745AEC217A94207
-163.172.149.155:80 orport=443 id=0B85617241252517E8ECF2CFC7F4C1A32DCD153F
-163.172.149.122:80 orport=443 id=A9406A006D6E7B5DA30F2C6D4E42A338B5E340B2
-
-# Email sent directly to teor, verified using relay contact info
-204.11.50.131:9030 orport=9001 id=185F2A57B0C4620582602761097D17DB81654F70
-
-# Email sent directly to teor, verified using relay contact info
-151.236.222.217:44607 orport=9001 id=94D58704C2589C130C9C39ED148BD8EA468DBA54
-
-# Email sent directly to teor, verified using relay contact info
-185.35.202.221:9030 orport=9001 id=C13B91384CDD52A871E3ECECE4EF74A7AC7DCB08 ipv6=[2a02:ed06::221]:9001
-
-# Email sent directly to teor, verified using relay contact info
-5.9.151.241:9030 orport=4223 id=9BF04559224F0F1C3C953D641F1744AF0192543A ipv6=[2a01:4f8:190:34f0::2]:4223
-
-# Email sent directly to teor, verified using relay contact info
-89.40.71.149:8081 orport=8080 id=EC639EDAA5121B47DBDF3D6B01A22E48A8CB6CC7
-
-# Email sent directly to teor, verified using relay contact info
-92.222.20.130:80 orport=443 id=0639612FF149AA19DF3BCEA147E5B8FED6F3C87C
-
-# Email sent directly to teor, verified using relay contact info
-80.112.155.100:9030 orport=9001 id=53B000310984CD86AF47E5F3CD0BFF184E34B383 ipv6=[2001:470:7b02::38]:9001
-
-# Email sent directly to teor, verified using relay contact info
-83.212.99.68:80 orport=443 id=DDBB2A38252ADDA53E4492DDF982CA6CC6E10EC0 ipv6=[2001:648:2ffc:1225:a800:bff:fe3d:67b5]:443
-
-# Email sent directly to teor, verified using relay contact info
-95.130.11.147:9030 orport=443 id=6B697F3FF04C26123466A5C0E5D1F8D91925967A
-
-# Email sent directly to teor, verified using relay contact info
-128.199.55.207:9030 orport=9001 id=BCEF908195805E03E92CCFE669C48738E556B9C5 ipv6=[2a03:b0c0:2:d0::158:3001]:9001
-
-# Email sent directly to teor, verified using relay contact info
-178.32.216.146:9030 orport=9001 id=17898F9A2EBC7D69DAF87C00A1BD2FABF3C9E1D2
-
-# Email sent directly to teor, verified using relay contact info
-212.83.40.238:9030 orport=9001 id=F409FA7902FD89270E8DE0D7977EA23BC38E5887
-
-# Email sent directly to teor, verified using relay contact info
-204.8.156.142:80 orport=443 id=94C4B7B8C50C86A92B6A20107539EE2678CF9A28
-
-# Email sent directly to teor, verified using relay contact info
-80.240.139.111:80 orport=443 id=DD3BE7382C221F31723C7B294310EF9282B9111B
-
-# Email sent directly to teor, verified using relay contact info
-185.97.32.18:9030 orport=9001 id=04250C3835019B26AA6764E85D836088BE441088
-
-# Email sent directly to teor
-149.56.45.200:9030 orport=9001 id=FE296180018833AF03A8EACD5894A614623D3F76 ipv6=[2607:5300:201:3000::17d3]:9002 # PiotrTorpotkinOne
-
-# Email sent directly to teor, verified using relay contact info
-81.2.209.10:443 orport=80 id=B6904ADD4C0D10CDA7179E051962350A69A63243 ipv6=[2001:15e8:201:1::d10a]:80
-
-# Email sent directly to teor, verified using relay contact info
-# IPv6 address unreliable
-195.154.164.243:80 orport=443 id=AC66FFA4AB35A59EBBF5BF4C70008BF24D8A7A5C #ipv6=[2001:bc8:399f:f000::1]:993
-138.201.26.2:80 orport=443 id=6D3A3ED5671E4E3F58D4951438B10AE552A5FA0F
-81.7.16.182:80 orport=443 id=51E1CF613FD6F9F11FE24743C91D6F9981807D82 ipv6=[2a02:180:1:1::517:10b6]:993
-134.119.36.135:80 orport=443 id=763C9556602BD6207771A7A3D958091D44C43228 ipv6=[2a00:1158:3::2a8]:993
-46.228.199.19:80 orport=443 id=E26AFC5F718E21AC502899B20C653AEFF688B0D2 ipv6=[2001:4ba0:cafe:4a::1]:993
-37.200.98.5:80 orport=443 id=231C2B9C8C31C295C472D031E06964834B745996 ipv6=[2a00:1158:3::11a]:993
-46.23.70.195:80 orport=443 id=C9933B3725239B6FAB5227BA33B30BE7B48BB485
-185.15.244.124:80 orport=443 id=935BABE2564F82016C19AEF63C0C40B5753BA3D2 ipv6=[2001:4ba0:cafe:e35::1]:993
-195.154.116.232:80 orport=443 id=B35C5739C8C5AB72094EB2B05738FD1F8EEF6EBD ipv6=[2001:bc8:399f:200::1]:993
-195.154.121.198:80 orport=443 id=0C77421C890D16B6D201283A2244F43DF5BC89DD ipv6=[2001:bc8:399f:100::1]:993
-37.187.20.59:80 orport=443 id=91D23D8A539B83D2FB56AA67ECD4D75CC093AC55 ipv6=[2001:41d0:a:143b::1]:993
-217.12.208.117:80 orport=443 id=E6E18151300F90C235D3809F90B31330737CEB43 ipv6=[2a00:1ca8:a7::1bb]:993
-81.7.10.251:80 orport=443 id=8073670F8F852971298F8AF2C5B23AE012645901 ipv6=[2a02:180:1:1::517:afb]:993
-46.36.39.50:80 orport=443 id=ED4B0DBA79AEF5521564FA0231455DCFDDE73BB6 ipv6=[2a02:25b0:aaaa:aaaa:8d49:b692:4852:0]:995
-91.194.90.103:80 orport=443 id=75C4495F4D80522CA6F6A3FB349F1B009563F4B7 ipv6=[2a02:c205:3000:5449::1]:993
-163.172.25.118:80 orport=22 id=0CF8F3E6590F45D50B70F2F7DA6605ECA6CD408F
-188.138.88.42:80 orport=443 id=70C55A114C0EF3DC5784A4FAEE64388434A3398F
-81.7.13.84:80 orport=443 id=0C1E7DD9ED0676C788933F68A9985ED853CA5812 ipv6=[2a02:180:1:1::5b8f:538c]:993
-213.246.56.95:80 orport=443 id=27E6E8E19C46751E7312420723C6162FF3356A4C ipv6=[2a00:c70:1:213:246:56:95:1]:993
-94.198.100.18:80 orport=443 id=BAACCB29197DB833F107E410E2BFAE5009EE7583
-217.12.203.46:80 orport=443 id=6A29FD8C00D573E6C1D47852345B0E5275BA3307
-212.117.180.107:80 orport=443 id=0B454C7EBA58657B91133A587C1BDAEDC6E23142
-217.12.199.190:80 orport=443 id=A37C47B03FF31CA6937D3D68366B157997FE7BCD ipv6=[2a02:27a8:0:2::486]:993
-216.230.230.247:80 orport=443 id=4C7BF55B1BFF47993DFF995A2926C89C81E4F04A
-69.30.215.42:80 orport=443 id=510176C07005D47B23E6796F02C93241A29AA0E9 ipv6=[2604:4300:a:2e::2]:993
-89.46.100.162:80 orport=443 id=6B7191639E179965FD694612C9B2C8FB4267B27D
-107.181.174.22:80 orport=443 id=5A551BF2E46BF26CC50A983F7435CB749C752553 ipv6=[2607:f7a0:3:4::4e]:993
-
-# Email sent directly to teor, verified using relay contact info
-212.238.208.48:9030 orport=9001 id=F406219CDD339026D160E53FCA0EF6857C70F109 ipv6=[2001:984:a8fb:1:ba27:ebff:feac:c109]:9001
-
-# Email sent directly to teor
-176.158.236.102:9030 orport=9001 id=DC163DDEF4B6F0C6BC226F9F6656A5A30C5C5686 # Underworld
-
-# Email sent directly to teor, verified using relay contact info
-91.229.20.27:9030 orport=9001 id=9A0D54D3A6D2E0767596BF1515E6162A75B3293F
-
-# Email sent directly to teor, verified using relay contact info
-80.127.137.19:80 orport=443 id=6EF897645B79B6CB35E853B32506375014DE3621 ipv6=[2001:981:47c1:1::6]:443
-
-# Email sent directly to teor
-163.172.138.22:80 orport=443 id=16102E458460349EE45C0901DAA6C30094A9BBEA ipv6=[2001:bc8:4400:2100::1:3]:443 # mkultra
-
-# Email sent directly to teor, verified using relay contact info
-97.74.237.196:9030 orport=9001 id=2F0F32AB1E5B943CA7D062C03F18960C86E70D94
-
-# Email sent directly to teor, verified using relay contact info
-192.187.124.98:9030 orport=9001 id=FD1871854BFC06D7B02F10742073069F0528B5CC
-
-# Email sent directly to teor, verified using relay contact info
-178.62.98.160:9030 orport=9001 id=8B92044763E880996A988831B15B2B0E5AD1544A
-
-# Email sent directly to teor, verified using relay contact info
-163.172.217.50:9030 orport=9001 id=02ECD99ECD596013A8134D46531560816ECC4BE6
-
-# Email sent directly to teor, verified using relay contact info
-185.100.86.100:80 orport=443 id=0E8C0C8315B66DB5F703804B3889A1DD66C67CE0
-185.100.84.82:80 orport=443 id=7D05A38E39FC5D29AFE6BE487B9B4DC9E635D09E
-
-# Email sent directly to teor, verified using relay contact info
-78.24.75.53:9030 orport=9001 id=DEB73705B2929AE9BE87091607388939332EF123
-
-# Email sent directly to teor, verified using relay contact info
-46.101.237.246:9030 orport=9001 id=75F1992FD3F403E9C082A5815EB5D12934CDF46C ipv6=[2a03:b0c0:3:d0::208:5001]:9050
-178.62.86.96:9030 orport=9001 id=439D0447772CB107B886F7782DBC201FA26B92D1 ipv6=[2a03:b0c0:1:d0::3cf:7001]:9050
-
-# Email sent directly to teor, verified using relay contact info
-# Very low bandwidth, stale consensuses, excluded to cut down on warnings
-#91.233.106.121:80 orport=443 id=896364B7996F5DFBA0E15D1A2E06D0B98B555DD6
-
-# Email sent directly to teor, verified using relay contact info
-167.114.113.48:9030 orport=403 id=2EC0C66EA700C44670444280AABAB1EC78B722A0
-
-# Email sent directly to teor, verified using relay contact info
-# Assume details update is permanent
-213.141.138.174:9030 orport=9001 id=BD552C165E2ED2887D3F1CCE9CFF155DDA2D86E6 # Schakalium
-
-# Email sent directly to teor, verified using relay contact info
-95.128.43.164:80 orport=443 id=616081EC829593AF4232550DE6FFAA1D75B37A90 ipv6=[2a02:ec0:209:10::4]:443
-
-# Email sent directly to teor, verified using relay contact info
-166.82.21.200:9030 orport=9029 id=D5C33F3E203728EDF8361EA868B2939CCC43FAFB
-
-# Email sent directly to teor, verified using relay contact info
-91.121.54.8:9030 orport=9001 id=CBEE0F3303C8C50462A12107CA2AE061831931BC
-
-# Email sent directly to teor, verified using relay contact info
-178.217.184.32:8080 orport=443 id=8B7F47AE1A5D954A3E58ACDE0865D09DBA5B738D
-
-# Email sent directly to teor, verified using relay contact info
-85.10.201.47:9030 orport=9001 id=D8B7A3A6542AA54D0946B9DC0257C53B6C376679 ipv6=[2a01:4f8:a0:43eb::beef]:9001
-
-# Email sent directly to teor, verified using relay contact info
-120.29.217.46:80 orport=443 id=5E853C94AB1F655E9C908924370A0A6707508C62
-
-# Email sent directly to teor, verified using relay contact info
-37.153.1.10:9030 orport=9001 id=9772EFB535397C942C3AB8804FB35CFFAD012438
-
-# Email sent directly to teor, verified using relay contact info
-92.222.4.102:9030 orport=9001 id=1A6B8B8272632D8AD38442027F822A367128405C
-
-# Email sent directly to teor, verified using relay contact info
-31.31.78.49:80 orport=443 id=46791D156C9B6C255C2665D4D8393EC7DBAA7798
-
-# Email sent directly to teor
-192.160.102.169:80 orport=9001 id=C0192FF43E777250084175F4E59AC1BA2290CE38 ipv6=[2620:132:300c:c01d::9]:9002 # manipogo
-192.160.102.166:80 orport=9001 id=547DA56F6B88B6C596B3E3086803CDA4F0EF8F21 ipv6=[2620:132:300c:c01d::6]:9002 # chaucer
-192.160.102.170:80 orport=9001 id=557ACEC850F54EEE65839F83CACE2B0825BE811E ipv6=[2620:132:300c:c01d::a]:9002 # ogopogo
-192.160.102.164:80 orport=9001 id=823AA81E277F366505545522CEDC2F529CE4DC3F ipv6=[2620:132:300c:c01d::4]:9002 # snowfall
-192.160.102.165:80 orport=9001 id=C90CA3B7FE01A146B8268D56977DC4A2C024B9EA ipv6=[2620:132:300c:c01d::5]:9002 # cowcat
-192.160.102.168:80 orport=9001 id=F6A358DD367B3282D6EF5824C9D45E1A19C7E815 ipv6=[2620:132:300c:c01d::8]:9002 # prawksi
-
-# Email sent directly to teor, verified using relay contact info
-136.243.214.137:80 orport=443 id=B291D30517D23299AD7CEE3E60DFE60D0E3A4664
-
-# Email sent directly to teor, verified using relay contact info
-192.87.28.28:9030 orport=9001 id=ED2338CAC2711B3E331392E1ED2831219B794024
-192.87.28.82:9030 orport=9001 id=844AE9CAD04325E955E2BE1521563B79FE7094B7
-
-# Email sent directly to teor, verified using relay contact info
-192.87.28.28:9030 orport=9001 id=ED2338CAC2711B3E331392E1ED2831219B794024
-# same machine as ED2338CAC2711B3E331392E1ED2831219B794024
-192.87.28.82:9030 orport=9001 id=844AE9CAD04325E955E2BE1521563B79FE7094B7
-
-# https://twitter.com/kosjoli/status/719507270904758272
-85.10.202.87:9030 orport=9001 id=971AFB23C168DCD8EDA17473C1C452B359DE3A5A
-176.9.5.116:9030 orport=9001 id=A1EB8D8F1EE28DB98BBB1EAA3B4BEDD303BAB911
-46.4.111.124:9030 orport=9001 id=D9065F9E57899B3D272AA212317AF61A9B14D204
-
-# Email sent directly to teor, verified using relay contact info
-185.100.85.61:80 orport=443 id=025B66CEBC070FCB0519D206CF0CF4965C20C96E
-
-# Email sent directly to teor, verified using relay contact info
-108.166.168.158:80 orport=443 id=CDAB3AE06A8C9C6BF817B3B0F1877A4B91465699
-
-# Email sent directly to teor, verified using relay contact info
-91.219.236.222:80 orport=443 id=20704E7DD51501DC303FA51B738D7B7E61397CF6
-
-# Email sent directly to teor, verified using relay contact info
-185.14.185.240:9030 orport=443 id=D62FB817B0288085FAC38A6DC8B36DCD85B70260
-192.34.63.137:9030 orport=443 id=ABCB4965F1FEE193602B50A365425105C889D3F8
-128.199.197.16:9030 orport=443 id=DEE5298B3BA18CDE651421CD2DCB34A4A69F224D
-
-# Email sent directly to teor, verified using relay contact info
-185.13.38.75:9030 orport=9001 id=D2A1703758A0FBBA026988B92C2F88BAB59F9361
-
-# Email sent directly to teor, verified using relay contact info
-128.204.39.106:9030 orport=9001 id=6F0F3C09AF9580F7606B34A7678238B3AF7A57B7
-
-# Email sent directly to teor, verified using relay contact info
-198.50.191.95:80 orport=443 id=39F096961ED2576975C866D450373A9913AFDC92
-
-# Email sent directly to teor, verified using relay contact info
-167.114.66.61:9696 orport=443 id=DE6CD5F09DF26076F26321B0BDFBE78ACD935C65 ipv6=[2607:5300:100::78d]:443
-
-# Email sent directly to teor, verified using relay contact info
-66.111.2.20:9030 orport=9001 id=9A68B85A02318F4E7E87F2828039FBD5D75B0142
-66.111.2.16:9030 orport=9001 id=3F092986E9B87D3FDA09B71FA3A602378285C77A
-
-# Email sent directly to teor, verified using relay contact info
-92.222.38.67:80 orport=443 id=DED6892FF89DBD737BA689698A171B2392EB3E82
-
-# Email sent directly to teor, verified using relay contact info
-212.47.228.115:9030 orport=443 id=BCA017ACDA48330D02BB70716639ED565493E36E
-
-# Email sent directly to teor, verified using relay contact info
-185.100.84.175:80 orport=443 id=39B59AF4FE54FAD8C5085FA9C15FDF23087250DB
-
-# Email sent directly to teor, verified using relay contact info
-166.70.207.2:9030 orport=9001 id=E3DB2E354B883B59E8DC56B3E7A353DDFD457812
-
-# Emails sent directly to teor, verified using relay contact info
-69.162.139.9:9030 orport=9001 id=4791FC0692EAB60DF2BCCAFF940B95B74E7654F6 ipv6=[2607:f128:40:1212::45a2:8b09]:9001
-
-# Email sent directly to teor, verified using relay contact info
-213.239.217.18:1338 orport=1337 id=C37BC191AC389179674578C3E6944E925FE186C2 ipv6=[2a01:4f8:a0:746a:101:1:1:1]:1337
-
-# Email sent directly to teor, verified using relay contact info
-# Assume details update is permanent
-188.40.128.246:9030 orport=9001 id=AD19490C7DBB26D3A68EFC824F67E69B0A96E601 ipv6=[2a01:4f8:221:1ac1:dead:beef:7005:9001]:9001 # sputnik
-129.13.131.140:80 orport=443 id=F2DFE5FA1E4CF54F8E761A6D304B9B4EC69BDAE8 ipv6=[2a00:1398:5:f604:cafe:cafe:cafe:9001]:443 # AlleKochenKaffee
-
-# Email sent directly to teor, verified using relay contact info
-88.198.253.13:9030 orport=9001 id=DF924196D69AAE3C00C115A9CCDF7BB62A175310 ipv6=[2a01:4f8:11a:b1f::2]:9001
-
-# Email sent directly to teor, verified using relay contact info
-185.100.86.128:9030 orport=9001 id=9B31F1F1C1554F9FFB3455911F82E818EF7C7883
-46.36.36.127:9030 orport=9001 id=C80DF89B21FF932DEC0D7821F679B6C79E1449C3
-
-# Email sent directly to teor, verified using relay contact info
-176.10.104.240:80 orport=443 id=0111BA9B604669E636FFD5B503F382A4B7AD6E80
-176.10.104.240:8080 orport=8443 id=AD86CD1A49573D52A7B6F4A35750F161AAD89C88
-176.10.104.243:8080 orport=8443 id=95DA61AEF23A6C851028C1AA88AD8593F659E60F
-94.230.208.147:80 orport=443 id=9AA3FF35E7A549D2337E962333D366E102FE4D50 ipv6=[2a02:418:6017::147]:443
-
-# Email sent directly to teor, verified using relay contact info
-107.170.101.39:9030 orport=443 id=30973217E70AF00EBE51797FF6D9AA720A902EAA
-
-# Email sent directly to teor
-193.70.112.165:80 orport=443 id=F10BDE279AE71515DDCCCC61DC19AC8765F8A3CC # ParkBenchInd001
-
-# Email sent directly to teor
-185.220.101.6:10006 orport=20006 id=C08DE49658E5B3CFC6F2A952B453C4B608C9A16A # niftyvolcanorabbit
-185.220.101.13:10013 orport=20013 id=71AB4726D830FAE776D74AEF790CF04D8E0151B4 # niftycottontail
-185.220.101.5:10005 orport=20005 id=1084200B44021D308EA4253F256794671B1D099A # niftyhedgehog
-185.220.101.9:10009 orport=20009 id=14877C6384A9E793F422C8D1DDA447CACA4F7C4B # niftywoodmouse
-185.220.101.8:10008 orport=20008 id=24E91955D969AEA1D80413C64FE106FAE7FD2EA9 # niftymouse
-185.220.101.1:10001 orport=20001 id=28F4F392F8F19E3FBDE09616D9DB8143A1E2DDD3 # niftycottonmouse
-185.220.101.21:10021 orport=20021 id=348B89013EDDD99E4755951D1EC284D9FED71226 # niftysquirrel
-185.220.101.10:10010 orport=20010 id=4031460683AE9E0512D3620C2758D98758AC6C93 # niftyeuropeanrabbit
-185.220.101.34:10034 orport=20034 id=47C42E2094EE482E7C9B586B10BABFB67557030B # niftyquokka
-185.220.101.18:10018 orport=20018 id=5D5006E4992F2F97DF4F8B926C3688870EB52BD8 # niftyplagiodontia
-185.220.101.28:10028 orport=20028 id=609E598FB6A00BCF7872906B602B705B64541C50 # niftychipmunk
-185.220.101.20:10020 orport=20020 id=619349D82424C601CAEB94161A4CF778993DAEE7 # niftytucotuco
-185.220.101.17:10017 orport=20017 id=644DECC5A1879C0FE23DE927DD7049F58BBDF349 # niftyhutia
-185.220.101.0:10000 orport=20000 id=6E94866ED8CA098BACDFD36D4E8E2B459B8A734E # niftybeaver
-185.220.101.30:10030 orport=20030 id=71CFDEB4D9E00CCC3E31EC4E8A29E109BBC1FB36 # niftypedetidae
-185.220.101.29:10029 orport=20029 id=7DC52AE6667A30536BA2383CD102CFC24F20AD71 # niftyllipika
-185.220.101.41:10041 orport=20041 id=7E281CD2C315C4F7A84BC7C8721C3BC974DDBFA3 # niftyporcupine
-185.220.101.25:10025 orport=20025 id=8EE0534532EA31AA5172B1892F53B2F25C76EB02 # niftyjerboa
-185.220.101.33:10033 orport=20033 id=906DCB390F2BA987AE258D745E60BAAABAD31DE8 # niftyquokka
-185.220.101.26:10026 orport=20026 id=92A6085EABAADD928B6F8E871540A1A41CBC08BA # niftypedetes
-185.220.101.40:10040 orport=20040 id=9A857254F379194D1CD76F4A79A20D2051BEDA3F # niftynutria
-185.220.101.42:10042 orport=20042 id=9B816A5B3EB20B8E4E9B9D1FBA299BD3F40F0320 # niftypygmyjerboa
-185.220.101.2:10002 orport=20002 id=B740BCECC4A9569232CDD45C0E1330BA0D030D33 # niftybunny
-185.220.101.32:10032 orport=20032 id=B771AA877687F88E6F1CA5354756DF6C8A7B6B24 # niftypika
-185.220.101.12:10012 orport=20012 id=BC82F2190DE2E97DE65F49B4A95572374BDC0789 # niftycapybara
-185.220.101.22:10022 orport=20022 id=CA37CD46799449D83B6B98B8C22C649906307888 # niftyjackrabbit
-185.220.101.4:10004 orport=20004 id=CDA2EA326E2272C57ACB26773D7252C211795B78 # niftygerbil
-185.220.101.14:10014 orport=20014 id=E7EBA5D8A4E09684D11A1DF24F75362817333768 # niftyhare
-185.220.101.16:10016 orport=20016 id=EC1997D51892E4607C68E800549A1E7E4694005A # niftyguineapig
-185.220.101.24:10024 orport=20024 id=FDA70EC93DB01E3CB418CB6943B0C68464B18B4C # niftyrat
-
-# Email sent directly to teor, verified using relay contact info
-198.232.165.2:9030 orport=9001 id=30C19B81981F450C402306E2E7CFB6C3F79CB6B2
-
-# Emails sent directly to teor, verified using relay contact info
-51.254.101.242:9002 orport=9001 id=4CC9CC9195EC38645B699A33307058624F660CCF
-
-# Emails sent directly to teor, verified using relay contact info
-# Updated IP https://trac.torproject.org/projects/tor/ticket/24805#comment:16
-94.130.186.5:80 orport=443 id=6A7551EEE18F78A9813096E82BF84F740D32B911
-
-# Email sent directly to teor, verified using relay contact info
-173.255.245.116:9030 orport=9001 id=91E4015E1F82DAF0121D62267E54A1F661AB6DC7
-
-# Email sent directly to teor, verified using relay contact info
-62.216.5.120:9030 orport=9001 id=D032D4D617140D6B828FC7C4334860E45E414FBE
-
-# Email sent directly to teor, verified using relay contact info
-51.254.136.195:80 orport=443 id=7BB70F8585DFC27E75D692970C0EEB0F22983A63
-
-# Email sent directly to teor, verified using relay contact info
-5.196.88.122:9030 orport=9001 id=0C2C599AFCB26F5CFC2C7592435924C1D63D9484 ipv6=[2001:41d0:a:fb7a::1]:9001
-
-# Email sent directly to teor, verified using relay contact info
-5.9.158.75:80 orport=443 id=1AF72E8906E6C49481A791A6F8F84F8DFEBBB2BA ipv6=[2a01:4f8:190:514a::2]:443
-5.9.158.75:9030 orport=9001 id=D11D11877769B9E617537B4B46BFB92B443DE33D ipv6=[2a01:4f8:190:514a::2]:9001
-
-# Email sent directly to teor, verified using relay contact info
-46.101.169.151:9030 orport=9001 id=D760C5B436E42F93D77EF2D969157EEA14F9B39C ipv6=[2a03:b0c0:3:d0::74f:a001]:9001
-
-# Email sent directly to teor, verified using relay contact info
-199.249.223.81:80 orport=443 id=F7447E99EB5CBD4D5EB913EE0E35AC642B5C1EF3
-199.249.223.79:80 orport=443 id=D33292FEDE24DD40F2385283E55C87F85C0943B6
-199.249.223.78:80 orport=443 id=EC15DB62D9101481F364DE52EB8313C838BDDC29
-199.249.223.77:80 orport=443 id=CC4A3AE960E3617F49BF9887B79186C14CBA6813
-199.249.223.76:80 orport=443 id=43209F6D50C657A56FE79AF01CA69F9EF19BD338
-199.249.223.75:80 orport=443 id=60D3667F56AEC5C69CF7E8F557DB21DDF6C36060
-199.249.223.74:80 orport=443 id=5F4CD12099AF20FAF9ADFDCEC65316A376D0201C
-199.249.223.73:80 orport=443 id=5649CB2158DA94FB747415F26628BEC07FA57616
-199.249.223.72:80 orport=443 id=B028707969D8ED84E6DEA597A884F78AAD471971
-199.249.223.71:80 orport=443 id=B6320E44A230302C7BF9319E67597A9B87882241
-199.249.223.60:80 orport=443 id=B7047FBDE9C53C39011CA84E5CB2A8E3543066D0
-199.249.223.61:80 orport=443 id=40E7D6CE5085E4CDDA31D51A29D1457EB53F12AD
-199.249.223.62:80 orport=443 id=0077BCBA7244DB3E6A5ED2746E86170066684887
-199.249.223.63:80 orport=443 id=1DB25DF59DAA01B5BE3D3CEB8AFED115940EBE8B
-199.249.223.64:80 orport=443 id=9F2856F6D2B89AD4EF6D5723FAB167DB5A53519A
-199.249.223.65:80 orport=443 id=9D21F034C3BFF4E7737D08CF775DC1745706801F
-199.249.223.66:80 orport=443 id=C5A53BCC174EF8FD0DCB223E4AA929FA557DEDB2
-199.249.223.67:80 orport=443 id=155D6F57425F16C0624D77777641E4EB1B47C6F0
-199.249.223.68:80 orport=443 id=DF20497E487A979995D851A5BCEC313DF7E5BC51
-199.249.223.69:80 orport=443 id=7FA8E7E44F1392A4E40FFC3B69DB3B00091B7FD3
-
-# https://lists.torproject.org/pipermail/tor-relays/2016-December/011114.html
-86.105.212.130:9030 orport=443 id=9C900A7F6F5DD034CFFD192DAEC9CCAA813DB022
-
-# Email sent directly to teor, verified using relay contact info
-178.33.183.251:80 orport=443 id=DD823AFB415380A802DCAEB9461AE637604107FB ipv6=[2001:41d0:2:a683::251]:443
-
-# Email sent directly to teor, verified using relay contact info
-31.185.104.19:80 orport=443 id=9EAD5B2D3DBD96DBC80DCE423B0C345E920A758D
-# same machine as 9EAD5B2D3DBD96DBC80DCE423B0C345E920A758D
-31.185.104.20:80 orport=443 id=ADB2C26629643DBB9F8FE0096E7D16F9414B4F8D
-31.185.104.21:80 orport=443 id=C2AAB088555850FC434E68943F551072042B85F1
-31.185.104.22:80 orport=443 id=5BA3A52760A0EABF7E7C3ED3048A77328FF0F148
-
-# Email sent directly to teor, verified using relay contact info
-185.34.60.114:80 orport=443 id=7F7A695DF6F2B8640A70B6ADD01105BC2EBC5135
-
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013939.html
-94.142.242.84:80 orport=443 id=AA0D167E03E298F9A8CD50F448B81FBD7FA80D56 ipv6=[2a02:898:24:84::1]:443 # rejozenger
-
-# Email sent directly to teor, verified using relay contact info
-185.129.62.62:9030 orport=9001 id=ACDD9E85A05B127BA010466C13C8C47212E8A38F ipv6=[2a06:d380:0:3700::62]:9001
-
-# Email sent directly to teor, verified using relay contact info
-# The e84 part of the IPv6 address does not have a leading 0 in the consensus
-81.30.158.213:9030 orport=9001 id=789EA6C9AE9ADDD8760903171CFA9AC5741B0C70 ipv6=[2001:4ba0:cafe:e84::1]:9001
-
-# https://lists.torproject.org/pipermail/tor-relays/2016-December/011209.html
-5.9.159.14:9030 orport=9001 id=0F100F60C7A63BED90216052324D29B08CFCF797
-
-# Email sent directly to teor, verified using relay contact info
-45.62.255.25:80 orport=443 id=3473ED788D9E63361D1572B7E82EC54338953D2A
-
-# Email sent directly to teor, verified using relay contact info
-217.79.179.177:9030 orport=9001 id=3E53D3979DB07EFD736661C934A1DED14127B684 ipv6=[2001:4ba0:fff9:131:6c4f::90d3]:9001
-
-# Email sent directly to teor, verified using relay contact info
-212.47.244.38:8080 orport=443 id=E81EF60A73B3809F8964F73766B01BAA0A171E20
-163.172.157.213:8080 orport=443 id=4623A9EC53BFD83155929E56D6F7B55B5E718C24
-163.172.139.104:8080 orport=443 id=68F175CCABE727AA2D2309BCD8789499CEE36ED7
-
-# Email sent directly to teor, verified using relay contact info
-163.172.223.200:80 orport=443 id=998BF3ED7F70E33D1C307247B9626D9E7573C438
-195.154.122.54:80 orport=443 id=64E99CB34C595A02A3165484BD1215E7389322C6
-
-# Email sent directly to teor, verified using relay contact info
-# Email sent directly to Phoul
-185.100.86.128:9030 orport=9001 id=9B31F1F1C1554F9FFB3455911F82E818EF7C7883
-185.100.85.101:9030 orport=9001 id=4061C553CA88021B8302F0814365070AAE617270
-
-# Email sent directly to teor, verified using relay contact info
-89.163.247.43:9030 orport=9001 id=BC7ACFAC04854C77167C7D66B7E471314ED8C410 ipv6=[2001:4ba0:fff7:25::5]:9001
-
-# Email sent directly to teor, verified using relay contact info
-95.85.8.226:80 orport=443 id=1211AC1BBB8A1AF7CBA86BCE8689AA3146B86423
-
-# Email sent directly to teor, verified using relay contact info
-85.214.151.72:9030 orport=9001 id=722D365140C8C52DBB3C9FF6986E3CEFFE2BA812
-
-# email sent directly to teor
-72.52.75.27:9030 orport=9001 id=8567AD0A6369ED08527A8A8533A5162AC00F7678 # piecoopdotnet
-
-# Email sent directly to teor, verified using relay contact info
-5.9.146.203:80 orport=443 id=1F45542A24A61BF9408F1C05E0DCE4E29F2CBA11
-5.9.159.14:9030 orport=9001 id=0F100F60C7A63BED90216052324D29B08CFCF797
-
-# Email sent directly to teor, verified using relay contact info
-# Assume details update is permanent
-5.9.147.226:9030 orport=9001 id=B0553175AADB0501E5A61FC61CEA3970BE130FF2 ipv6=[2a01:4f8:190:30e1::2]:9001 # zwiubel
-
-# https://trac.torproject.org/projects/tor/ticket/22527#comment:1
-199.184.246.250:80 orport=443 id=1F6ABD086F40B890A33C93CC4606EE68B31C9556 ipv6=[2620:124:1009:1::171]:443
-
-# https://trac.torproject.org/projects/tor/ticket/24695
-163.172.53.84:143 orport=21 id=1C90D3AEADFF3BCD079810632C8B85637924A58E ipv6=[2001:bc8:24f8::]:21 # Multivac
-
-# Email sent directly to teor
-54.36.237.163:80 orport=443 id=DB2682153AC0CCAECD2BD1E9EBE99C6815807A1E # GermanCraft2
-
-# Email sent directly to teor
-62.138.7.171:9030 orport=9001 id=9844B981A80B3E4B50897098E2D65167E6AEF127 # 0x3d004
-62.138.7.171:8030 orport=8001 id=9285B22F7953D7874604EEE2B470609AD81C74E9 # 0x3d005
-91.121.23.100:9030 orport=9001 id=3711E80B5B04494C971FB0459D4209AB7F2EA799 # 0x3d002
-91.121.23.100:8030 orport=8001 id=CFBBA0D858F02E40B1432A65F6D13C9BDFE7A46B # 0x3d001
-51.15.13.245:9030 orport=9001 id=CED527EAC230E7B56E5B363F839671829C3BA01B # 0x3d006
-51.15.13.245:8030 orport=8001 id=8EBB8D1CF48FE2AB95C451DA8F10DB6235F40F8A # 0x3d007
-
-# Email sent directly to teor
-104.192.5.248:9030 orport=9001 id=BF735F669481EE1CCC348F0731551C933D1E2278 # Freeway11
-
-# Email sent directly to teor
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013961.html
-178.17.174.14:9030 orport=9001 id=B06F093A3D4DFAD3E923F4F28A74901BD4F74EB1 # TorExitMoldova
-178.17.170.156:9030 orport=9001 id=41C59606AFE1D1AA6EC6EF6719690B856F0B6587 # TorExitMoldova2
-
-# Email sent directly to teor
-163.172.221.44:59030 orport=59001 id=164604F5C86FC8CC9C0288BD9C02311958427597 # altego
-
-# Email sent directly to teor
-46.38.237.221:9030 orport=9001 id=D30E9D4D639068611D6D96861C95C2099140B805 # mine
-
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013911.html
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013912.html
-199.249.223.62:80 orport=443 id=0077BCBA7244DB3E6A5ED2746E86170066684887 # Quintex13
-199.249.224.45:80 orport=443 id=041646640AB306EA74B001966E86169B04CC88D2 # QuintexAirVPN26
-199.249.223.67:80 orport=443 id=155D6F57425F16C0624D77777641E4EB1B47C6F0 # Quintex18
-199.249.223.45:80 orport=443 id=1AE949967F82BBE7534A3D6BA77A7EBE1CED4369 # Quintex36
-199.249.223.63:80 orport=443 id=1DB25DF59DAA01B5BE3D3CEB8AFED115940EBE8B # Quintex14
-199.249.224.63:80 orport=443 id=1E5136DDC52FAE1219208F0A6BADB0BA62587EE6 # Quintex43
-199.249.224.46:80 orport=443 id=2ED4D25766973713EB8C56A290BF07E06B85BF12 # QuintexAirVPN27
-199.249.223.42:80 orport=443 id=3687FEC7E73F61AC66F7AE251E7DEE6BBD8C0252 # Quintex33
-199.249.223.49:80 orport=443 id=36D68478366CB8627866757EBCE7FB3C17FC1CB8 # Quintex40
-199.249.224.49:80 orport=443 id=3CA0D15567024D2E0B557DC0CF3E962B37999A79 # QuintexAirVPN30
-199.249.223.61:80 orport=443 id=40E7D6CE5085E4CDDA31D51A29D1457EB53F12AD # Quintex12
-199.249.223.76:80 orport=443 id=43209F6D50C657A56FE79AF01CA69F9EF19BD338 # QuintexAirVPN5
-199.249.224.41:80 orport=443 id=54A4820B46E65509BF3E2B892E66930A41759DE9 # QuintexAirVPN22
-199.249.223.73:80 orport=443 id=5649CB2158DA94FB747415F26628BEC07FA57616 # QuintexAirVPN8
-199.249.223.74:80 orport=443 id=5F4CD12099AF20FAF9ADFDCEC65316A376D0201C # QuintexAirVPN7
-199.249.223.75:80 orport=443 id=60D3667F56AEC5C69CF7E8F557DB21DDF6C36060 # QuintexAirVPN6
-199.249.223.46:80 orport=443 id=66E19E8C4773086F669A1E06A3F8C23B6C079129 # Quintex37
-199.249.224.65:80 orport=443 id=764BF8A03868F84C8F323C1A676AA254B80DC3BF # Quintex45
-199.249.223.48:80 orport=443 id=7A3DD280EA4CD4DD16EF8C67B93D9BDE184D1A81 # Quintex39
-199.249.224.68:80 orport=443 id=7E6E9A6FDDB8DC7C92F0CFCC3CBE76C29F061799 # Quintex48
-199.249.223.69:80 orport=443 id=7FA8E7E44F1392A4E40FFC3B69DB3B00091B7FD3 # Quintex20
-199.249.223.44:80 orport=443 id=8B80169BEF71450FC4069A190853523B7AEA45E1 # Quintex35
-199.249.224.60:80 orport=443 id=9314BD9503B9014261A65C221D77E57389DBCCC1 # Quintex50
-199.249.224.40:80 orport=443 id=9C1E7D92115D431385B8CAEA6A7C15FB89CE236B # QuintexAirVPN21
-199.249.223.65:80 orport=443 id=9D21F034C3BFF4E7737D08CF775DC1745706801F # Quintex16
-199.249.224.67:80 orport=443 id=9E2D7C6981269404AA1970B53891701A20424EF8 # Quintex47
-199.249.223.64:80 orport=443 id=9F2856F6D2B89AD4EF6D5723FAB167DB5A53519A # Quintex15
-199.249.224.48:80 orport=443 id=A0DB820FEC87C0405F7BF05DEE5E4ADED2BB9904 # QuintexAirVPN29
-199.249.224.64:80 orport=443 id=A4A393FEF48640961AACE92D041934B55348CEF9 # Quintex44
-199.249.223.72:80 orport=443 id=B028707969D8ED84E6DEA597A884F78AAD471971 # QuintexAirVPN9
-199.249.223.40:80 orport=443 id=B0CD9F9B5B60651ADC5919C0F1EAA87DBA1D9249 # Quintex31
-199.249.224.61:80 orport=443 id=B2197C23A4FF5D1C49EE45BA7688BA8BCCD89A0B # Quintex41
-199.249.223.71:80 orport=443 id=B6320E44A230302C7BF9319E67597A9B87882241 # QuintexAirVPN10
-199.249.223.60:80 orport=443 id=B7047FBDE9C53C39011CA84E5CB2A8E3543066D0 # Quintex11
-199.249.224.66:80 orport=443 id=C78AFFEEE320EA0F860961763E613FD2FAC855F5 # Quintex46
-199.249.224.44:80 orport=443 id=CB7C0D841FE376EF43F7845FF201B0290C0A239E # QuintexAirVPN25
-199.249.223.47:80 orport=443 id=CC14C97F1D23EE97766828FC8ED8582E21E11665 # Quintex38
-199.249.223.77:80 orport=443 id=CC4A3AE960E3617F49BF9887B79186C14CBA6813 # QuintexAirVPN4
-199.249.223.41:80 orport=443 id=D25210CE07C49F2A4F2BC7A506EB0F5EA7F5E2C2 # Quintex32
-199.249.223.79:80 orport=443 id=D33292FEDE24DD40F2385283E55C87F85C0943B6 # QuintexAirVPN2
-199.249.224.47:80 orport=443 id=D6FF2697CEA5C0C7DA84797C2E71163814FC2466 # QuintexAirVPN28
-199.249.223.68:80 orport=443 id=DF20497E487A979995D851A5BCEC313DF7E5BC51 # Quintex19
-199.249.223.43:80 orport=443 id=E480D577F58E782A5BC4FA6F49A6650E9389302F # Quintex34
-199.249.224.69:80 orport=443 id=EABC2DD0D47B5DB11F2D37EB3C60C2A4D91C10F2 # Quintex49
-199.249.223.78:80 orport=443 id=EC15DB62D9101481F364DE52EB8313C838BDDC29 # QuintexAirVPN3
-199.249.224.42:80 orport=443 id=F21DE9C7DE31601D9716781E17E24380887883D1 # QuintexAirVPN23
-199.249.223.81:80 orport=443 id=F7447E99EB5CBD4D5EB913EE0E35AC642B5C1EF3 # QuintexAirVPN1
-199.249.224.43:80 orport=443 id=FDD700C791CC6BB0AC1C2099A82CBC367AD4B764 # QuintexAirVPN24
-199.249.224.62:80 orport=443 id=FE00A3A835680E67FBBC895A724E2657BB253E97 # Quintex42
-199.249.223.66:80 orport=443 id=C5A53BCC174EF8FD0DCB223E4AA929FA557DEDB2 # Quintex17
-
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013914.html
-# https://lists.torproject.org/pipermail/tor-relays/2018-January/014063.html
-5.196.23.64:9030 orport=9001 id=775B0FAFDE71AADC23FFC8782B7BEB1D5A92733E # Aerodynamik01
-217.182.75.181:9030 orport=9001 id=EFEACD781604EB80FBC025EDEDEA2D523AEAAA2F # Aerodynamik02
-193.70.43.76:9030 orport=9001 id=484A10BA2B8D48A5F0216674C8DD50EF27BC32F3 # Aerodynamik03
-149.56.141.138:9030 orport=9001 id=1938EBACBB1A7BFA888D9623C90061130E63BB3F # Aerodynamik04
-54.37.73.111:9030 orport=9001 id=92412EA1B9AA887D462B51D816777002F4D58907 # Aerodynamik05
-54.37.17.235:9030 orport=9001 id=360CBA08D1E24F513162047BDB54A1015E531534 # Aerodynamik06
-
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013917.html
-104.200.20.46:80 orport=9001 id=78E2BE744A53631B4AAB781468E94C52AB73968B # bynumlawtor
-
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013929.html
-139.99.130.178:80 orport=443 id=867B95CACD64653FEEC4D2CEFC5C49B4620307A7 # coffswifi2
-
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013946.html
-172.98.193.43:80 orport=443 id=5E56738E7F97AA81DEEF59AF28494293DFBFCCDF # Backplane
-
-# Email sent directly to teor
-62.210.254.132:80 orport=443 id=8456DFA94161CDD99E480C2A2992C366C6564410 # turingmachine
-
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/013960.html
-51.15.205.214:9030 orport=9001 id=8B6556601612F1E2AFCE2A12FFFAF8482A76DD1F ipv6=[2001:bc8:4400:2500::5:b07]:9001 # titania1
-51.15.205.214:9031 orport=9002 id=5E363D72488276160D062DDD2DFA25CFEBAF5EA9 ipv6=[2001:bc8:4400:2500::5:b07]:9002 # titania2
-
-# https://lists.torproject.org/pipermail/tor-relays/2017-December/014000.html
-24.117.231.229:34175 orport=45117 id=CE24412AD69444954B4015E293AE53DDDAFEA3D6 # Anosognosia
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-January/014012.html
-128.31.0.13:80 orport=443 id=A53C46F5B157DD83366D45A8E99A244934A14C46 # csailmitexit
-
-# Email sent directly to teor
-82.247.103.117:110 orport=995 id=C9B3C1661A9577BA24C1C2C6123918921A495509 # Casper01
-109.238.2.79:110 orport=995 id=7520892E3DD133D0B0464D01A158B54B8E2A8B75 # Casper02
-51.15.179.153:110 orport=995 id=BB60F5BA113A0B8B44B7B37DE3567FE561E92F78 # Casper04
-
-# Email sent directly to teor
-80.127.107.179:80 orport=443 id=BC6B2E2F62ACC5EDECBABE64DA1E48F84DD98B78 ipv6=[2001:981:4a22:c::6]:443 # TVISION02
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-January/014020.html
-37.120.174.249:80 orport=443 id=11DF0017A43AF1F08825CD5D973297F81AB00FF3 ipv6=[2a03:4000:6:724c:df98:15f9:b34d:443]:443 # gGDHjdcC6zAlM8k08lX
-
-# These fallbacks opted in in previous releases, then changed their details,
-# and so we blacklisted them. Now we want to whitelist their new details.
-# Assume the details update is permanent.
-85.230.184.93:9030 orport=443 id=855BC2DABE24C861CD887DB9B2E950424B49FC34 # Logforme
-176.31.180.157:143 orport=22 id=E781F4EC69671B3F1864AE2753E0890351506329 ipv6=[2001:41d0:8:eb9d::1]:22 # armbrust
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-January/014024.html
-82.161.212.209:9030 orport=9001 id=4E8CE6F5651E7342C1E7E5ED031E82078134FB0D ipv6=[2001:980:d7ed:1:ff:b0ff:fe00:d0b]:9001 # ymkeo
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-January/014055.html
-37.157.255.35:9030 orport=9090 id=361D33C96D0F161275EE67E2C91EE10B276E778B # cxx4freedom
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-January/014064.html
-87.118.122.120:80 orport=443 id=A2A6616723B511D8E068BB71705191763191F6B2 # otheontelth
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-January/014069.html
-185.100.86.182:9030 orport=8080 id=E51620B90DCB310138ED89EDEDD0A5C361AAE24E # NormalCitizen
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-January/014267.html
-51.15.72.211:80 orport=9001 id=D122094E396DF8BA560843E7B983B0EA649B7DF9 ipv6=[2001:bc8:4700:2300::1b:f09]:9001 # gjtorrelay
-
-# Email sent directly to Phoul
-185.34.33.2:9265 orport=31415 id=D71B1CA1C9DC7E8CA64158E106AD770A21160FEE # lqdn
-
-# Email sent directly to Phoul
-78.156.110.135:9091 orport=9090 id=F48FD1AED068496D51D1384BC7497C04E4985DA6 # SkynetC2
-
-# Email sent directly to Phoul
-5.200.21.144:80 orport=443 id=0C039F35C2E40DCB71CD8A07E97C7FD7787D42D6 # libel
-64.79.152.132:80 orport=443 id=375DCBB2DBD94E5263BC0C015F0C9E756669617E # ebola
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-June/015524.html
-132.248.241.5:9030 orport=9001 id=4661DE96D3F8E923994B05218F23760C8D7935A4
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-June/015522.html
-96.253.78.108:80 orport=442 id=924B24AFA7F075D059E8EEB284CC400B33D3D036
-
-# Email sent directly to Phoul
-163.172.218.10:9030 orport=9001 id=78809B6C50CB6491DB3A72C60EC39DC85BF72D1F ipv6=[2001:bc8:3f23:1100::1]:9001
-163.172.218.10:9130 orport=9101 id=B247BA9E0AEA93E6D7BF4080CFBB964034AF2B28 ipv6=[2001:bc8:3f23:1100::1]:9101
-
-# Email sent directly to Phoul
-158.255.212.178:8080 orport=8443 id=D941D380E5228E7B4D372AF4D484629A96DC48B9 ipv6=[2a03:f80:ed15:158:255:212:178:2]:8443
-
-# Email sent directly to Phoul
-45.79.108.130:9030 orport=9001 id=AEDAC7081AE14B8D241ECF0FF17A2858AB4383D0 ipv6=[2600:3c01:e000:131::8000:0]:9001
-
-# Email sent directly to Phoul
-51.254.147.57:80 orport=443 id=EB80A8D52F07238B576C42CEAB98ADD084EE075E
-217.182.51.248:80 orport=443 id=D6BA940D3255AB40DC5EE5B0B285FA143E1F9865
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-June/015541.html
-195.191.81.7:9030 orport=9001 id=41A3C16269C7B63DB6EB741DBDDB4E1F586B1592 ipv6=[2a00:1908:fffc:ffff:c0a6:ccff:fe62:e1a1]:9001
-51.254.96.208:9030 orport=9001 id=8101421BEFCCF4C271D5483C5AABCAAD245BBB9D ipv6=[2001:41d0:401:3100::30dc]:9001
-163.172.154.162:9030 orport=9001 id=F741E5124CB12700DA946B78C9B2DD175D6CD2A1 ipv6=[2001:bc8:4400:2100::17:419]:9001
-51.15.78.0:9030 orport=9001 id=15BE17C99FACE24470D40AF782D6A9C692AB36D6 ipv6=[2001:bc8:4700:2300::16:c0b]:9001
-54.37.139.118:9030 orport=9001 id=90A5D1355C4B5840E950EB61E673863A6AE3ACA1 ipv6=[2001:41d0:601:1100::1b8]:9001
-51.38.65.160:9030 orport=9001 id=3CB4193EF4E239FCEDC4DC43468E0B0D6B67ACC3 ipv6=[2001:41d0:801:2000::f6e]:9001
-
-# Email sent directly to Phoul
-54.37.138.138:8080 orport=993 id=1576BE143D8727745BB2BCDDF183291B3C3EFEFC
-
-# Email sent directly to Phoul
-67.215.255.140:9030 orport=9001 id=23917BB3F3994BC61F0C9D7AD19B069F9E150D26
-
-# Email sent directly to Phoul
-195.154.105.170:9030 orport=9001 id=E947C029087FA1C3499BEF5D4372947C51223D8F
-
-# Email sent directly to Phoul
-23.129.64.101:80 orport=443 id=2EB20285FE55927B7AECC47BB94F22534FBC3941 ipv6=[2620:18c:0:1001::101]:443
-23.129.64.102:80 orport=443 id=CA9739E2805A3CD73CF75BBCB6785C32394240E3 ipv6=[2620:18c:0:1001::102]:443
-23.129.64.103:80 orport=443 id=8ED84B53BD9556CCBB036073A1AD508EC27CBE52 ipv6=[2620:18c:0:1001::103]:443
-
-# Email sent directly to Phoul
-37.139.8.104:9030 orport=9001 id=7088D485934E8A403B81531F8C90BDC75FA43C98 ipv6=[2a03:b0c0:0:1010::24c:1001]:9001
-
-# Email sent directly to Phoul
-178.254.7.88:9030 orport=9001 id=85A885433E50B1874F11CEC9BE98451E24660976
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-August/015869.html
-5.45.111.149:80 orport=443 id=D405FCCF06ADEDF898DF2F29C9348DCB623031BA ipv6=[2a03:4000:6:2388:df98:15f9:b34d:443]:443
-
-# https://trac.torproject.org/projects/tor/ticket/27297
-37.252.185.182:9030 orport=8080 id=113143469021882C3A4B82F084F8125B08EE471E ipv6=[2a00:63c1:a:182::2]:8080
-
-# Email sent directly to Phoul
-139.99.130.178:80 orport=443 id=867B95CACD64653FEEC4D2CEFC5C49B4620307A7
-
-# Email sent directly to Phoul
-104.131.11.214:9030 orport=8080 id=32828476F4F84E15C42B4C360A5CD8DE4C3C2BE7
-
-# Email sent directly to Phoul / Teor
-178.175.139.122:80 orport=443 id=490FB3FAAF8837407D94CA7E1DEF025DEF0F3516 ipv6=[2a00:1dc0:3002::3]:443
-
-# Email sent directly to Phoul
-192.42.116.16:80 orport=443 id=81B75D534F91BFB7C57AB67DA10BCEF622582AE8
-
-# https://lists.torproject.org/pipermail/tor-relays/2018-November/016610.html
-24.117.194.80:80 orport=443 id=B6C4C9A43658F686F8892CA5666717532F72979C
diff --git a/scripts/maint/generateFallbackDirLine.py b/scripts/maint/generateFallbackDirLine.py
deleted file mode 100755
index b856c938b..000000000
--- a/scripts/maint/generateFallbackDirLine.py
+++ /dev/null
@@ -1,38 +0,0 @@
-#!/usr/bin/env python
-
-# Generate a fallback directory whitelist/blacklist line for every fingerprint
-# passed as an argument.
-#
-# Usage:
-# generateFallbackDirLine.py fingerprint ...
-
-import sys
-import urllib2
-
-import stem.descriptor.remote
-import stem.util.tor_tools
-
-if len(sys.argv) <= 1:
-  print('Usage: %s fingerprint ...' % sys.argv[0])
-  sys.exit(1)
-
-for fingerprint in sys.argv[1:]:
-  if not stem.util.tor_tools.is_valid_fingerprint(fingerprint):
-    print("'%s' isn't a valid relay fingerprint" % fingerprint)
-    sys.exit(1)
-
-  try:
-    desc = stem.descriptor.remote.get_server_descriptors(fingerprint).run()[0]
-  except urllib2.HTTPError as exc:
-    if exc.code == 404:
-      print('# %s not found in recent descriptors' % fingerprint)
-      continue
-    else:
-      raise
-
-  if not desc.dir_port:
-    print("# %s needs a DirPort" % fingerprint)
-  else:
-    ipv6_addresses = [(address, port) for address, port, is_ipv6 in desc.or_addresses if is_ipv6]
-    ipv6_field = ' ipv6=[%s]:%s' % ipv6_addresses[0] if ipv6_addresses else ''
-    print('%s:%s orport=%s id=%s%s # %s' % (desc.address, desc.dir_port, desc.or_port, fingerprint, ipv6_field, desc.nickname))
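When the relay has a DirPort, the final print above emits a whitelist line in the same format as the entries earlier in this commit; with made-up values it would look like:

192.0.2.10:80 orport=443 id=0123456789ABCDEF0123456789ABCDEF01234567 ipv6=[2001:db8::10]:443 # ExampleRelay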
diff --git a/scripts/maint/lookupFallbackDirContact.py b/scripts/maint/lookupFallbackDirContact.py
deleted file mode 100755
index 14c53d128..000000000
--- a/scripts/maint/lookupFallbackDirContact.py
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/usr/bin/env python
-
-# Lookup fallback directory contact lines for every fingerprint passed as an
-# argument.
-#
-# Usage:
-# lookupFallbackDirContact.py fingerprint ...
-
-import sys
-
-import stem.descriptor.remote as remote
-
-if len(sys.argv) <= 1:
-  print "Usage: {} fingerprint ...".format(sys.argv[0])
-  sys.exit(-1)
-
-# we need descriptors, because the consensus does not have contact infos
-descriptor_list = remote.get_server_descriptors(fingerprints=sys.argv[1:]).run()
-
-descriptor_list_fingerprints = []
-for d in descriptor_list:
-  assert d.fingerprint in sys.argv[1:]
-  descriptor_list_fingerprints.append(d.fingerprint)
-  print "{} {}".format(d.fingerprint, d.contact)
-
-for fingerprint in sys.argv[1:]:
-  if fingerprint not in descriptor_list_fingerprints:
-    print "{} not found in current descriptors".format(fingerprint)
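The output is one "fingerprint contact" line per descriptor found; with made-up values:

0123456789ABCDEF0123456789ABCDEF01234567 Example Operator <tor AT example DOT org>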
diff --git a/scripts/maint/updateFallbackDirs.py b/scripts/maint/updateFallbackDirs.py
deleted file mode 100755
index 930a0a727..000000000
--- a/scripts/maint/updateFallbackDirs.py
+++ /dev/null
@@ -1,2383 +0,0 @@
-#!/usr/bin/env python
-
-# Usage:
-#
-# Regenerate the list:
-# scripts/maint/updateFallbackDirs.py > src/app/config/fallback_dirs.inc 2> fallback_dirs.log
-#
-# Check the existing list:
-# scripts/maint/updateFallbackDirs.py check_existing > fallback_dirs.inc.ok 2> fallback_dirs.log
-# mv fallback_dirs.inc.ok src/app/config/fallback_dirs.inc
-#
-# This script should be run from a stable, reliable network connection,
-# with no other network activity (and not over tor).
-# If this is not possible, please disable:
-# PERFORM_IPV4_DIRPORT_CHECKS and PERFORM_IPV6_DIRPORT_CHECKS
-#
-# Needs dateutil, stem, and potentially other python packages.
-# Optionally uses ipaddress (python 3 builtin) or py2-ipaddress (package)
-# for netblock analysis.
-#
-# After running this script, read the logs to make sure the fallbacks aren't
-# dominated by a single netblock or port.
-
-# Script by weasel, April 2015
-# Portions by gsathya & karsten, 2013
-# https://trac.torproject.org/projects/tor/attachment/ticket/8374/dir_list.2.py
-# Modifications by teor, 2015
-
-import StringIO
-import string
-import re
-import datetime
-import gzip
-import os.path
-import json
-import math
-import sys
-import urllib
-import urllib2
-import hashlib
-import dateutil.parser
-import copy
-
-from stem.descriptor import DocumentHandler
-from stem.descriptor.remote import get_consensus, get_server_descriptors, MAX_FINGERPRINTS
-
-import logging
-logging.root.name = ''
-
-HAVE_IPADDRESS = False
-try:
-  # python 3 builtin, or install package py2-ipaddress
-  # there are several ipaddress implementations for python 2
-  # with slightly different semantics with str typed text
-  # fortunately, all our IP addresses are in unicode
-  import ipaddress
-  HAVE_IPADDRESS = True
-except ImportError:
-  # if this happens, we avoid doing netblock analysis
-  logging.warning('Unable to import ipaddress, please install py2-ipaddress.' +
-                  ' A fallback list will be created, but optional netblock' +
-                  ' analysis will not be performed.')
-
-## Top-Level Configuration
-
-# We use semantic versioning: https://semver.org
-# In particular:
-# * major changes include removing a mandatory field, or anything else that
-#   would break an appropriately tolerant parser,
-# * minor changes include adding a field,
-# * patch changes include changing header comments or other unstructured
-#   content
-FALLBACK_FORMAT_VERSION = '2.0.0'
-SECTION_SEPARATOR_BASE = '====='
-SECTION_SEPARATOR_COMMENT = '/* ' + SECTION_SEPARATOR_BASE + ' */'
-
-# Output all candidate fallbacks, or only output selected fallbacks?
-OUTPUT_CANDIDATES = False
-
-# Perform DirPort checks over IPv4?
-# Change this to False if IPv4 doesn't work for you, or if you don't want to
-# download a consensus for each fallback
-# Don't check ~1000 candidates when OUTPUT_CANDIDATES is True
-PERFORM_IPV4_DIRPORT_CHECKS = False if OUTPUT_CANDIDATES else True
-
-# Perform DirPort checks over IPv6?
-# If you know IPv6 works for you, set this to True
-# This will exclude IPv6 relays without an IPv6 DirPort configured
-# So it's best left at False until #18394 is implemented
-# Don't check ~1000 candidates when OUTPUT_CANDIDATES is True
-PERFORM_IPV6_DIRPORT_CHECKS = False if OUTPUT_CANDIDATES else False
-
-# Must relays be running now?
-MUST_BE_RUNNING_NOW = (PERFORM_IPV4_DIRPORT_CHECKS
-                       or PERFORM_IPV6_DIRPORT_CHECKS)
-
-# Clients have been using microdesc consensuses by default for a while now
-DOWNLOAD_MICRODESC_CONSENSUS = True
-
-# If a relay delivers an invalid consensus, if it will become valid less than
-# this many seconds in the future, or expired less than this many seconds ago,
-# accept the relay as a fallback. For the consensus expiry check to be
-# accurate, the machine running this script needs an accurate clock.
-#
-# Relays on 0.3.0 and later return a 404 when they are about to serve a
-# consensus that expired more than 24 hours ago. 0.2.9 and earlier relays
-# will serve consensuses that are very old.
-#
-# Relays on 0.3.5.6-rc? and later return a 404 when they are about to serve a
-# consensus that will become valid more than 24 hours in the future. Older
-# relays don't serve future consensuses.
-#
-# A 404 makes relays fail the download check. We use a tolerance of 24 hours,
-# so that 0.2.9 relays also fail the download check if they serve a consensus
-# that is not reasonably live.
-#
-# REASONABLY_LIVE_TIME should never be more than Tor's REASONABLY_LIVE_TIME,
-# (24 hours), because clients reject consensuses that are older than that.
-# Clients on 0.3.5.5-alpha? and earlier also won't select guards from
-# consensuses that have expired, but can bootstrap if they already have guards
-# in their state file.
-REASONABLY_LIVE_TIME = 24*60*60
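A minimal standalone sketch of that tolerance check, assuming the consensus valid-after and valid-until timestamps are available as datetimes (this helper is not part of the removed script):

import datetime

REASONABLY_LIVE_TIME = 24*60*60

def consensus_is_reasonably_live(valid_after, valid_until, now=None):
  # Accept a consensus that becomes valid less than 24 hours from now,
  # or that expired less than 24 hours ago.
  now = now if now is not None else datetime.datetime.utcnow()
  tolerance = datetime.timedelta(seconds=REASONABLY_LIVE_TIME)
  return (valid_after - tolerance) <= now <= (valid_until + tolerance)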
-
-# Output fallback name, flags, bandwidth, and ContactInfo in a C comment?
-OUTPUT_COMMENTS = True if OUTPUT_CANDIDATES else False
-
-# Output matching ContactInfo in fallbacks list?
-# Useful if you're trying to contact operators
-CONTACT_COUNT = True if OUTPUT_CANDIDATES else False
-
-# How the list should be sorted:
-# fingerprint: is useful for stable diffs of fallback lists
-# measured_bandwidth: is useful when pruning the list based on bandwidth
-# contact: is useful for contacting operators once the list has been pruned
-OUTPUT_SORT_FIELD = 'contact' if OUTPUT_CANDIDATES else 'fingerprint'
-
-## OnionOO Settings
-
-ONIONOO = 'https://onionoo.torproject.org/'
-#ONIONOO = 'https://onionoo.thecthulhu.com/'
-
-# Don't bother going out to the Internet, just use the files available locally,
-# even if they're very old
-LOCAL_FILES_ONLY = False
-
-## Whitelist / Blacklist Filter Settings
-
-# The whitelist contains entries that are included if all attributes match
-# (IPv4, dirport, orport, id, and optionally IPv6 and IPv6 orport)
-
-# What happens to entries not in whitelist?
-# When True, they are included, when False, they are excluded
-INCLUDE_UNLISTED_ENTRIES = True if OUTPUT_CANDIDATES else False
-
-WHITELIST_FILE_NAME = 'scripts/maint/fallback.whitelist'
-FALLBACK_FILE_NAME  = 'src/app/config/fallback_dirs.inc'
-
-# The number of bytes we'll read from a filter file before giving up
-MAX_LIST_FILE_SIZE = 1024 * 1024
-
-## Eligibility Settings
-
-# Require fallbacks to have the same address and port for a set amount of time
-# We used to have this at 1 week, but that caused many fallback failures, which
-# meant that we had to rebuild the list more often. We want fallbacks to be
-# stable for 2 years, so we set it to a few months.
-#
-# If a relay changes address or port, that's it, it's not useful any more,
-# because clients can't find it
-ADDRESS_AND_PORT_STABLE_DAYS = 90
-# We ignore relays that have been down for more than this period
-MAX_DOWNTIME_DAYS = 0 if MUST_BE_RUNNING_NOW else 7
-# FallbackDirs must have a time-weighted-fraction that is greater than or
-# equal to:
-# Mirrors that are down half the time are still useful half the time
-CUTOFF_RUNNING = .50
-CUTOFF_V2DIR = .50
-# Guard flags are removed for some time after a relay restarts, so we ignore
-# the guard flag.
-CUTOFF_GUARD = .00
-# FallbackDirs must have a time-weighted-fraction that is less than or equal
-# to:
-# .00 means no bad exits
-PERMITTED_BADEXIT = .00
-
-# older entries' weights are adjusted with ALPHA^(age in days)
-AGE_ALPHA = 0.99
-
-# this factor is used to scale OnionOO entries to [0,1]
-ONIONOO_SCALE_ONE = 999.
-
-## Fallback Count Limits
-
-# The target for these parameters is 20% of the guards in the network
-# This is around 200 as of October 2015
-_FB_POG = 0.2
-FALLBACK_PROPORTION_OF_GUARDS = None if OUTPUT_CANDIDATES else _FB_POG
-
-# Limit the number of fallbacks (eliminating lowest by advertised bandwidth)
-MAX_FALLBACK_COUNT = None if OUTPUT_CANDIDATES else 200
-# Emit a C #error if the number of fallbacks is less than expected
-MIN_FALLBACK_COUNT = 0 if OUTPUT_CANDIDATES else MAX_FALLBACK_COUNT*0.5
-
-# The maximum number of fallbacks on the same address, contact, or family
-#
-# With 150 fallbacks, this means each operator sees 5% of client bootstraps.
-# For comparison:
-#  - We try to limit guard and exit operators to 5% of the network
-#  - The directory authorities used to see 11% of client bootstraps each
-#
-# We also don't want too much of the list to go down if a single operator
-# has to move all their relays.
-MAX_FALLBACKS_PER_IP = 1
-MAX_FALLBACKS_PER_IPV4 = MAX_FALLBACKS_PER_IP
-MAX_FALLBACKS_PER_IPV6 = MAX_FALLBACKS_PER_IP
-MAX_FALLBACKS_PER_CONTACT = 7
-MAX_FALLBACKS_PER_FAMILY = 7
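A back-of-the-envelope check of the 5% figure above, using only the numbers from this comment (illustrative, not part of the script):

MAX_FALLBACKS_PER_CONTACT = 7
assumed_fallback_count = 150   # the list size assumed in the comment above
share = MAX_FALLBACKS_PER_CONTACT / float(assumed_fallback_count)
print('%.1f%% of client bootstraps per operator' % (share * 100))   # ~4.7%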
-
-## Fallback Bandwidth Requirements
-
-# Any fallback with the Exit flag has its bandwidth multiplied by this fraction
-# to make sure we aren't further overloading exits
-# (Set to 1.0, because we asked that only lightly loaded exits opt-in,
-# and the extra load really isn't that much for large relays.)
-EXIT_BANDWIDTH_FRACTION = 1.0
-
-# If a single fallback's bandwidth is too low, it's pointless adding it
-# We expect fallbacks to handle an extra 10 kilobytes per second of traffic
-# Make sure they can support fifty times the expected extra load
-#
-# We convert this to a consensus weight before applying the filter,
-# because all the bandwidth amounts are specified by the relay
-MIN_BANDWIDTH = 50.0 * 10.0 * 1024.0
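Spelled out, that floor works out to (same numbers as the comment above, for illustration):

expected_extra_load = 10.0 * 1024.0          # 10 KiB/s of extra client traffic
safety_factor = 50.0                         # support fifty times that load
print(safety_factor * expected_extra_load)   # 512000.0 bytes/s, i.e. 500 KiB/s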
-
-# Clients will time out after 30 seconds trying to download a consensus
-# So allow fallback directories half that to deliver a consensus
-# The exact download times might change based on the network connection
-# running this script, but only by a few seconds
-# There is also about a second of python overhead
-CONSENSUS_DOWNLOAD_SPEED_MAX = 15.0
-# If the relay fails a consensus check, retry the download
-# This avoids delisting a relay due to transient network conditions
-CONSENSUS_DOWNLOAD_RETRY = True
-
-## Parsing Functions
-
-def parse_ts(t):
-  return datetime.datetime.strptime(t, "%Y-%m-%d %H:%M:%S")
-
-def remove_bad_chars(raw_string, bad_char_list):
-  # Remove each character in the bad_char_list
-  cleansed_string = raw_string
-  for c in bad_char_list:
-    cleansed_string = cleansed_string.replace(c, '')
-  return cleansed_string
-
-def cleanse_unprintable(raw_string):
-  # Remove all unprintable characters
-  cleansed_string = ''
-  for c in raw_string:
-    if c in string.printable:
-      cleansed_string += c
-  return cleansed_string
-
-def cleanse_whitespace(raw_string):
-  # Replace all whitespace characters with a space
-  cleansed_string = raw_string
-  for c in string.whitespace:
-    cleansed_string = cleansed_string.replace(c, ' ')
-  return cleansed_string
-
-def cleanse_c_multiline_comment(raw_string):
-  cleansed_string = raw_string
-  # Embedded newlines should be removed by tor/onionoo, but let's be paranoid
-  cleansed_string = cleanse_whitespace(cleansed_string)
-  # ContactInfo and Version can be arbitrary binary data
-  cleansed_string = cleanse_unprintable(cleansed_string)
-  # Prevent a malicious / unanticipated string from breaking out
-  # of a C-style multiline comment
-  # This removes '/*' and '*/' and '//'
-  bad_char_list = '*/'
-  # Prevent a malicious string from using C nulls
-  bad_char_list += '\0'
-  # Avoid confusing parsers by making sure there is only one comma per fallback
-  bad_char_list += ','
-  # Avoid confusing parsers by making sure there is only one equals per field
-  bad_char_list += '='
-  # Be safer by removing bad characters entirely
-  cleansed_string = remove_bad_chars(cleansed_string, bad_char_list)
-  # Some compilers may further process the content of comments
-  # There isn't much we can do to cover every possible case
-  # But comment-based directives are typically only advisory
-  return cleansed_string
-
-def cleanse_c_string(raw_string):
-  cleansed_string = raw_string
-  # Embedded newlines should be removed by tor/onionoo, but let's be paranoid
-  cleansed_string = cleanse_whitespace(cleansed_string)
-  # ContactInfo and Version can be arbitrary binary data
-  cleansed_string = cleanse_unprintable(cleansed_string)
-  # Prevent a malicious address/fingerprint string from breaking out
-  # of a C-style string
-  bad_char_list = '"'
-  # Prevent a malicious string from using escapes
-  bad_char_list += '\\'
-  # Prevent a malicious string from using C nulls
-  bad_char_list += '\0'
-  # Avoid confusing parsers by making sure there is only one comma per fallback
-  bad_char_list += ','
-  # Avoid confusing parsers by making sure there is only one equals per field
-  bad_char_list += '='
-  # Be safer by removing bad characters entirely
-  cleansed_string = remove_bad_chars(cleansed_string, bad_char_list)
-  # Some compilers may further process the content of strings
-  # There isn't much we can do to cover every possible case
-  # But this typically only results in changes to the string data
-  return cleansed_string
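As a rough illustration of what the two helpers above strip, with made-up ContactInfo-style strings:

print(cleanse_c_multiline_comment('end comment */ here'))   # 'end comment  here'
print(cleanse_c_string('say "hi", ok=1'))                   # 'say hi ok1'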
-
-## OnionOO Source Functions
-
-# a dictionary of source metadata for each onionoo query we've made
-fetch_source = {}
-
-# register source metadata for 'what'
-# assumes we only retrieve one document for each 'what'
-def register_fetch_source(what, url, relays_published, version):
-  fetch_source[what] = {}
-  fetch_source[what]['url'] = url
-  fetch_source[what]['relays_published'] = relays_published
-  fetch_source[what]['version'] = version
-
-# list each registered source's 'what'
-def fetch_source_list():
-  return sorted(fetch_source.keys())
-
-# given 'what', provide a multiline C comment describing the source
-def describe_fetch_source(what):
-  desc = '/*'
-  desc += '\n'
-  desc += 'Onionoo Source: '
-  desc += cleanse_c_multiline_comment(what)
-  desc += ' Date: '
-  desc += cleanse_c_multiline_comment(fetch_source[what]['relays_published'])
-  desc += ' Version: '
-  desc += cleanse_c_multiline_comment(fetch_source[what]['version'])
-  desc += '\n'
-  desc += 'URL: '
-  desc += cleanse_c_multiline_comment(fetch_source[what]['url'])
-  desc += '\n'
-  desc += '*/'
-  return desc
-
-## File Processing Functions
-
-def write_to_file(str, file_name, max_len):
-  try:
-    with open(file_name, 'w') as f:
-      f.write(str[0:max_len])
-  except EnvironmentError, error:
-    logging.error('Writing file %s failed: %d: %s'%
-                  (file_name,
-                   error.errno,
-                   error.strerror)
-                  )
-
-def read_from_file(file_name, max_len):
-  try:
-    if os.path.isfile(file_name):
-      with open(file_name, 'r') as f:
-        return f.read(max_len)
-  except EnvironmentError, error:
-    logging.info('Loading file %s failed: %d: %s'%
-                 (file_name,
-                  error.errno,
-                  error.strerror)
-                 )
-  return None
-
-def parse_fallback_file(file_name):
-  file_data = read_from_file(file_name, MAX_LIST_FILE_SIZE)
-  file_data = cleanse_unprintable(file_data)
-  file_data = remove_bad_chars(file_data, '\n"\0')
-  file_data = re.sub('/\*.*?\*/', '', file_data)
-  file_data = file_data.replace(',', '\n')
-  file_data = file_data.replace(' weight=10', '')
-  return file_data
-
-def load_possibly_compressed_response_json(response):
-    if response.info().get('Content-Encoding') == 'gzip':
-      buf = StringIO.StringIO( response.read() )
-      f = gzip.GzipFile(fileobj=buf)
-      return json.load(f)
-    else:
-      return json.load(response)
-
-def load_json_from_file(json_file_name):
-    # An exception here may be resolved by deleting the .last_modified
-    # and .json files, and re-running the script
-    try:
-      with open(json_file_name, 'r') as f:
-        return json.load(f)
-    except EnvironmentError, error:
-      raise Exception('Reading not-modified json file %s failed: %d: %s'%
-                    (json_file_name,
-                     error.errno,
-                     error.strerror)
-                    )
-
-## OnionOO Functions
-
-def datestr_to_datetime(datestr):
-  # Parse datetimes like: Fri, 02 Oct 2015 13:34:14 GMT
-  if datestr is not None:
-    dt = dateutil.parser.parse(datestr)
-  else:
-    # Never modified - use start of epoch
-    dt = datetime.datetime.utcfromtimestamp(0)
-  # strip any timezone out (in case they're supported in future)
-  dt = dt.replace(tzinfo=None)
-  return dt
-
-def onionoo_fetch(what, **kwargs):
-  params = kwargs
-  params['type'] = 'relay'
-  #params['limit'] = 10
-  params['first_seen_days'] = '%d-'%(ADDRESS_AND_PORT_STABLE_DAYS)
-  params['last_seen_days'] = '-%d'%(MAX_DOWNTIME_DAYS)
-  params['flag'] = 'V2Dir'
-  url = ONIONOO + what + '?' + urllib.urlencode(params)
-
-  # Unfortunately, the URL is too long for some OS filenames,
-  # but we still don't want to get files from different URLs mixed up
-  base_file_name = what + '-' + hashlib.sha1(url).hexdigest()
-
-  full_url_file_name = base_file_name + '.full_url'
-  MAX_FULL_URL_LENGTH = 1024
-
-  last_modified_file_name = base_file_name + '.last_modified'
-  MAX_LAST_MODIFIED_LENGTH = 64
-
-  json_file_name = base_file_name + '.json'
-
-  if LOCAL_FILES_ONLY:
-    # Read from the local file, don't write to anything
-    response_json = load_json_from_file(json_file_name)
-  else:
-    # store the full URL to a file for debugging
-    # no need to compare as long as you trust SHA-1
-    write_to_file(url, full_url_file_name, MAX_FULL_URL_LENGTH)
-
-    request = urllib2.Request(url)
-    request.add_header('Accept-encoding', 'gzip')
-
-    # load the last modified date from the file, if it exists
-    last_mod_date = read_from_file(last_modified_file_name,
-                                   MAX_LAST_MODIFIED_LENGTH)
-    if last_mod_date is not None:
-      request.add_header('If-modified-since', last_mod_date)
-
-    # Parse last modified date
-    last_mod = datestr_to_datetime(last_mod_date)
-
-    # Not Modified and still recent enough to be useful
-    # Onionoo / Globe used to use 6 hours, but we can afford a day
-    required_freshness = datetime.datetime.utcnow()
-    # strip any timezone out (to match dateutil.parser)
-    required_freshness = required_freshness.replace(tzinfo=None)
-    required_freshness -= datetime.timedelta(hours=24)
-
-    # Make the OnionOO request
-    response_code = 0
-    try:
-      response = urllib2.urlopen(request)
-      response_code = response.getcode()
-    except urllib2.HTTPError, error:
-      response_code = error.code
-      if response_code == 304: # not modified
-        pass
-      else:
-        raise Exception("Could not get " + url + ": "
-                        + str(error.code) + ": " + error.reason)
-
-    if response_code == 200: # OK
-      last_mod = datestr_to_datetime(response.info().get('Last-Modified'))
-
-    # Check for freshness
-    if last_mod < required_freshness:
-      if last_mod_date is not None:
-        # This check sometimes fails transiently, retry the script if it does
-        date_message = "Outdated data: last updated " + last_mod_date
-      else:
-        date_message = "No data: never downloaded "
-      raise Exception(date_message + " from " + url)
-
-    # Process the data
-    if response_code == 200: # OK
-
-      response_json = load_possibly_compressed_response_json(response)
-
-      with open(json_file_name, 'w') as f:
-        # use the most compact json representation to save space
-        json.dump(response_json, f, separators=(',',':'))
-
-      # store the last modified date in its own file
-      if response.info().get('Last-modified') is not None:
-        write_to_file(response.info().get('Last-Modified'),
-                      last_modified_file_name,
-                      MAX_LAST_MODIFIED_LENGTH)
-
-    elif response_code == 304: # Not Modified
-
-      response_json = load_json_from_file(json_file_name)
-
-    else: # Unexpected HTTP response code not covered in the HTTPError above
-      raise Exception("Unexpected HTTP response code to " + url + ": "
-                      + str(response_code))
-
-  register_fetch_source(what,
-                        url,
-                        response_json['relays_published'],
-                        response_json['version'])
-
-  return response_json
-
-def fetch(what, **kwargs):
-  #x = onionoo_fetch(what, **kwargs)
-  # don't use sort_keys, as the order of or_addresses is significant
-  #print json.dumps(x, indent=4, separators=(',', ': '))
-  #sys.exit(0)
-
-  return onionoo_fetch(what, **kwargs)
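A usage sketch with hypothetical parameters (the script's real queries are built elsewhere in the file): any keyword arguments become extra Onionoo query parameters, on top of the type, first_seen_days, last_seen_days, and flag parameters that onionoo_fetch() always sets.

details = fetch('details', fields='fingerprint,nickname,or_addresses')
print(details['relays_published'])   # publication timestamp reported by Onionoo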
-
-## Fallback Candidate Class
-
-class Candidate(object):
-  CUTOFF_ADDRESS_AND_PORT_STABLE = (datetime.datetime.utcnow()
-                            - datetime.timedelta(ADDRESS_AND_PORT_STABLE_DAYS))
-
-  def __init__(self, details):
-    for f in ['fingerprint', 'nickname', 'last_changed_address_or_port',
-              'consensus_weight', 'or_addresses', 'dir_address']:
-      if not f in details: raise Exception("Document has no %s field."%(f,))
-
-    if not 'contact' in details:
-      details['contact'] = None
-    if not 'flags' in details or details['flags'] is None:
-      details['flags'] = []
-    if (not 'advertised_bandwidth' in details
-        or details['advertised_bandwidth'] is None):
-      # relays without advertised bandwidth have it calculated from their
-      # consensus weight
-      details['advertised_bandwidth'] = 0
-    if (not 'effective_family' in details
-        or details['effective_family'] is None):
-      details['effective_family'] = []
-    if not 'platform' in details:
-      details['platform'] = None
-    details['last_changed_address_or_port'] = parse_ts(
-                                      details['last_changed_address_or_port'])
-    self._data = details
-    self._stable_sort_or_addresses()
-
-    self._fpr = self._data['fingerprint']
-    self._running = self._guard = self._v2dir = 0.
-    self._split_dirport()
-    self._compute_orport()
-    if self.orport is None:
-      raise Exception("Failed to get an orport for %s."%(self._fpr,))
-    self._compute_ipv6addr()
-    if not self.has_ipv6():
-      logging.debug("Failed to get an ipv6 address for %s."%(self._fpr,))
-    self._compute_version()
-    self._extra_info_cache = None
-
-  def _stable_sort_or_addresses(self):
-    # replace self._data['or_addresses'] with a stable ordering,
-    # sorting the secondary addresses in string order
-    # leave the received order in self._data['or_addresses_raw']
-    self._data['or_addresses_raw'] = self._data['or_addresses']
-    or_address_primary = self._data['or_addresses'][:1]
-    # subsequent entries in the or_addresses array are in an arbitrary order
-    # so we stabilise the addresses by sorting them in string order
-    or_addresses_secondaries_stable = sorted(self._data['or_addresses'][1:])
-    or_addresses_stable = or_address_primary + or_addresses_secondaries_stable
-    self._data['or_addresses'] = or_addresses_stable
-
-  def get_fingerprint(self):
-    return self._fpr
-
-  # is_valid_ipv[46]_address by gsathya, karsten, 2013
-  @staticmethod
-  def is_valid_ipv4_address(address):
-    if not isinstance(address, (str, unicode)):
-      return False
-
-    # check if there are four period separated values
-    if address.count(".") != 3:
-      return False
-
-    # checks that each value in the octet are decimal values between 0-255
-    for entry in address.split("."):
-      if not entry.isdigit() or int(entry) < 0 or int(entry) > 255:
-        return False
-      elif entry[0] == "0" and len(entry) > 1:
-        return False  # leading zeros, for instance in "1.2.3.001"
-
-    return True
-
-  @staticmethod
-  def is_valid_ipv6_address(address):
-    if not isinstance(address, (str, unicode)):
-      return False
-
-    # remove brackets
-    address = address[1:-1]
-
-    # addresses are made up of eight colon separated groups of four hex digits
-    # with leading zeros being optional
-    # https://en.wikipedia.org/wiki/IPv6#Address_format
-
-    colon_count = address.count(":")
-
-    if colon_count > 7:
-      return False  # too many groups
-    elif colon_count != 7 and not "::" in address:
-      return False  # not enough groups and none are collapsed
-    elif address.count("::") > 1 or ":::" in address:
-      return False  # multiple groupings of zeros can't be collapsed
-
-    found_ipv4_on_previous_entry = False
-    for entry in address.split(":"):
-      # If an IPv6 address has an embedded IPv4 address,
-      # it must be the last entry
-      if found_ipv4_on_previous_entry:
-        return False
-      if not re.match("^[0-9a-fA-F]{0,4}$", entry):
-        if not Candidate.is_valid_ipv4_address(entry):
-          return False
-        else:
-          found_ipv4_on_previous_entry = True
-
-    return True
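A few illustrative checks one could run against the two validators above once the class is defined (made-up addresses; IPv6 strings keep their brackets):

assert Candidate.is_valid_ipv4_address(u'192.0.2.1')
assert not Candidate.is_valid_ipv4_address(u'192.0.2.001')      # leading zero
assert Candidate.is_valid_ipv6_address(u'[2001:db8::1]')
assert not Candidate.is_valid_ipv6_address(u'[2001:db8:::1]')   # bad '::' collapse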
-
-  def _split_dirport(self):
-    # Split the dir_address into dirip and dirport
-    (self.dirip, _dirport) = self._data['dir_address'].split(':', 2)
-    self.dirport = int(_dirport)
-
-  def _compute_orport(self):
-    # Choose the first ORPort that's on the same IPv4 address as the DirPort.
-    # In rare circumstances, this might not be the primary ORPort address.
-    # However, _stable_sort_or_addresses() ensures we choose the same one
-    # every time, even if onionoo changes the order of the secondaries.
-    self._split_dirport()
-    self.orport = None
-    for i in self._data['or_addresses']:
-      if i != self._data['or_addresses'][0]:
-        logging.debug('Secondary IPv4 Address Used for %s: %s'%(self._fpr, i))
-      (ipaddr, port) = i.rsplit(':', 1)
-      if (ipaddr == self.dirip) and Candidate.is_valid_ipv4_address(ipaddr):
-        self.orport = int(port)
-        return
-
-  def _compute_ipv6addr(self):
-    # Choose the first IPv6 address that uses the same port as the ORPort
-    # Or, choose the first IPv6 address in the list
-    # _stable_sort_or_addresses() ensures we choose the same IPv6 address
-    # every time, even if onionoo changes the order of the secondaries.
-    self.ipv6addr = None
-    self.ipv6orport = None
-    # Choose the first IPv6 address that uses the same port as the ORPort
-    for i in self._data['or_addresses']:
-      (ipaddr, port) = i.rsplit(':', 1)
-      if (int(port) == self.orport) and Candidate.is_valid_ipv6_address(ipaddr):
-        self.ipv6addr = ipaddr
-        self.ipv6orport = int(port)
-        return
-    # Choose the first IPv6 address in the list
-    for i in self._data['or_addresses']:
-      (ipaddr, port) = i.rsplit(':', 1)
-      if Candidate.is_valid_ipv6_address(ipaddr):
-        self.ipv6addr = ipaddr
-        self.ipv6orport = int(port)
-        return
-
-  def _compute_version(self):
-    # parse the version out of the platform string
-    # The platform looks like: "Tor 0.2.7.6 on Linux"
-    self._data['version'] = None
-    if self._data['platform'] is None:
-      return
-    # be tolerant of weird whitespacing, use a whitespace split
-    tokens = self._data['platform'].split()
-    for token in tokens:
-      vnums = token.split('.')
-      # if it's at least a.b.c.d, with potentially an -alpha-dev, -alpha, -rc
-      if (len(vnums) >= 4 and vnums[0].isdigit() and vnums[1].isdigit() and
-          vnums[2].isdigit()):
-        self._data['version'] = token
-        return
-
-  # From #20509
-  # bug #20499 affects versions from 0.2.9.1-alpha-dev to 0.2.9.4-alpha-dev
-  # and version 0.3.0.0-alpha-dev
-  # Exhaustive lists are hard to get wrong
-  STALE_CONSENSUS_VERSIONS = ['0.2.9.1-alpha-dev',
-                              '0.2.9.2-alpha',
-                              '0.2.9.2-alpha-dev',
-                              '0.2.9.3-alpha',
-                              '0.2.9.3-alpha-dev',
-                              '0.2.9.4-alpha',
-                              '0.2.9.4-alpha-dev',
-                              '0.3.0.0-alpha-dev'
-                              ]
-
-  def is_valid_version(self):
-    # call _compute_version before calling this
-    # is the version of the relay a version we want as a fallback?
-    # checks both recommended versions and bug #20499 / #20509
-    #
-    # if the relay doesn't have a recommended version field, exclude the relay
-    if not self._data.has_key('recommended_version'):
-      log_excluded('%s not a candidate: no recommended_version field',
-                   self._fpr)
-      return False
-    if not self._data['recommended_version']:
-      log_excluded('%s not a candidate: version not recommended', self._fpr)
-      return False
-    # if the relay doesn't have version field, exclude the relay
-    if not self._data.has_key('version'):
-      log_excluded('%s not a candidate: no version field', self._fpr)
-      return False
-    if self._data['version'] in Candidate.STALE_CONSENSUS_VERSIONS:
-      logging.warning('%s not a candidate: version delivers stale consensuses',
-                      self._fpr)
-      return False
-    return True
-
-  @staticmethod
-  def _extract_generic_history(history, which='unknown'):
-    # given a tree like this:
-    #   {
-    #     "1_month": {
-    #         "count": 187,
-    #         "factor": 0.001001001001001001,
-    #         "first": "2015-02-27 06:00:00",
-    #         "interval": 14400,
-    #         "last": "2015-03-30 06:00:00",
-    #         "values": [
-    #             999,
-    #             999
-    #         ]
-    #     },
-    #     "1_week": {
-    #         "count": 169,
-    #         "factor": 0.001001001001001001,
-    #         "first": "2015-03-23 07:30:00",
-    #         "interval": 3600,
-    #         "last": "2015-03-30 07:30:00",
-    #         "values": [ ...]
-    #     },
-    #     "1_year": {
-    #         "count": 177,
-    #         "factor": 0.001001001001001001,
-    #         "first": "2014-04-11 00:00:00",
-    #         "interval": 172800,
-    #         "last": "2015-03-29 00:00:00",
-    #         "values": [ ...]
-    #     },
-    #     "3_months": {
-    #         "count": 185,
-    #         "factor": 0.001001001001001001,
-    #         "first": "2014-12-28 06:00:00",
-    #         "interval": 43200,
-    #         "last": "2015-03-30 06:00:00",
-    #         "values": [ ...]
-    #     }
-    #   },
-    # extract exactly one piece of data per time interval,
-    # using smaller intervals where available.
-    #
-    # returns list of (age, length, value) dictionaries.
-
-    generic_history = []
-
-    periods = history.keys()
-    periods.sort(key = lambda x: history[x]['interval'])
-    now = datetime.datetime.utcnow()
-    newest = now
-    for p in periods:
-      h = history[p]
-      interval = datetime.timedelta(seconds = h['interval'])
-      this_ts = parse_ts(h['last'])
-
-      if (len(h['values']) != h['count']):
-        logging.warning('Inconsistent value count in %s document for %s'
-                        %(p, which))
-      for v in reversed(h['values']):
-        if (this_ts <= newest):
-          agt1 = now - this_ts
-          agt2 = interval
-          agetmp1 = (agt1.microseconds + (agt1.seconds + agt1.days * 24 * 3600)
-                     * 10**6) / 10**6
-          agetmp2 = (agt2.microseconds + (agt2.seconds + agt2.days * 24 * 3600)
-                     * 10**6) / 10**6
-          generic_history.append(
-            { 'age': agetmp1,
-              'length': agetmp2,
-              'value': v
-            })
-          newest = this_ts
-        this_ts -= interval
-
-      if (this_ts + interval != parse_ts(h['first'])):
-        logging.warning('Inconsistent time information in %s document for %s'
-                        %(p, which))
-
-    #print json.dumps(generic_history, sort_keys=True,
-    #                  indent=4, separators=(',', ': '))
-    return generic_history
-
-  @staticmethod
-  def _avg_generic_history(generic_history):
-    a = []
-    for i in generic_history:
-      if i['age'] > (ADDRESS_AND_PORT_STABLE_DAYS * 24 * 3600):
-        continue
-      if (i['length'] is not None
-          and i['age'] is not None
-          and i['value'] is not None):
-        w = i['length'] * math.pow(AGE_ALPHA, i['age']/(3600*24))
-        a.append( (i['value'] * w, w) )
-
-    sv = math.fsum(map(lambda x: x[0], a))
-    sw = math.fsum(map(lambda x: x[1], a))
-
-    if sw == 0.0:
-      svw = 0.0
-    else:
-      svw = sv/sw
-    return svw
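A small worked example of that weighting, usable once the class is defined (made-up history entries; values use Onionoo's 0-999 scale):

history = [
  {'age': 0,        'length': 86400, 'value': 999},   # a fresh one-day interval
  {'age': 30*86400, 'length': 86400, 'value': 0},     # 30 days old, relay was down
]
# The older interval is discounted by 0.99**30 (about 0.74), so the average
# stays closer to the recent value: roughly 0.57 after scaling.
print(Candidate._avg_generic_history(history) / ONIONOO_SCALE_ONE)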
-
-  def _add_generic_history(self, history):
-    # Debugging helper: list the available history periods, smallest
-    # interval first.
-    periods = history.keys()
-    periods.sort(key = lambda x: history[x]['interval'])
-
-    print periods
-
-  def add_running_history(self, history):
-    pass
-
-  def add_uptime(self, uptime):
-    logging.debug('Adding uptime %s.'%(self._fpr,))
-
-    # flags we care about: Running, V2Dir, Guard
-    if not 'flags' in uptime:
-      logging.debug('No flags in document for %s.'%(self._fpr,))
-      return
-
-    for f in ['Running', 'Guard', 'V2Dir']:
-      if not f in uptime['flags']:
-        logging.debug('No %s in flags for %s.'%(f, self._fpr,))
-        return
-
-    running = self._extract_generic_history(uptime['flags']['Running'],
-                                            '%s-Running'%(self._fpr))
-    guard = self._extract_generic_history(uptime['flags']['Guard'],
-                                          '%s-Guard'%(self._fpr))
-    v2dir = self._extract_generic_history(uptime['flags']['V2Dir'],
-                                          '%s-V2Dir'%(self._fpr))
-    if 'BadExit' in uptime['flags']:
-      badexit = self._extract_generic_history(uptime['flags']['BadExit'],
-                                              '%s-BadExit'%(self._fpr))
-
-    self._running = self._avg_generic_history(running) / ONIONOO_SCALE_ONE
-    self._guard = self._avg_generic_history(guard) / ONIONOO_SCALE_ONE
-    self._v2dir = self._avg_generic_history(v2dir) / ONIONOO_SCALE_ONE
-    self._badexit = None
-    if 'BadExit' in uptime['flags']:
-      self._badexit = self._avg_generic_history(badexit) / ONIONOO_SCALE_ONE
-
-  def is_candidate(self):
-    try:
-      if (MUST_BE_RUNNING_NOW and not self.is_running()):
-        log_excluded('%s not a candidate: not running now, unable to check ' +
-                     'DirPort consensus download', self._fpr)
-        return False
-      if (self._data['last_changed_address_or_port'] >
-          self.CUTOFF_ADDRESS_AND_PORT_STABLE):
-        log_excluded('%s not a candidate: changed address/port recently (%s)',
-                     self._fpr, self._data['last_changed_address_or_port'])
-        return False
-      if self._running < CUTOFF_RUNNING:
-        log_excluded('%s not a candidate: running avg too low (%lf)',
-                     self._fpr, self._running)
-        return False
-      if self._v2dir < CUTOFF_V2DIR:
-        log_excluded('%s not a candidate: v2dir avg too low (%lf)',
-                     self._fpr, self._v2dir)
-        return False
-      if self._badexit is not None and self._badexit > PERMITTED_BADEXIT:
-        log_excluded('%s not a candidate: badexit avg too high (%lf)',
-                     self._fpr, self._badexit)
-        return False
-      # this function logs a message depending on which check fails
-      if not self.is_valid_version():
-        return False
-      if self._guard < CUTOFF_GUARD:
-        log_excluded('%s not a candidate: guard avg too low (%lf)',
-                     self._fpr, self._guard)
-        return False
-      if (not self._data.has_key('consensus_weight')
-          or self._data['consensus_weight'] < 1):
-        log_excluded('%s not a candidate: consensus weight invalid', self._fpr)
-        return False
-    except BaseException as e:
-      logging.warning("Exception %s when checking if fallback is a candidate",
-                      str(e))
-      return False
-    return True
-
-  def id_matches(self, id, exact=False):
-    """ Does this fallback's id match id?
-        exact is ignored. """
-    return self._fpr == id
-
-  def ipv4_addr_matches(self, ipv4_addr, exact=False):
-    """ Does this fallback's IPv4 address match ipv4_addr?
-        exact is ignored. """
-    return self.dirip == ipv4_addr
-
-  def ipv4_dirport_matches(self, ipv4_dirport, exact=False):
-    """ Does this fallback's IPv4 dirport match ipv4_dirport?
-        If exact is False, always return True. """
-    if exact:
-      return self.dirport == int(ipv4_dirport)
-    else:
-      return True
-
-  def ipv4_and_dirport_matches(self, ipv4_addr, ipv4_dirport, exact=False):
-    """ Does this fallback's IPv4 address match ipv4_addr?
-        If exact is True, also check ipv4_dirport. """
-    ipv4_match = self.ipv4_addr_matches(ipv4_addr, exact=exact)
-    if exact:
-      return ipv4_match and self.ipv4_dirport_matches(ipv4_dirport,
-                                                      exact=exact)
-    else:
-      return ipv4_match
-
-  def ipv4_orport_matches(self, ipv4_orport, exact=False):
-    """ Does this fallback's IPv4 orport match ipv4_orport?
-        If exact is False, always return True. """
-    if exact:
-      return self.orport == int(ipv4_orport)
-    else:
-      return True
-
-  def ipv4_and_orport_matches(self, ipv4_addr, ipv4_orport, exact=False):
-    """ Does this fallback's IPv4 address match ipv4_addr?
-        If exact is True, also check ipv4_orport. """
-    ipv4_match = self.ipv4_addr_matches(ipv4_addr, exact=exact)
-    if exact:
-      return ipv4_match and self.ipv4_orport_matches(ipv4_orport,
-                                                     exact=exact)
-    else:
-      return ipv4_match
-
-  def ipv6_addr_matches(self, ipv6_addr, exact=False):
-    """ Does this fallback's IPv6 address match ipv6_addr?
-        Both addresses must be present to match.
-        exact is ignored. """
-    if self.has_ipv6() and ipv6_addr is not None:
-      # Check that we have a bracketed IPv6 address without a port
-      assert(ipv6_addr.startswith('[') and ipv6_addr.endswith(']'))
-      return self.ipv6addr == ipv6_addr
-    else:
-      return False
-
-  def ipv6_orport_matches(self, ipv6_orport, exact=False):
-    """ Does this fallback's IPv6 orport match ipv6_orport?
-        Both ports must be present to match.
-        If exact is False, always return True. """
-    if exact:
-      return (self.has_ipv6() and ipv6_orport is not None and
-              self.ipv6orport == int(ipv6_orport))
-    else:
-      return True
-
-  def ipv6_and_orport_matches(self, ipv6_addr, ipv6_orport, exact=False):
-    """ Does this fallback's IPv6 address match ipv6_addr?
-        If exact is True, also check ipv6_orport. """
-    ipv6_match = self.ipv6_addr_matches(ipv6_addr, exact=exact)
-    if exact:
-      return ipv6_match and self.ipv6_orport_matches(ipv6_orport,
-                                                     exact=exact)
-    else:
-      return ipv6_match
-
-  def entry_matches_exact(self, entry):
-    """ Is entry an exact match for this fallback?
-        A fallback is an exact match for entry if each key in entry matches:
-          ipv4
-          dirport
-          orport
-          id
-          ipv6 address and port (if present in the fallback or the whitelist)
-        If the fallback has an ipv6 key, the whitelist line must also have
-        it, otherwise they don't match.
-
-        Logs a warning-level message if the fallback would be an exact match,
-        but one of the id, ipv4, ipv4 orport, ipv4 dirport, or ipv6 orport
-        have changed. """
-    if not self.id_matches(entry['id'], exact=True):
-      # can't log here unless we match an IP and port, because every relay's
-      # fingerprint is compared to every entry's fingerprint
-      if self.ipv4_and_orport_matches(entry['ipv4'],
-                                      entry['orport'],
-                                      exact=True):
-        logging.warning('%s excluded: has OR %s:%d changed fingerprint to ' +
-                        '%s?', entry['id'], self.dirip, self.orport,
-                        self._fpr)
-      if self.ipv6_and_orport_matches(entry.get('ipv6_addr'),
-                                      entry.get('ipv6_orport'),
-                                      exact=True):
-        logging.warning('%s excluded: has OR %s changed fingerprint to ' +
-                        '%s?', entry['id'], entry['ipv6'], self._fpr)
-      return False
-    if not self.ipv4_addr_matches(entry['ipv4'], exact=True):
-      logging.warning('%s excluded: has it changed IPv4 from %s to %s?',
-                      self._fpr, entry['ipv4'], self.dirip)
-      return False
-    if not self.ipv4_dirport_matches(entry['dirport'], exact=True):
-      logging.warning('%s excluded: has it changed DirPort from %s:%d to ' +
-                      '%s:%d?', self._fpr, self.dirip, int(entry['dirport']),
-                      self.dirip, self.dirport)
-      return False
-    if not self.ipv4_orport_matches(entry['orport'], exact=True):
-      logging.warning('%s excluded: has it changed ORPort from %s:%d to ' +
-                      '%s:%d?', self._fpr, self.dirip, int(entry['orport']),
-                      self.dirip, self.orport)
-      return False
-    if entry.has_key('ipv6') and self.has_ipv6():
-      # if both entry and fallback have an ipv6 address, compare them
-      if not self.ipv6_and_orport_matches(entry['ipv6_addr'],
-                                          entry['ipv6_orport'],
-                                          exact=True):
-        logging.warning('%s excluded: has it changed IPv6 ORPort from %s ' +
-                        'to %s:%d?', self._fpr, entry['ipv6'],
-                        self.ipv6addr, self.ipv6orport)
-        return False
-    # if the fallback has an IPv6 address but the whitelist entry
-    # doesn't, or vice versa, the whitelist entry doesn't match
-    elif entry.has_key('ipv6') and not self.has_ipv6():
-      logging.warning('%s excluded: has it lost its former IPv6 address %s?',
-                      self._fpr, entry['ipv6'])
-      return False
-    elif not entry.has_key('ipv6') and self.has_ipv6():
-      logging.warning('%s excluded: has it gained an IPv6 address %s:%d?',
-                      self._fpr, self.ipv6addr, self.ipv6orport)
-      return False
-    return True
-
-  def entry_matches_fuzzy(self, entry):
-    """ Is entry a fuzzy match for this fallback?
-        A fallback is a fuzzy match for entry if at least one of these keys
-        in entry matches:
-          id
-          ipv4
-          ipv6 (if present in both the fallback and whitelist)
-        The ports and nickname are ignored. Missing or extra ipv6 addresses
-        are ignored.
-
-        Doesn't log any warning messages. """
-    if self.id_matches(entry['id'], exact=False):
-      return True
-    if self.ipv4_addr_matches(entry['ipv4'], exact=False):
-      return True
-    if entry.has_key('ipv6') and self.has_ipv6():
-      # if both entry and fallback have an ipv6 address, compare them
-      if self.ipv6_addr_matches(entry['ipv6_addr'], exact=False):
-        return True
-    return False
-
-  def is_in_whitelist(self, relaylist, exact=False):
-    """ If exact is True (existing fallback list), check if this fallback is
-        an exact match for any whitelist entry, using entry_matches_exact().
-
-        If exact is False (new fallback whitelist), check if this fallback is
-        a fuzzy match for any whitelist entry, using entry_matches_fuzzy(). """
-    for entry in relaylist:
-      if exact:
-        if self.entry_matches_exact(entry):
-          return True
-      else:
-        if self.entry_matches_fuzzy(entry):
-          return True
-    return False
-
-  def cw_to_bw_factor(self):
-    # any relays with a missing or zero consensus weight are not candidates
-    # any relays with a missing advertised bandwidth have it set to zero
-    return self._data['advertised_bandwidth'] / self._data['consensus_weight']
-
-  # since advertised_bandwidth is reported by the relay, it can be gamed
-  # to avoid this, use the median consensus weight to bandwidth factor to
-  # estimate this relay's measured bandwidth, and make that the upper limit
-  def measured_bandwidth(self, median_cw_to_bw_factor):
-    cw_to_bw= median_cw_to_bw_factor
-    # Reduce exit bandwidth to make sure we're not overloading them
-    if self.is_exit():
-      cw_to_bw *= EXIT_BANDWIDTH_FRACTION
-    measured_bandwidth = self._data['consensus_weight'] * cw_to_bw
-    if self._data['advertised_bandwidth'] != 0:
-      # limit advertised bandwidth (if available) to measured bandwidth
-      return min(measured_bandwidth, self._data['advertised_bandwidth'])
-    else:
-      return measured_bandwidth
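With made-up numbers, the capping above works out like this:

consensus_weight = 30000          # hypothetical Onionoo consensus weight
median_cw_to_bw = 25.0            # hypothetical median bytes/s per weight unit
advertised_bandwidth = 600000     # hypothetical self-reported bytes/s
estimate = consensus_weight * median_cw_to_bw   # 750000.0 bytes/s
capped = min(estimate, advertised_bandwidth)    # 600000, advertised caps it
print(capped >= MIN_BANDWIDTH)                  # True: above the 512000 bytes/s floor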
-
-  def set_measured_bandwidth(self, median_cw_to_bw_factor):
-    self._data['measured_bandwidth'] = self.measured_bandwidth(
-                                                      median_cw_to_bw_factor)
-
-  def is_exit(self):
-    return 'Exit' in self._data['flags']
-
-  def is_guard(self):
-    return 'Guard' in self._data['flags']
-
-  def is_running(self):
-    return 'Running' in self._data['flags']
-
-  # does this fallback have an IPv6 address and orport?
-  def has_ipv6(self):
-    return self.ipv6addr is not None and self.ipv6orport is not None
-
-  # strip leading and trailing brackets from an IPv6 address
-  # safe to use on non-bracketed IPv6 and on IPv4 addresses
-  # also convert to unicode, and make None appear as ''
-  @staticmethod
-  def strip_ipv6_brackets(ip):
-    if ip is None:
-      return unicode('')
-    if len(ip) < 2:
-      return unicode(ip)
-    if ip[0] == '[' and ip[-1] == ']':
-      return unicode(ip[1:-1])
-    return unicode(ip)
-
-  # are ip_a and ip_b in the same netblock?
-  # mask_bits is the size of the netblock
-  # takes both IPv4 and IPv6 addresses
-  # the versions of ip_a and ip_b must be the same
-  # the mask must be valid for the IP version
-  @staticmethod
-  def netblocks_equal(ip_a, ip_b, mask_bits):
-    if ip_a is None or ip_b is None:
-      return False
-    ip_a = Candidate.strip_ipv6_brackets(ip_a)
-    ip_b = Candidate.strip_ipv6_brackets(ip_b)
-    a = ipaddress.ip_address(ip_a)
-    b = ipaddress.ip_address(ip_b)
-    if a.version != b.version:
-      raise Exception('Mismatching IP versions in %s and %s'%(ip_a, ip_b))
-    if mask_bits > a.max_prefixlen:
-      logging.error('Bad IP mask %d for %s and %s'%(mask_bits, ip_a, ip_b))
-      mask_bits = a.max_prefixlen
-    if mask_bits < 0:
-      logging.error('Bad IP mask %d for %s and %s'%(mask_bits, ip_a, ip_b))
-      mask_bits = 0
-    a_net = ipaddress.ip_network('%s/%d'%(ip_a, mask_bits), strict=False)
-    return b in a_net
-
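A standalone sketch of the same ipaddress-based comparison (addresses invented;
note the unicode strings required by the Python 2 ipaddress backport):

    import ipaddress
    net = ipaddress.ip_network(u'192.0.2.10/24', strict=False)
    print(ipaddress.ip_address(u'192.0.2.200') in net)   # True: same /24
    print(ipaddress.ip_address(u'198.51.100.1') in net)  # False: different /24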
-  # is this fallback's IPv4 address (dirip) in the same netblock as other's
-  # IPv4 address?
-  # mask_bits is the size of the netblock
-  def ipv4_netblocks_equal(self, other, mask_bits):
-    return Candidate.netblocks_equal(self.dirip, other.dirip, mask_bits)
-
-  # is this fallback's IPv6 address (ipv6addr) in the same netblock as
-  # other's IPv6 address?
-  # Returns False if either fallback has no IPv6 address
-  # mask_bits is the size of the netblock
-  def ipv6_netblocks_equal(self, other, mask_bits):
-    if not self.has_ipv6() or not other.has_ipv6():
-      return False
-    return Candidate.netblocks_equal(self.ipv6addr, other.ipv6addr, mask_bits)
-
-  # is this fallback's IPv4 DirPort the same as other's IPv4 DirPort?
-  def dirport_equal(self, other):
-    return self.dirport == other.dirport
-
-  # is this fallback's IPv4 ORPort the same as other's IPv4 ORPort?
-  def ipv4_orport_equal(self, other):
-    return self.orport == other.orport
-
-  # is this fallback's IPv6 ORPort the same as other's IPv6 ORPort?
-  # Returns False if either fallback has no IPv6 address
-  def ipv6_orport_equal(self, other):
-    if not self.has_ipv6() or not other.has_ipv6():
-      return False
-    return self.ipv6orport == other.ipv6orport
-
-  # does this fallback have the same DirPort, IPv4 ORPort, or
-  # IPv6 ORPort as other?
-  # Ignores IPv6 ORPort if either fallback has no IPv6 address
-  def port_equal(self, other):
-    return (self.dirport_equal(other) or self.ipv4_orport_equal(other)
-            or self.ipv6_orport_equal(other))
-
-  # return a list containing IPv4 ORPort, DirPort, and IPv6 ORPort (if present)
-  def port_list(self):
-    ports = [self.dirport, self.orport]
-    if self.has_ipv6() and not self.ipv6orport in ports:
-      ports.append(self.ipv6orport)
-    return ports
-
-  # does this fallback share a port with other, regardless of whether the
-  # port types match?
-  # For example, if self's IPv4 ORPort is 80 and other's DirPort is 80,
-  # return True
-  def port_shared(self, other):
-    for p in self.port_list():
-      if p in other.port_list():
-        return True
-    return False
-
-  # log how long it takes to download a consensus from dirip:dirport
-  # returns True if the download failed, False if it succeeded within max_time
-  @staticmethod
-  def fallback_consensus_download_speed(dirip, dirport, nickname, fingerprint,
-                                        max_time):
-    download_failed = False
-    # some directory mirrors respond to requests in ways that hang python
-    # sockets, which is why we log this line here
-    logging.info('Initiating %sconsensus download from %s (%s:%d) %s.',
-                 'microdesc ' if DOWNLOAD_MICRODESC_CONSENSUS else '',
-                 nickname, dirip, dirport, fingerprint)
-    # there appears to be about 1 second of overhead when comparing stem's
-    # internal trace time and the elapsed time calculated here
-    TIMEOUT_SLOP = 1.0
-    start = datetime.datetime.utcnow()
-    try:
-      consensus = get_consensus(
-                              endpoints = [(dirip, dirport)],
-                              timeout = (max_time + TIMEOUT_SLOP),
-                              validate = True,
-                              retries = 0,
-                              fall_back_to_authority = False,
-                              document_handler = DocumentHandler.BARE_DOCUMENT,
-                              microdescriptor = DOWNLOAD_MICRODESC_CONSENSUS
-                                ).run()[0]
-      end = datetime.datetime.utcnow()
-      time_since_expiry = (end - consensus.valid_until).total_seconds()
-      time_until_valid = (consensus.valid_after - end).total_seconds()
-    except Exception, stem_error:
-      end = datetime.datetime.utcnow()
-      log_excluded('Unable to retrieve a consensus from %s: %s', nickname,
-                    stem_error)
-      status = 'error: "%s"' % (stem_error)
-      level = logging.WARNING
-      download_failed = True
-    elapsed = (end - start).total_seconds()
-    if download_failed:
-      # keep the error failure status, and avoid using the variables
-      pass
-    elif elapsed > max_time:
-      status = 'too slow'
-      level = logging.WARNING
-      download_failed = True
-    elif (time_since_expiry > 0):
-      status = 'outdated consensus, expired %ds ago'%(int(time_since_expiry))
-      if time_since_expiry <= REASONABLY_LIVE_TIME:
-        status += ', tolerating up to %ds'%(REASONABLY_LIVE_TIME)
-        level = logging.INFO
-      else:
-        status += ', invalid'
-        level = logging.WARNING
-        download_failed = True
-    elif (time_until_valid > 0):
-      status = 'future consensus, valid in %ds'%(int(time_until_valid))
-      if time_until_valid <= REASONABLY_LIVE_TIME:
-        status += ', tolerating up to %ds'%(REASONABLY_LIVE_TIME)
-        level = logging.INFO
-      else:
-        status += ', invalid'
-        level = logging.WARNING
-        download_failed = True
-    else:
-      status = 'ok'
-      level = logging.DEBUG
-    logging.log(level, 'Consensus download: %0.1fs %s from %s (%s:%d) %s, ' +
-                 'max download time %0.1fs.', elapsed, status, nickname,
-                 dirip, dirport, fingerprint, max_time)
-    return download_failed
-
-  # does this fallback download the consensus fast enough?
-  def check_fallback_download_consensus(self):
-    # include the relay if we're not doing a check, or we can't check (IPv6)
-    ipv4_failed = False
-    ipv6_failed = False
-    if PERFORM_IPV4_DIRPORT_CHECKS:
-      ipv4_failed = Candidate.fallback_consensus_download_speed(self.dirip,
-                                                self.dirport,
-                                                self._data['nickname'],
-                                                self._fpr,
-                                                CONSENSUS_DOWNLOAD_SPEED_MAX)
-    if self.has_ipv6() and PERFORM_IPV6_DIRPORT_CHECKS:
-      # Clients assume the IPv6 DirPort is the same as the IPv4 DirPort
-      ipv6_failed = Candidate.fallback_consensus_download_speed(self.ipv6addr,
-                                                self.dirport,
-                                                self._data['nickname'],
-                                                self._fpr,
-                                                CONSENSUS_DOWNLOAD_SPEED_MAX)
-    return ((not ipv4_failed) and (not ipv6_failed))
-
-  # if this fallback has not passed a download check, try it again,
-  # and record the result, available in get_fallback_download_consensus
-  def try_fallback_download_consensus(self):
-    if not self.get_fallback_download_consensus():
-      self._data['download_check'] = self.check_fallback_download_consensus()
-
-  # did this fallback pass the download check?
-  def get_fallback_download_consensus(self):
-    # if we're not performing checks, return True
-    if not PERFORM_IPV4_DIRPORT_CHECKS and not PERFORM_IPV6_DIRPORT_CHECKS:
-      return True
-    # if we are performing checks, but haven't done one, return False
-    if not self._data.has_key('download_check'):
-      return False
-    return self._data['download_check']
-
-  # output an optional header comment and info for this fallback
-  # try_fallback_download_consensus before calling this
-  def fallbackdir_line(self, fallbacks, prefilter_fallbacks):
-    s = ''
-    if OUTPUT_COMMENTS:
-      s += self.fallbackdir_comment(fallbacks, prefilter_fallbacks)
-    # if the download speed is ok, output a C string
-    # if it's not, but we OUTPUT_COMMENTS, output a commented-out C string
-    if self.get_fallback_download_consensus() or OUTPUT_COMMENTS:
-      s += self.fallbackdir_info(self.get_fallback_download_consensus())
-    return s
-
-  # output a header comment for this fallback
-  def fallbackdir_comment(self, fallbacks, prefilter_fallbacks):
-    # /*
-    # nickname
-    # flags
-    # adjusted bandwidth, consensus weight
-    # [contact]
-    # [identical contact counts]
-    # */
-    # Multiline C comment
-    s = '/*'
-    s += '\n'
-    s += cleanse_c_multiline_comment(self._data['nickname'])
-    s += '\n'
-    s += 'Flags: '
-    s += cleanse_c_multiline_comment(' '.join(sorted(self._data['flags'])))
-    s += '\n'
-    # this is an adjusted bandwidth, see calculate_measured_bandwidth()
-    bandwidth = self._data['measured_bandwidth']
-    weight = self._data['consensus_weight']
-    s += 'Bandwidth: %.1f MByte/s, Consensus Weight: %d'%(
-        bandwidth/(1024.0*1024.0),
-        weight)
-    s += '\n'
-    if self._data['contact'] is not None:
-      s += cleanse_c_multiline_comment(self._data['contact'])
-      if CONTACT_COUNT:
-        fallback_count = len([f for f in fallbacks
-                              if f._data['contact'] == self._data['contact']])
-        if fallback_count > 1:
-          s += '\n'
-          s += '%d identical contacts listed' % (fallback_count)
-    s += '\n'
-    s += '*/'
-    s += '\n'
-    return s
-
-  # output the fallback info C string for this fallback
-  # this is the text that would go after FallbackDir in a torrc
-  # if this relay failed the download test and we OUTPUT_COMMENTS,
-  # comment-out the returned string
-  def fallbackdir_info(self, dl_speed_ok):
-    # "address:dirport orport=port id=fingerprint"
-    # (insert additional mandatory fields here)
-    # "[ipv6=addr:orport]"
-    # (insert additional optional fields here)
-    # /* nickname=name */
-    # /* extrainfo={0,1} */
-    # (insert additional comment fields here)
-    # /* ===== */
-    # ,
-    #
-    # Do we want a C string, or a commented-out string?
-    c_string = dl_speed_ok
-    comment_string = not dl_speed_ok and OUTPUT_COMMENTS
-    # If we don't want either kind of string, bail
-    if not c_string and not comment_string:
-      return ''
-    s = ''
-    # Comment out the fallback directory entry if it's too slow
-    # See the debug output for which address and port is failing
-    if comment_string:
-      s += '/* Consensus download failed or was too slow:\n'
-    # Multi-Line C string with trailing comma (part of a string list)
-    # This makes it easier to diff the file, and remove IPv6 lines using grep
-    # Integers don't need escaping
-    s += '"%s orport=%d id=%s"'%(
-            cleanse_c_string(self._data['dir_address']),
-            self.orport,
-            cleanse_c_string(self._fpr))
-    s += '\n'
-    # (insert additional mandatory fields here)
-    if self.has_ipv6():
-      s += '" ipv6=%s:%d"'%(cleanse_c_string(self.ipv6addr), self.ipv6orport)
-      s += '\n'
-    # (insert additional optional fields here)
-    if not comment_string:
-      s += '/* '
-    s += 'nickname=%s'%(cleanse_c_string(self._data['nickname']))
-    if not comment_string:
-      s += ' */'
-    s += '\n'
-    # if we know that the fallback is an extrainfo cache, flag it
-    # and if we don't know, assume it is not
-    if not comment_string:
-      s += '/* '
-    s += 'extrainfo=%d'%(1 if self._extra_info_cache else 0)
-    if not comment_string:
-      s += ' */'
-    s += '\n'
-    # (insert additional comment fields here)
-    # The terminator and comma must be the last line in each fallback entry
-    if not comment_string:
-      s += '/* '
-    s += SECTION_SEPARATOR_BASE
-    if not comment_string:
-      s += ' */'
-    s += '\n'
-    s += ','
-    if comment_string:
-      s += '\n'
-      s += '*/'
-    return s
-
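For a fallback that passes the download check, the generated entry looks roughly
like this (addresses, fingerprint and nickname invented; SECTION_SEPARATOR_BASE
shown here as "=====" for illustration):

    "192.0.2.1:80 orport=443 id=0000000000000000000000000000000000000000"
    " ipv6=[2001:db8::1]:443"
    /* nickname=ExampleRelay */
    /* extrainfo=0 */
    /* ===== */
    ,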
-## Fallback Candidate List Class
-
-class CandidateList(dict):
-  def __init__(self):
-    pass
-
-  def _add_relay(self, details):
-    if not 'dir_address' in details: return
-    c = Candidate(details)
-    self[ c.get_fingerprint() ] = c
-
-  def _add_uptime(self, uptime):
-    try:
-      fpr = uptime['fingerprint']
-    except KeyError:
-      raise Exception("Document has no fingerprint field.")
-
-    try:
-      c = self[fpr]
-    except KeyError:
-      logging.debug('Got unknown relay %s in uptime document.'%(fpr,))
-      return
-
-    c.add_uptime(uptime)
-
-  def _add_details(self):
-    logging.debug('Loading details document.')
-    d = fetch('details',
-        fields=('fingerprint,nickname,contact,last_changed_address_or_port,' +
-                'consensus_weight,advertised_bandwidth,or_addresses,' +
-                'dir_address,recommended_version,flags,effective_family,' +
-                'platform'))
-    logging.debug('Loading details document done.')
-
-    if not 'relays' in d: raise Exception("No relays found in document.")
-
-    for r in d['relays']: self._add_relay(r)
-
-  def _add_uptimes(self):
-    logging.debug('Loading uptime document.')
-    d = fetch('uptime')
-    logging.debug('Loading uptime document done.')
-
-    if not 'relays' in d: raise Exception("No relays found in document.")
-    for r in d['relays']: self._add_uptime(r)
-
-  def add_relays(self):
-    self._add_details()
-    self._add_uptimes()
-
-  def count_guards(self):
-    guard_count = 0
-    for fpr in self.keys():
-      if self[fpr].is_guard():
-        guard_count += 1
-    return guard_count
-
-  # Find fallbacks that fit the uptime, stability, and flags criteria,
-  # and make an array of them in self.fallbacks
-  def compute_fallbacks(self):
-    self.fallbacks = map(lambda x: self[x],
-                         filter(lambda x: self[x].is_candidate(),
-                                self.keys()))
-
-  # sort fallbacks by their consensus weight to advertised bandwidth factor,
-  # lowest to highest
-  # used to find the median cw_to_bw_factor()
-  def sort_fallbacks_by_cw_to_bw_factor(self):
-    self.fallbacks.sort(key=lambda f: f.cw_to_bw_factor())
-
-  # sort fallbacks by their measured bandwidth, highest to lowest
-  # calculate_measured_bandwidth before calling this
-  # this is useful for reviewing candidates in priority order
-  def sort_fallbacks_by_measured_bandwidth(self):
-    self.fallbacks.sort(key=lambda f: f._data['measured_bandwidth'],
-                        reverse=True)
-
-  # sort fallbacks by the data field data_field, lowest to highest
-  def sort_fallbacks_by(self, data_field):
-    self.fallbacks.sort(key=lambda f: f._data[data_field])
-
-  @staticmethod
-  def load_relaylist(file_obj):
-    """ Read each line in the file, and parse it like a FallbackDir line:
-        an IPv4 address and optional port:
-          <IPv4 address>:<port>
-        which are parsed into dictionary entries:
-          ipv4=<IPv4 address>
-          dirport=<port>
-        followed by a series of key=value entries:
-          orport=<port>
-          id=<fingerprint>
-          ipv6=<IPv6 address>:<IPv6 orport>
-        each line's key/value pairs are placed in a dictionary
-        (of string -> string key/value pairs),
-        and these dictionaries are placed in an array.
-        Comments start with # and are ignored. """
-    file_data = file_obj['data']
-    file_name = file_obj['name']
-    relaylist = []
-    if file_data is None:
-      return relaylist
-    for line in file_data.split('\n'):
-      relay_entry = {}
-      # ignore comments
-      line_comment_split = line.split('#')
-      line = line_comment_split[0]
-      # cleanup whitespace
-      line = cleanse_whitespace(line)
-      line = line.strip()
-      if len(line) == 0:
-        continue
-      for item in line.split(' '):
-        item = item.strip()
-        if len(item) == 0:
-          continue
-        key_value_split = item.split('=')
-        kvl = len(key_value_split)
-        if kvl < 1 or kvl > 2:
-          print '#error Bad %s item: %s, format is key=value.'%(
-                                                 file_name, item)
-        if kvl == 1:
-          # assume that entries without a key are the ipv4 address,
-          # perhaps with a dirport
-          ipv4_maybe_dirport = key_value_split[0]
-          ipv4_maybe_dirport_split = ipv4_maybe_dirport.split(':')
-          dirl = len(ipv4_maybe_dirport_split)
-          if dirl < 1 or dirl > 2:
-            print '#error Bad %s IPv4 item: %s, format is ipv4:port.'%(
-                                                        file_name, item)
-          if dirl >= 1:
-            relay_entry['ipv4'] = ipv4_maybe_dirport_split[0]
-          if dirl == 2:
-            relay_entry['dirport'] = ipv4_maybe_dirport_split[1]
-        elif kvl == 2:
-          relay_entry[key_value_split[0]] = key_value_split[1]
-          # split ipv6 addresses and orports
-          if key_value_split[0] == 'ipv6':
-            ipv6_orport_split = key_value_split[1].rsplit(':', 1)
-            ipv6l = len(ipv6_orport_split)
-            if ipv6l != 2:
-              print '#error Bad %s IPv6 item: %s, format is [ipv6]:orport.'%(
-                                                          file_name, item)
-            relay_entry['ipv6_addr'] = ipv6_orport_split[0]
-            relay_entry['ipv6_orport'] = ipv6_orport_split[1]
-      relaylist.append(relay_entry)
-    return relaylist
-
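A hypothetical usage sketch (the real callers build file_obj in
process_existing() and process_default() below):

    file_obj = {'data': '192.0.2.1:80 orport=443 '
                        'id=0000000000000000000000000000000000000000 '
                        'ipv6=[2001:db8::1]:443',
                'name': 'example.whitelist'}
    entries = CandidateList.load_relaylist(file_obj)
    # entries[0]['ipv4'] == '192.0.2.1', entries[0]['dirport'] == '80'
    # entries[0]['ipv6_addr'] == '[2001:db8::1]', entries[0]['ipv6_orport'] == '443'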
-  def apply_filter_lists(self, whitelist_obj, exact=False):
-    """ Apply the fallback whitelist_obj to this fallback list,
-        passing exact to is_in_whitelist(). """
-    excluded_count = 0
-    list_type = 'whitelist'
-    if whitelist_obj['check_existing']:
-        list_type = 'fallback list'
-
-    logging.debug('Applying {}'.format(list_type))
-    # parse the whitelist
-    whitelist = self.load_relaylist(whitelist_obj)
-    filtered_fallbacks = []
-    for f in self.fallbacks:
-      in_whitelist = f.is_in_whitelist(whitelist, exact=exact)
-      if in_whitelist:
-        # include
-        filtered_fallbacks.append(f)
-      elif INCLUDE_UNLISTED_ENTRIES:
-          # include
-          filtered_fallbacks.append(f)
-      else:
-          # exclude
-          excluded_count += 1
-          log_excluded('Excluding %s: not in %s.',
-                       f._fpr, list_type)
-    self.fallbacks = filtered_fallbacks
-    return excluded_count
-
-  @staticmethod
-  def summarise_filters(initial_count, excluded_count, check_existing):
-    list_type = 'Whitelist'
-    if check_existing:
-        list_type = 'Fallback list'
-
-    return '/* %s excluded %d of %d candidates. */'%(list_type,
-                                                excluded_count, initial_count)
-
-  # calculate each fallback's measured bandwidth based on the median
-  # consensus weight to advertised bandwidth ratio
-  def calculate_measured_bandwidth(self):
-    self.sort_fallbacks_by_cw_to_bw_factor()
-    median_fallback = self.fallback_median(True)
-    if median_fallback is not None:
-      median_cw_to_bw_factor = median_fallback.cw_to_bw_factor()
-    else:
-      # this will never be used, because there are no fallbacks
-      median_cw_to_bw_factor = None
-    for f in self.fallbacks:
-      f.set_measured_bandwidth(median_cw_to_bw_factor)
-
-  # remove relays with low measured bandwidth from the fallback list
-  # calculate_measured_bandwidth for each relay before calling this
-  def remove_low_bandwidth_relays(self):
-    if MIN_BANDWIDTH is None:
-      return
-    above_min_bw_fallbacks = []
-    for f in self.fallbacks:
-      if f._data['measured_bandwidth'] >= MIN_BANDWIDTH:
-        above_min_bw_fallbacks.append(f)
-      else:
-        # the bandwidth we log here is limited by the relay's consensus weight
-        # as well as its advertised bandwidth. See set_measured_bandwidth
-        # for details
-        log_excluded('%s not a candidate: bandwidth %.1fMByte/s too low, ' +
-                     'must be at least %.1fMByte/s', f._fpr,
-                     f._data['measured_bandwidth']/(1024.0*1024.0),
-                     MIN_BANDWIDTH/(1024.0*1024.0))
-    self.fallbacks = above_min_bw_fallbacks
-
-  # the minimum fallback in the list
-  # call one of the sort_fallbacks_* functions before calling this
-  def fallback_min(self):
-    if len(self.fallbacks) > 0:
-      return self.fallbacks[-1]
-    else:
-      return None
-
-  # the median fallback in the list
-  # call one of the sort_fallbacks_* functions before calling this
-  def fallback_median(self, require_advertised_bandwidth):
-    # use the low-median when there is an even number of fallbacks,
-    # for consistency with the bandwidth authorities
-    if len(self.fallbacks) > 0:
-      median_position = (len(self.fallbacks) - 1) / 2
-      if not require_advertised_bandwidth:
-        return self.fallbacks[median_position]
-      # if we need advertised_bandwidth but this relay doesn't have it,
-      # move to a fallback with greater consensus weight until we find one
-      while not self.fallbacks[median_position]._data['advertised_bandwidth']:
-        median_position += 1
-        if median_position >= len(self.fallbacks):
-          return None
-      return self.fallbacks[median_position]
-    else:
-      return None
-
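For example, with four sorted fallbacks the low-median is chosen:

    fallback_count = 4
    median_position = (fallback_count - 1) / 2   # == 1 (Python 2 integer division),
                                                 # the lower of the two middle entries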
-  # the maximum fallback in the list
-  # call one of the sort_fallbacks_* functions before calling this
-  def fallback_max(self):
-    if len(self.fallbacks) > 0:
-      return self.fallbacks[0]
-    else:
-      return None
-
-  # return a new bag suitable for storing attributes
-  @staticmethod
-  def attribute_new():
-    return dict()
-
-  # get the count of attribute in attribute_bag
-  # if attribute is None or the empty string, return 0
-  @staticmethod
-  def attribute_count(attribute, attribute_bag):
-    if attribute is None or attribute == '':
-      return 0
-    if attribute not in attribute_bag:
-      return 0
-    return attribute_bag[attribute]
-
-  # does attribute_bag contain more than max_count instances of attribute?
-  # if so, return False
-  # if not, return True
-  # if attribute is None or the empty string, or max_count is invalid,
-  # always return True
-  @staticmethod
-  def attribute_allow(attribute, attribute_bag, max_count=1):
-    if attribute is None or attribute == '' or max_count <= 0:
-      return True
-    elif CandidateList.attribute_count(attribute, attribute_bag) >= max_count:
-      return False
-    else:
-      return True
-
-  # add attribute to attribute_bag, incrementing the count if it is already
-  # present
-  # if attribute is None or the empty string, or count is invalid,
-  # do nothing
-  @staticmethod
-  def attribute_add(attribute, attribute_bag, count=1):
-    if attribute is None or attribute == '' or count <= 0:
-      return
-    attribute_bag.setdefault(attribute, 0)
-    attribute_bag[attribute] += count
-
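A short sketch of how the attribute bag is used by the limit_* methods below
(values invented):

    bag = CandidateList.attribute_new()
    CandidateList.attribute_allow('203.0.113.5', bag, max_count=1)  # True, unseen
    CandidateList.attribute_add('203.0.113.5', bag)
    CandidateList.attribute_allow('203.0.113.5', bag, max_count=1)  # False, at limit
    CandidateList.attribute_allow('203.0.113.9', bag, max_count=1)  # True, different IP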
-  # make sure there are only MAX_FALLBACKS_PER_IP fallbacks per IPv4 address,
-  # and per IPv6 address
-  # there is only one IPv4 address on each fallback: the IPv4 DirPort address
-  # (we choose the IPv4 ORPort which is on the same IPv4 as the DirPort)
-  # there is at most one IPv6 address on each fallback: the IPv6 ORPort address
-  # we try to match the IPv4 ORPort, but will use any IPv6 address if needed
-  # (clients only use the IPv6 ORPort)
-  # if there is no IPv6 address, only the IPv4 address is checked
-  # return the number of candidates we excluded
-  def limit_fallbacks_same_ip(self):
-    ip_limit_fallbacks = []
-    ip_list = CandidateList.attribute_new()
-    for f in self.fallbacks:
-      if (CandidateList.attribute_allow(f.dirip, ip_list,
-                                        MAX_FALLBACKS_PER_IPV4)
-          and CandidateList.attribute_allow(f.ipv6addr, ip_list,
-                                            MAX_FALLBACKS_PER_IPV6)):
-        ip_limit_fallbacks.append(f)
-        CandidateList.attribute_add(f.dirip, ip_list)
-        if f.has_ipv6():
-          CandidateList.attribute_add(f.ipv6addr, ip_list)
-      elif not CandidateList.attribute_allow(f.dirip, ip_list,
-                                             MAX_FALLBACKS_PER_IPV4):
-        log_excluded('Eliminated %s: already have %d fallback(s) on IPv4 %s'
-                     %(f._fpr, CandidateList.attribute_count(f.dirip, ip_list),
-                       f.dirip))
-      elif (f.has_ipv6() and
-            not CandidateList.attribute_allow(f.ipv6addr, ip_list,
-                                              MAX_FALLBACKS_PER_IPV6)):
-        log_excluded('Eliminated %s: already have %d fallback(s) on IPv6 %s'
-                     %(f._fpr, CandidateList.attribute_count(f.ipv6addr,
-                                                             ip_list),
-                       f.ipv6addr))
-    original_count = len(self.fallbacks)
-    self.fallbacks = ip_limit_fallbacks
-    return original_count - len(self.fallbacks)
-
-  # make sure there are only MAX_FALLBACKS_PER_CONTACT fallbacks for each
-  # ContactInfo
-  # if there is no ContactInfo, allow the fallback
-  # this check can be gamed by providing no ContactInfo, or by setting the
-  # ContactInfo to match another fallback
-  # However, given the likelihood that relays with the same ContactInfo will
-  # go down at similar times, its usefulness outweighs the risk
-  def limit_fallbacks_same_contact(self):
-    contact_limit_fallbacks = []
-    contact_list = CandidateList.attribute_new()
-    for f in self.fallbacks:
-      if CandidateList.attribute_allow(f._data['contact'], contact_list,
-                                       MAX_FALLBACKS_PER_CONTACT):
-        contact_limit_fallbacks.append(f)
-        CandidateList.attribute_add(f._data['contact'], contact_list)
-      else:
-        log_excluded(
-          'Eliminated %s: already have %d fallback(s) on ContactInfo %s'
-          %(f._fpr, CandidateList.attribute_count(f._data['contact'],
-                                                  contact_list),
-            f._data['contact']))
-    original_count = len(self.fallbacks)
-    self.fallbacks = contact_limit_fallbacks
-    return original_count - len(self.fallbacks)
-
-  # make sure there are only MAX_FALLBACKS_PER_FAMILY fallbacks per effective
-  # family
-  # if there is no family, allow the fallback
-  # we use effective family, which ensures mutual family declarations
-  # but the check can be gamed by not declaring a family at all
-  # if any indirect families exist, the result depends on the order in which
-  # fallbacks are sorted in the list
-  def limit_fallbacks_same_family(self):
-    family_limit_fallbacks = []
-    fingerprint_list = CandidateList.attribute_new()
-    for f in self.fallbacks:
-      if CandidateList.attribute_allow(f._fpr, fingerprint_list,
-                                       MAX_FALLBACKS_PER_FAMILY):
-        family_limit_fallbacks.append(f)
-        CandidateList.attribute_add(f._fpr, fingerprint_list)
-        for family_fingerprint in f._data['effective_family']:
-          CandidateList.attribute_add(family_fingerprint, fingerprint_list)
-      else:
-        # we already have a fallback with this fallback in its effective
-        # family
-        log_excluded(
-          'Eliminated %s: already have %d fallback(s) in effective family'
-          %(f._fpr, CandidateList.attribute_count(f._fpr, fingerprint_list)))
-    original_count = len(self.fallbacks)
-    self.fallbacks = family_limit_fallbacks
-    return original_count - len(self.fallbacks)
-
-  # try once to get the descriptors for fingerprint_list using stem
-  # returns an empty list on exception
-  @staticmethod
-  def get_fallback_descriptors_once(fingerprint_list):
-    desc_list = get_server_descriptors(fingerprints=fingerprint_list).run(suppress=True)
-    return desc_list
-
-  # try up to max_retries times to get the descriptors for fingerprint_list
-  # using stem. Stops retrying when all descriptors have been retrieved.
-  # returns a list containing the descriptors that were retrieved
-  @staticmethod
-  def get_fallback_descriptors(fingerprint_list, max_retries=5):
-    # we can't use stem's retries=, because we want to support more than 96
-    # descriptors
-    #
-    # add an attempt for every MAX_FINGERPRINTS (or part thereof) in the list
-    max_retries += (len(fingerprint_list) + MAX_FINGERPRINTS - 1) / MAX_FINGERPRINTS
-    remaining_list = fingerprint_list
-    desc_list = []
-    for _ in xrange(max_retries):
-      if len(remaining_list) == 0:
-        break
-      new_desc_list = CandidateList.get_fallback_descriptors_once(remaining_list[0:MAX_FINGERPRINTS])
-      for d in new_desc_list:
-        try:
-          remaining_list.remove(d.fingerprint)
-        except ValueError:
-          # warn and ignore if a directory mirror returned a bad descriptor
-          logging.warning("Directory mirror returned unwanted descriptor %s, ignoring",
-                          d.fingerprint)
-          continue
-        desc_list.append(d)
-    return desc_list
-
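The retry arithmetic works out like this (illustrative; the comment above
suggests a MAX_FINGERPRINTS batch size of 96):

    MAX_FINGERPRINTS = 96     # assumed batch size
    base_retries = 5
    fingerprints = 250
    extra = (fingerprints + MAX_FINGERPRINTS - 1) / MAX_FINGERPRINTS  # 3 batches
    max_retries = base_retries + extra                                # 8 attempts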
-  # find the fallbacks that cache extra-info documents
-  # Onionoo doesn't know this, so we have to use stem
-  def mark_extra_info_caches(self):
-    fingerprint_list = [ f._fpr for f in self.fallbacks ]
-    logging.info("Downloading fallback descriptors to find extra-info caches")
-    desc_list = CandidateList.get_fallback_descriptors(fingerprint_list)
-    for d in desc_list:
-      self[d.fingerprint]._extra_info_cache = d.extra_info_cache
-    missing_descriptor_list = [ f._fpr for f in self.fallbacks
-                                if f._extra_info_cache is None ]
-    for f in missing_descriptor_list:
-      logging.warning("No descriptor for {}. Assuming extrainfo=0.".format(f))
-
-  # try a download check on each fallback candidate in order
-  # stop after max_count successful downloads
-  # but don't remove any candidates from the array
-  def try_download_consensus_checks(self, max_count):
-    dl_ok_count = 0
-    for f in self.fallbacks:
-      f.try_fallback_download_consensus()
-      if f.get_fallback_download_consensus():
-        # this fallback downloaded a consensus ok
-        dl_ok_count += 1
-        if dl_ok_count >= max_count:
-          # we have enough fallbacks
-          return
-
-  # put max_count successful candidates in the fallbacks array:
-  # - perform download checks on each fallback candidate
-  # - retry failed candidates if CONSENSUS_DOWNLOAD_RETRY is set
-  # - eliminate failed candidates
-  # - if there are more than max_count candidates, eliminate lowest bandwidth
-  # - if there are fewer than max_count candidates, leave only successful
-  # Return the number of fallbacks that failed the consensus check
-  def perform_download_consensus_checks(self, max_count):
-    self.sort_fallbacks_by_measured_bandwidth()
-    self.try_download_consensus_checks(max_count)
-    if CONSENSUS_DOWNLOAD_RETRY:
-      # try unsuccessful candidates again
-      # we could end up with more than max_count successful candidates here
-      self.try_download_consensus_checks(max_count)
-    # now we have at least max_count successful candidates,
-    # or we've tried them all
-    original_count = len(self.fallbacks)
-    self.fallbacks = filter(lambda x: x.get_fallback_download_consensus(),
-                            self.fallbacks)
-    # some of these failed the check; others skipped it because we already
-    # had enough successful downloads
-    failed_count = original_count - len(self.fallbacks)
-    self.fallbacks = self.fallbacks[:max_count]
-    return failed_count
-
-  # return a string that describes a/b as a percentage
-  @staticmethod
-  def describe_percentage(a, b):
-    if b != 0:
-      return '%d/%d = %.0f%%'%(a, b, (a*100.0)/b)
-    else:
-      # technically, 0/0 is undefined, but 0.0% is a sensible result
-      return '%d/%d = %.0f%%'%(a, b, 0.0)
-
-  # return a dictionary of lists of fallbacks by IPv4 netblock
-  # the dictionary is keyed by the fingerprint of an arbitrary fallback
-  # in each netblock
-  # mask_bits is the size of the netblock
-  def fallbacks_by_ipv4_netblock(self, mask_bits):
-    netblocks = {}
-    for f in self.fallbacks:
-      found_netblock = False
-      for b in netblocks.keys():
-        # we found an existing netblock containing this fallback
-        if f.ipv4_netblocks_equal(self[b], mask_bits):
-          # add it to the list
-          netblocks[b].append(f)
-          found_netblock = True
-          break
-      # make a new netblock based on this fallback's fingerprint
-      if not found_netblock:
-        netblocks[f._fpr] = [f]
-    return netblocks
-
-  # return a dictionary of lists of fallbacks by IPv6 netblock
-  # where mask_bits is the size of the netblock
-  def fallbacks_by_ipv6_netblock(self, mask_bits):
-    netblocks = {}
-    for f in self.fallbacks:
-      # skip fallbacks without IPv6 addresses
-      if not f.has_ipv6():
-        continue
-      found_netblock = False
-      for b in netblocks.keys():
-        # we found an existing netblock containing this fallback
-        if f.ipv6_netblocks_equal(self[b], mask_bits):
-          # add it to the list
-          netblocks[b].append(f)
-          found_netblock = True
-          break
-      # make a new netblock based on this fallback's fingerprint
-      if not found_netblock:
-        netblocks[f._fpr] = [f]
-    return netblocks
-
-  # log a message about the proportion of fallbacks in each IPv4 netblock,
-  # where mask_bits is the size of the netblock
-  def describe_fallback_ipv4_netblock_mask(self, mask_bits):
-    fallback_count = len(self.fallbacks)
-    shared_netblock_fallback_count = 0
-    most_frequent_netblock = None
-    netblocks = self.fallbacks_by_ipv4_netblock(mask_bits)
-    for b in netblocks.keys():
-      if len(netblocks[b]) > 1:
-        # how many fallbacks are in a netblock with other fallbacks?
-        shared_netblock_fallback_count += len(netblocks[b])
-        # what's the netblock with the most fallbacks?
-        if (most_frequent_netblock is None
-            or len(netblocks[b]) > len(netblocks[most_frequent_netblock])):
-          most_frequent_netblock = b
-        logging.debug('Fallback IPv4 addresses in the same /%d:'%(mask_bits))
-        for f in netblocks[b]:
-          logging.debug('%s - %s', f.dirip, f._fpr)
-    if most_frequent_netblock is not None:
-      logging.warning('There are %s fallbacks in the IPv4 /%d containing %s'%(
-                                    CandidateList.describe_percentage(
-                                      len(netblocks[most_frequent_netblock]),
-                                      fallback_count),
-                                    mask_bits,
-                                    self[most_frequent_netblock].dirip))
-    if shared_netblock_fallback_count > 0:
-      logging.warning(('%s of fallbacks are in an IPv4 /%d with other ' +
-                       'fallbacks')%(CandidateList.describe_percentage(
-                                                shared_netblock_fallback_count,
-                                                fallback_count),
-                                     mask_bits))
-
-  # log a message about the proportion of fallbacks in each IPv6 netblock,
-  # where mask_bits is the size of the netblock
-  def describe_fallback_ipv6_netblock_mask(self, mask_bits):
-    fallback_count = len(self.fallbacks_with_ipv6())
-    shared_netblock_fallback_count = 0
-    most_frequent_netblock = None
-    netblocks = self.fallbacks_by_ipv6_netblock(mask_bits)
-    for b in netblocks.keys():
-      if len(netblocks[b]) > 1:
-        # how many fallbacks are in a netblock with other fallbacks?
-        shared_netblock_fallback_count += len(netblocks[b])
-        # what's the netblock with the most fallbacks?
-        if (most_frequent_netblock is None
-            or len(netblocks[b]) > len(netblocks[most_frequent_netblock])):
-          most_frequent_netblock = b
-        logging.debug('Fallback IPv6 addresses in the same /%d:'%(mask_bits))
-        for f in netblocks[b]:
-          logging.debug('%s - %s', f.ipv6addr, f._fpr)
-    if most_frequent_netblock is not None:
-      logging.warning('There are %s fallbacks in the IPv6 /%d containing %s'%(
-                                    CandidateList.describe_percentage(
-                                      len(netblocks[most_frequent_netblock]),
-                                      fallback_count),
-                                    mask_bits,
-                                    self[most_frequent_netblock].ipv6addr))
-    if shared_netblock_fallback_count > 0:
-      logging.warning(('%s of fallbacks are in an IPv6 /%d with other ' +
-                       'fallbacks')%(CandidateList.describe_percentage(
-                                                shared_netblock_fallback_count,
-                                                fallback_count),
-                                     mask_bits))
-
-  # log a message about the proportion of fallbacks in each IPv4 /8, /16,
-  # and /24
-  def describe_fallback_ipv4_netblocks(self):
-    # this doesn't actually tell us anything useful
-    #self.describe_fallback_ipv4_netblock_mask(8)
-    self.describe_fallback_ipv4_netblock_mask(16)
-    #self.describe_fallback_ipv4_netblock_mask(24)
-
-  # log a message about the proportion of fallbacks in each IPv6 /12 (RIR),
-  # /23 (smaller RIR blocks), /32 (LIR), /48 (Customer), and /64 (Host)
-  # https://www.iana.org/assignments/ipv6-unicast-address-assignments/
-  def describe_fallback_ipv6_netblocks(self):
-    # these don't actually tell us anything useful
-    #self.describe_fallback_ipv6_netblock_mask(12)
-    #self.describe_fallback_ipv6_netblock_mask(23)
-    self.describe_fallback_ipv6_netblock_mask(32)
-    #self.describe_fallback_ipv6_netblock_mask(48)
-    self.describe_fallback_ipv6_netblock_mask(64)
-
-  # log a message about the proportion of fallbacks in each IPv4 and IPv6
-  # netblock
-  def describe_fallback_netblocks(self):
-    self.describe_fallback_ipv4_netblocks()
-    self.describe_fallback_ipv6_netblocks()
-
-  # return a list of fallbacks which are on the IPv4 ORPort port
-  def fallbacks_on_ipv4_orport(self, port):
-    return filter(lambda x: x.orport == port, self.fallbacks)
-
-  # return a list of fallbacks which are on the IPv6 ORPort port
-  def fallbacks_on_ipv6_orport(self, port):
-    return filter(lambda x: x.ipv6orport == port, self.fallbacks_with_ipv6())
-
-  # return a list of fallbacks which are on the DirPort port
-  def fallbacks_on_dirport(self, port):
-    return filter(lambda x: x.dirport == port, self.fallbacks)
-
-  # log a message about the proportion of fallbacks on IPv4 ORPort port
-  # and return that count
-  def describe_fallback_ipv4_orport(self, port):
-    port_count = len(self.fallbacks_on_ipv4_orport(port))
-    fallback_count = len(self.fallbacks)
-    logging.warning('%s of fallbacks are on IPv4 ORPort %d'%(
-                    CandidateList.describe_percentage(port_count,
-                                                      fallback_count),
-                    port))
-    return port_count
-
-  # log a message about the proportion of IPv6 fallbacks on IPv6 ORPort port
-  # and return that count
-  def describe_fallback_ipv6_orport(self, port):
-    port_count = len(self.fallbacks_on_ipv6_orport(port))
-    fallback_count = len(self.fallbacks_with_ipv6())
-    logging.warning('%s of IPv6 fallbacks are on IPv6 ORPort %d'%(
-                    CandidateList.describe_percentage(port_count,
-                                                      fallback_count),
-                    port))
-    return port_count
-
-  # log a message about the proportion of fallbacks on DirPort port
-  # and return that count
-  def describe_fallback_dirport(self, port):
-    port_count = len(self.fallbacks_on_dirport(port))
-    fallback_count = len(self.fallbacks)
-    logging.warning('%s of fallbacks are on DirPort %d'%(
-                    CandidateList.describe_percentage(port_count,
-                                                      fallback_count),
-                    port))
-    return port_count
-
-  # log a message about the proportion of fallbacks on each dirport,
-  # each IPv4 orport, and each IPv6 orport
-  def describe_fallback_ports(self):
-    fallback_count = len(self.fallbacks)
-    ipv4_or_count = fallback_count
-    ipv4_or_count -= self.describe_fallback_ipv4_orport(443)
-    ipv4_or_count -= self.describe_fallback_ipv4_orport(9001)
-    logging.warning('%s of fallbacks are on other IPv4 ORPorts'%(
-                    CandidateList.describe_percentage(ipv4_or_count,
-                                                      fallback_count)))
-    ipv6_fallback_count = len(self.fallbacks_with_ipv6())
-    ipv6_or_count = ipv6_fallback_count
-    ipv6_or_count -= self.describe_fallback_ipv6_orport(443)
-    ipv6_or_count -= self.describe_fallback_ipv6_orport(9001)
-    logging.warning('%s of IPv6 fallbacks are on other IPv6 ORPorts'%(
-                    CandidateList.describe_percentage(ipv6_or_count,
-                                                      ipv6_fallback_count)))
-    dir_count = fallback_count
-    dir_count -= self.describe_fallback_dirport(80)
-    dir_count -= self.describe_fallback_dirport(9030)
-    logging.warning('%s of fallbacks are on other DirPorts'%(
-                    CandidateList.describe_percentage(dir_count,
-                                                      fallback_count)))
-
-  # return a list of fallbacks which cache extra-info documents
-  def fallbacks_with_extra_info_cache(self):
-    return filter(lambda x: x._extra_info_cache, self.fallbacks)
-
-  # log a message about the proportion of fallbacks that cache extra-info docs
-  def describe_fallback_extra_info_caches(self):
-    extra_info_fallback_count = len(self.fallbacks_with_extra_info_cache())
-    fallback_count = len(self.fallbacks)
-    logging.warning('%s of fallbacks cache extra-info documents'%(
-                    CandidateList.describe_percentage(extra_info_fallback_count,
-                                                      fallback_count)))
-
-  # return a list of fallbacks which have the Exit flag
-  def fallbacks_with_exit(self):
-    return filter(lambda x: x.is_exit(), self.fallbacks)
-
-  # log a message about the proportion of fallbacks with an Exit flag
-  def describe_fallback_exit_flag(self):
-    exit_fallback_count = len(self.fallbacks_with_exit())
-    fallback_count = len(self.fallbacks)
-    logging.warning('%s of fallbacks have the Exit flag'%(
-                    CandidateList.describe_percentage(exit_fallback_count,
-                                                      fallback_count)))
-
-  # return a list of fallbacks which have an IPv6 address
-  def fallbacks_with_ipv6(self):
-    return filter(lambda x: x.has_ipv6(), self.fallbacks)
-
-  # log a message about the proportion of fallbacks on IPv6
-  def describe_fallback_ip_family(self):
-    ipv6_fallback_count = len(self.fallbacks_with_ipv6())
-    fallback_count = len(self.fallbacks)
-    logging.warning('%s of fallbacks are on IPv6'%(
-                    CandidateList.describe_percentage(ipv6_fallback_count,
-                                                      fallback_count)))
-
-  def summarise_fallbacks(self, eligible_count, operator_count, failed_count,
-                          guard_count, target_count, check_existing):
-    s = ''
-    # Report:
-    #  whether we checked consensus download times
-    #  the number of fallback directories (and limits/exclusions, if relevant)
-    #  min & max fallback bandwidths
-    #  #error if below minimum count
-    if PERFORM_IPV4_DIRPORT_CHECKS or PERFORM_IPV6_DIRPORT_CHECKS:
-      s += '/* Checked %s%s%s DirPorts served a consensus within %.1fs. */'%(
-            'IPv4' if PERFORM_IPV4_DIRPORT_CHECKS else '',
-            ' and ' if (PERFORM_IPV4_DIRPORT_CHECKS
-                        and PERFORM_IPV6_DIRPORT_CHECKS) else '',
-            'IPv6' if PERFORM_IPV6_DIRPORT_CHECKS else '',
-            CONSENSUS_DOWNLOAD_SPEED_MAX)
-    else:
-      s += '/* Did not check IPv4 or IPv6 DirPort consensus downloads. */'
-    s += '\n'
-    # Multiline C comment with #error if things go bad
-    s += '/*'
-    s += '\n'
-    # Integers don't need escaping in C comments
-    fallback_count = len(self.fallbacks)
-    if FALLBACK_PROPORTION_OF_GUARDS is None:
-      fallback_proportion = ''
-    else:
-      fallback_proportion = ', Target %d (%d * %.2f)'%(target_count,
-                                                guard_count,
-                                                FALLBACK_PROPORTION_OF_GUARDS)
-    s += 'Final Count: %d (Eligible %d%s'%(fallback_count, eligible_count,
-                                           fallback_proportion)
-    if MAX_FALLBACK_COUNT is not None:
-      s += ', Max %d'%(MAX_FALLBACK_COUNT)
-    s += ')\n'
-    if eligible_count != fallback_count:
-      removed_count = eligible_count - fallback_count
-      excess_to_target_or_max = (eligible_count - operator_count - failed_count
-                                 - fallback_count)
-      # some relays 'Failed' the check; others 'Skipped' it because we
-      # already had enough successful downloads
-      s += ('Excluded: %d (Same Operator %d, Failed/Skipped Download %d, ' +
-            'Excess %d)')%(removed_count, operator_count, failed_count,
-                           excess_to_target_or_max)
-      s += '\n'
-    min_fb = self.fallback_min()
-    min_bw = min_fb._data['measured_bandwidth']
-    max_fb = self.fallback_max()
-    max_bw = max_fb._data['measured_bandwidth']
-    s += 'Bandwidth Range: %.1f - %.1f MByte/s'%(min_bw/(1024.0*1024.0),
-                                                 max_bw/(1024.0*1024.0))
-    s += '\n'
-    s += '*/'
-    if fallback_count < MIN_FALLBACK_COUNT:
-      list_type = 'whitelist'
-      if check_existing:
-          list_type = 'fallback list'
-      # We must have a minimum number of fallbacks so they are always
-      # reachable, and are in diverse locations
-      s += '\n'
-      s += '#error Fallback Count %d is too low. '%(fallback_count)
-      s += 'Must be at least %d for diversity. '%(MIN_FALLBACK_COUNT)
-      s += 'Try adding entries to %s, '%(list_type)
-      s += 'or setting INCLUDE_UNLISTED_ENTRIES = True.'
-    return s
-
-def process_existing():
-  logging.basicConfig(level=logging.INFO)
-  logging.getLogger('stem').setLevel(logging.INFO)
-  whitelist = {'data': parse_fallback_file(FALLBACK_FILE_NAME),
-               'name': FALLBACK_FILE_NAME,
-               'check_existing' : True}
-  list_fallbacks(whitelist, exact=True)
-
-def process_default():
-  logging.basicConfig(level=logging.WARNING)
-  logging.getLogger('stem').setLevel(logging.WARNING)
-  whitelist = {'data': read_from_file(WHITELIST_FILE_NAME, MAX_LIST_FILE_SIZE),
-               'name': WHITELIST_FILE_NAME,
-               'check_existing': False}
-  list_fallbacks(whitelist, exact=False)
-
-## Main Function
-def main():
-  if get_command() == 'check_existing':
-    process_existing()
-  else:
-    process_default()
-
-def get_command():
-  if len(sys.argv) == 2:
-    return sys.argv[1]
-  else:
-    return None
-
-def log_excluded(msg, *args):
-  if get_command() == 'check_existing':
-    logging.warning(msg, *args)
-  else:
-    logging.info(msg, *args)
-
-def list_fallbacks(whitelist, exact=False):
-  """ Fetches required onionoo documents and evaluates the
-      fallback directory criteria for each of the relays,
-      passing exact to apply_filter_lists(). """
-  if whitelist['check_existing']:
-      print "/* type=fallback */"
-  else:
-      print "/* type=whitelist */"
-
-  print ("/* version={} */"
-         .format(cleanse_c_multiline_comment(FALLBACK_FORMAT_VERSION)))
-  now = datetime.datetime.utcnow()
-  timestamp = now.strftime('%Y%m%d%H%M%S')
-  print ("/* timestamp={} */"
-         .format(cleanse_c_multiline_comment(timestamp)))
-  # end the header with a separator, to make it easier for parsers
-  print SECTION_SEPARATOR_COMMENT
-
-  logging.warning('Downloading and parsing Onionoo data. ' +
-                  'This may take some time.')
-  # find relays that could be fallbacks
-  candidates = CandidateList()
-  candidates.add_relays()
-
-  # work out how many fallbacks we want
-  guard_count = candidates.count_guards()
-  if FALLBACK_PROPORTION_OF_GUARDS is None:
-    target_count = guard_count
-  else:
-    target_count = int(guard_count * FALLBACK_PROPORTION_OF_GUARDS)
-  # the maximum number of fallbacks is the least of:
-  # - the target fallback count (FALLBACK_PROPORTION_OF_GUARDS * guard count)
-  # - the maximum fallback count (MAX_FALLBACK_COUNT)
-  if MAX_FALLBACK_COUNT is None:
-    max_count = target_count
-  else:
-    max_count = min(target_count, MAX_FALLBACK_COUNT)
-
-  candidates.compute_fallbacks()
-  prefilter_fallbacks = copy.copy(candidates.fallbacks)
-
-  # filter with the whitelist
-  # if a relay has changed IPv4 address or ports recently, it will be excluded
-  # as ineligible before we call apply_filter_lists, and so there will be no
-  # warning that the details have changed from those in the whitelist.
-  # instead, there will be an info-level log during the eligibility check.
-  initial_count = len(candidates.fallbacks)
-  excluded_count = candidates.apply_filter_lists(whitelist, exact=exact)
-  print candidates.summarise_filters(initial_count, excluded_count,
-          whitelist['check_existing'])
-  eligible_count = len(candidates.fallbacks)
-
-  # calculate the measured bandwidth of each relay,
-  # then remove low-bandwidth relays
-  candidates.calculate_measured_bandwidth()
-  candidates.remove_low_bandwidth_relays()
-
-  # print the raw fallback list
-  #for x in candidates.fallbacks:
-  #  print x.fallbackdir_line(True)
-  #  print json.dumps(candidates[x]._data, sort_keys=True, indent=4,
-  #                   separators=(',', ': '), default=json_util.default)
-
-  # impose mandatory conditions here, like one per contact, family, IP
-  # in measured bandwidth order
-  candidates.sort_fallbacks_by_measured_bandwidth()
-  operator_count = 0
-  # only impose these limits on the final list - operators can nominate
-  # multiple candidate fallbacks, and then we choose the best set
-  if not OUTPUT_CANDIDATES:
-    operator_count += candidates.limit_fallbacks_same_ip()
-    operator_count += candidates.limit_fallbacks_same_contact()
-    operator_count += candidates.limit_fallbacks_same_family()
-
-  # check if each candidate can serve a consensus
-  # there's a small risk we've eliminated relays from the same operator that
-  # can serve a consensus, in favour of one that can't
-  # but given it takes up to 15 seconds to check each consensus download,
-  # the risk is worth it
-  if PERFORM_IPV4_DIRPORT_CHECKS or PERFORM_IPV6_DIRPORT_CHECKS:
-    logging.warning('Checking consensus download speeds. ' +
-                    'This may take some time.')
-  failed_count = candidates.perform_download_consensus_checks(max_count)
-
-  # work out which fallbacks cache extra-infos
-  candidates.mark_extra_info_caches()
-
-  # analyse and log interesting diversity metrics
-  # like netblock, ports, exit, IPv4-only
-  # (we can't easily analyse AS, and it's hard to accurately analyse country)
-  candidates.describe_fallback_ip_family()
-  # if we can't import the ipaddress module, we can't do netblock analysis
-  if HAVE_IPADDRESS:
-    candidates.describe_fallback_netblocks()
-  candidates.describe_fallback_ports()
-  candidates.describe_fallback_extra_info_caches()
-  candidates.describe_fallback_exit_flag()
-
-  # output C comments summarising the fallback selection process
-  if len(candidates.fallbacks) > 0:
-    print candidates.summarise_fallbacks(eligible_count, operator_count,
-                                         failed_count, guard_count,
-                                         target_count,
-                                         whitelist['check_existing'])
-  else:
-    print '/* No Fallbacks met criteria */'
-
-  # output C comments specifying the OnionOO data used to create the list
-  for s in fetch_source_list():
-    print describe_fetch_source(s)
-
-  # start the list with a separator, to make it easy for parsers
-  print SECTION_SEPARATOR_COMMENT
-
-  # sort the list differently depending on why we've created it:
-  # if we're outputting the final fallback list, sort by fingerprint
-  # this makes diffs much more stable
-  # otherwise, if we're trying to find a bandwidth cutoff, or we want to
-  # contact operators in priority order, sort by bandwidth (not yet
-  # implemented)
-  # otherwise, if we're contacting operators, sort by contact
-  candidates.sort_fallbacks_by(OUTPUT_SORT_FIELD)
-
-  for x in candidates.fallbacks:
-    print x.fallbackdir_line(candidates.fallbacks, prefilter_fallbacks)
-
-if __name__ == "__main__":
-  main()

_______________________________________________
tor-commits mailing list
tor-commits@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-commits