Aaron Johnson transcribed 2.1K bytes:
> > > That seems easy to fix. Make the number of Introduction Points the same
> > > as it was before and make them be selected in a bandwidth-weighted
> > > way. There is no cost to this. You need IPs to be online, and so
> > > whatever number was used in the past will yield the same availability
> > > now. And bandwidth-weighting should actually improve both performance
> > > and security.
> >
> > Is it obvious how to build a bandwidth-weighted DHT that is stable
> > across changes in the consensus? One advantage of using the hash ring
> > is that the loss of an HSDir causes only local changes in the
> > topology, so if the client is using a different version of the
> > consensus they can still locate one of the responsible HSDirs. (Note:
> > I do not claim this cannot be done; it just seems like an important
> > detail to sort out…)
>
> You could reserve a space after each relay in the hash ring with a length
> equal to the relay's bandwidth, and then assign an onion service with a hash
> that is a fraction f of the maximum possible hash value to the relay owning
> the space in which the fraction f of the total hash-ring length is
> located. Removing or adding a relay adjusts the onion service locations by
> an amount that is at most that relay's fraction of the total bandwidth. To
> ensure coverage for clients with older consensuses, the relay
> can maintain HSDir+IPs at the locations indicated by both the current and
> previous consensus.
At one point, I wanted to do something like this for BridgeDB's hashrings, to
be able to increase the probability that Bridges with higher bandwidth would
be distributed (assuming we live in a glorious future where Bridges are
actually measured). The above design isn't the most time efficient, and it
also turned out to be *super* unfun to implement/debug. For HS reliability,
it could be a bit disastrous, depending on how much "shifting" happens between
consensuses, and (at least, for BridgeDB's case) my testing showed that even a
small differential meant that nearly the entire hashring would be in an
unusable state.
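For concreteness, the quoted design boils down to something like the
following sketch (all of the names here are mine, and `relays` is just a
list of (fingerprint, bandwidth) pairs, not a real consensus structure):

import bisect

def weighted_ring(relays):
    # Lay each relay's reserved space end-to-end around the ring,
    # proportional to its bandwidth, recording each space's right edge
    # as a fraction of the total ring length.
    total = float(sum(bw for _, bw in relays))
    edges, owners = [], []
    acc = 0.0
    for fpr, bw in relays:
        acc += bw / total
        edges.append(acc)
        owners.append(fpr)
    return edges, owners

def owner_at(edges, owners, f):
    # An onion service whose hash is the fraction f of the maximum
    # possible hash value belongs to the relay whose space contains f.
    return owners[min(bisect.bisect_left(edges, f), len(owners) - 1)]

Note that `total` is renormalised whenever any relay joins or leaves, so
*every* edge moves a little with each new consensus; that is exactly the
cascade that left my test hashrings unusable.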
A better algorithm would be a Consistent Hashring, modified to dynamically
allocate replications in proportion to fraction of total bandwidth weight. As
with a normal Consistent Hashring, the replication count determines the number of times the
relay is uniformly inserted into the hashring. The algorithm goes like this:
bw_total ← Σ BW, ∀ DESC ∈ CONSENSUS {BW: DESC → BW}
router ← ∅
replications ← ∅
key ← ∅
for router ∈ CONSENSUS:
|  replications ← FLOOR(CONSENSUS_WEIGHT_FRACTION(BW, bw_total) * T)
|  while replications != 0:
|  |  key ← HMAC(CONCATENATE(FPR, replications))[:X]
|  |  INSERT(key, router)
|  |  replications ← replications - 1
where:
* BW is the router's `w bandwidth=` weight line from the consensus,
* DESC is a descriptor in the CONSENSUS,
* CONSENSUS_WEIGHT_FRACTION is a function for computing a router's consensus
  weight as a fraction of the sum of all consensus weights (bw_total),
* T is some arbitrary number for translating a router's consensus weight
fraction into the number of replications,
* HMAC is a keyed hashing function,
* FPR is a hexadecimal string containing the hash of the router's public
identity key,
* X is some arbitrary number of bytes to truncate an HMAC to, and
* INSERT is an algorithm for inserting items into the hashring.
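In case the notation is unclear, a direct Python rendering of the above
might look like this (I've picked SHA-256 as the HMAC digest and a plain
dict as the INSERT target; both are arbitrary stand-ins, not fixed by the
algorithm):

import hashlib
import hmac

def build_hashring(consensus, hmac_key, T=1000, X=8):
    # `consensus` maps each router's hex fingerprint (FPR) to its
    # consensus bandwidth weight (BW), per the definitions above.
    bw_total = float(sum(consensus.values()))
    ring = {}  # truncated HMAC -> router fingerprint
    for fpr, bw in consensus.items():
        # FLOOR(CONSENSUS_WEIGHT_FRACTION(BW, bw_total) * T)
        replications = int((bw / bw_total) * T)
        while replications != 0:
            mac = hmac.new(hmac_key, (fpr + str(replications)).encode(),
                           hashlib.sha256).digest()
            ring[mac[:X]] = fpr  # INSERT(key, router)
            replications -= 1
    return ring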
For routers A and B, where B has a little bit more bandwidth than A, this gets
you a hashring which looks like this:
    B-¯¯¯`-BA
  A,`       `.
  /           \
 B|           |B
  \           /
   `.       ,´A
    AB--__--´B
When B disappears, A remains in the same positions:
    _-¯¯¯`-_A
  A,`       `.
  /           \
  |           |
  \           /
   `.       ,´A
    A`--__--´
And similarly if A disappears:
    B-¯¯¯`-B
   ,`       `.
  /           \
 B|           |B
  \           /
   `.       ,´
    B--__--´B
So no more "shifting" problem. It also makes recalculating the hashring when
a new consensus arrives much more time efficient.
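(Lookup is then the usual consistent-hashing successor walk; the helper
below is illustrative, not something from the BridgeDB code linked
underneath:)

import bisect

def responsible_router(ring, hs_point):
    # Walk clockwise from the onion service's point to the first
    # occupied position, wrapping around the end of the ring.
    positions = sorted(ring)
    i = bisect.bisect_left(positions, hs_point)
    return ring[positions[i % len(positions)]]

Because A's positions never depended on B's presence, a client with a
slightly older consensus still finds A exactly where it expects; only the
points that used to map to B walk forward to a different successor.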
If you want to play with it, I've implemented it in Python for BridgeDB:
https://gitweb.torproject.org/user/isis/bridgedb.git/tree/bridgedb/hashring.py?id=949d33e8#n481
One tiny caveat is that the ConsistentHashring class doesn't dynamically
assign the replication count by bandwidth weight (still waiting for that
glorious future…); it gets initialised with a fixed number of replications.
However, nothing in the current implementation prevents you from doing:
>>> h = ConsistentHashring('SuperSecureKey', replications=6)
>>> h.insert(A)
>>> h.replications = 23
>>> h.insert(B)
>>> h.replications = 42
>>> h.insert(C)
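…which is to say, a bandwidth-aware version is just a thin wrapper around
the existing class (here CONSENSUS_WEIGHT_FRACTION, bw_total, T, and the
.bandwidth attribute are the stand-ins from the pseudocode above, not
attributes BridgeDB's router objects actually have):

>>> h = ConsistentHashring('SuperSecureKey', replications=1)
>>> for router in (A, B, C):
...     h.replications = int(CONSENSUS_WEIGHT_FRACTION(router.bandwidth, bw_total) * T)
...     h.insert(router)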
Best Regards,
--
♥Ⓐ isis agora lovecruft
_________________________________________________________
OpenPGP: 4096R/0A6A58A14B5946ABDE18E207A3ADB67A2CDB8B35
Current Keys: https://blog.patternsinthevoid.net/isis.txt