Re: [tor-dev] Two protocols to measure relay-sensitive hidden-service statistics
>> AnonStats1 doesn't leak the relay identity. The relay probability is sent over a separate circuit (at a random time). I intentionally did that just to avoid the problem you describe.
>>
>
> Ah, I see, that makes sense.
>
> Some more notes from reading AnonStats1 then:
>
> a) How do relays get more tokens when they deplete the initial 2k
> tokens? Is it easy for the StatAuth to generate 2k such tokens, or
> can relays DoS them by asking for tokens repeatedly?
New tokens are issued for each measurement period (e.g. every 24 hours). The relay should be limited to asking for its allotment once per period.
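A minimal sketch of how the StatAuth side might enforce that limit (the function names, the 2000-token allotment, and the blind_sign helper are illustrative assumptions, not part of the proposal):

    import time

    TOKENS_PER_PERIOD = 2000          # illustrative allotment size
    PERIOD_SECONDS = 24 * 60 * 60     # one measurement period (24 hours)

    last_issued = {}  # relay fingerprint -> period index of last allotment

    def issue_allotment(relay_fp, blinded_tokens, blind_sign):
        # blind_sign is a hypothetical stand-in for the StatAuth's
        # blind-signing operation; for RSA-style blind signatures this is
        # one modular exponentiation per token, so issuing an allotment is
        # cheap and hard to use as a DoS vector.
        period = int(time.time()) // PERIOD_SECONDS
        if last_issued.get(relay_fp) == period:
            raise PermissionError("allotment already issued this period")
        if len(blinded_tokens) > TOKENS_PER_PERIOD:
            raise ValueError("request exceeds the per-period allotment")
        last_issued[relay_fp] = period
        return [blind_sign(b) for b in blinded_tokens]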
> b) It seems a bit weird to assume that all relay operators are good
> citizens, but still not trust the rest of the Internet at all
> (that's why we are doing the blind signature scheme, right?).
It doesn't seem that weird to me. Running a relay requires some level of effort.
> If an outside attacker wanted to influence the results, he could
> still sign up 10 relays on the network, get the blind signature
> tokens, and have them publish anonymized bad statistics, right?
Right.
> That's because the highest counts of both statistics will likely
> correspond to the HSDirs and IPs of the most popular hidden service of
> the network, if the most popular HS has a large user count difference
> from the least popular ones.
I am beginning to think that AnonStats2 is not secure enough to use. The consensus-weight bins were supposed to hide exactly which relays were reporting the statistics, but because the bins of HSDirs and IPs change over time, the adversary could watch the HSDir/IP bins of a target HS to see if the stats tend to run larger or smaller than their average. This remains the case even if only one bin of relays is allowed to report stats, as long as that bin does not include all of the HSDirs/IPs that HSes might use.
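As a toy illustration of the concern (all numbers invented), an adversary could record the total reported by the target's bin each period and test whether it runs above its long-term average:

    import random

    def bin_total(target_active):
        # base load of the bin plus noise, plus extra descriptor fetches
        # when the target HS's HSDirs/IPs fall in this bin and it is active
        return 10000 + random.gauss(0, 50) + (300 if target_active else 0)

    baseline = [bin_total(False) for _ in range(100)]
    average = sum(baseline) / len(baseline)
    above = sum(bin_total(True) > average for _ in range(100))
    print(above, "of 100 periods above the baseline average")  # skews far above 50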
Also, in AnonStats1 we should maybe require that counts are reported in constant-size chunks over separate circuits. For example, we could have every 100 unique HS descriptors sent in a different upload. This way a particularly large statistic wouldn't identify a particularly large HSDir/IP (otherwise, if that stat rose above its normally-large value, the difference could reveal the popularity of a target popular HS).
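A rough sketch of what that could look like on the relay side, assuming a hypothetical upload_over_fresh_circuit helper that sends one report over a new circuit:

    CHUNK = 100  # unique HS descriptors per upload, as in the example above

    def report(count, upload_over_fresh_circuit):
        # Round the total up to a multiple of CHUNK so every upload is
        # identical in size and content; each one goes over its own circuit
        # (ideally at a random time), so no single upload marks the reporter
        # as a particularly busy HSDir/IP.
        n_uploads = (count + CHUNK - 1) // CHUNK
        for _ in range(n_uploads):
            upload_over_fresh_circuit({"stat": "hsdesc-count", "value": CHUNK})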
Best,
Aaron