[tor-commits] [research-web/master] publish the request and response for 2017-03
commit 0af0f5936711be37a31c56331bdaf66c313bf5ce
Author: Roger Dingledine <arma@xxxxxxxxxxxxxx>
Date: Wed Nov 1 02:16:08 2017 -0400
publish the request and response for 2017-03
---
htdocs/safetyboard.html | 16 ++++-
htdocs/trsb/2017-03-request.txt | 124 +++++++++++++++++++++++++++++++++++++++
htdocs/trsb/2017-03-response.txt | 118 +++++++++++++++++++++++++++++++++++++
3 files changed, 257 insertions(+), 1 deletion(-)
diff --git a/htdocs/safetyboard.html b/htdocs/safetyboard.html
index 1058533..7a53bed 100644
--- a/htdocs/safetyboard.html
+++ b/htdocs/safetyboard.html
@@ -198,10 +198,24 @@ public first.
<p>
2017-03: Running middle relays to measure onion service popularity
<ul>
-<li>[still anonymized during paper review]
+<li><a href="trsb/2017-03-request.txt">request</a>
+<li><a href="trsb/2017-03-response.txt">response</a>
<li><a href="https://onionpop.github.io/">research project page</a>
+<li>[Accepted paper forthcoming at NDSS 2018]
</ul>
+<p>
+2017-04: [under review]
+</p>
+
+<p>
+2017-05: [under review]
+</p>
+
+<p>
+2017-06: [under review]
+</p>
+
<hr>
<a id="who"></a>
<h3><a class="anchor" href="#who">Who is on the Board?</a></h3>
diff --git a/htdocs/trsb/2017-03-request.txt b/htdocs/trsb/2017-03-request.txt
new file mode 100644
index 0000000..1cafc20
--- /dev/null
+++ b/htdocs/trsb/2017-03-request.txt
@@ -0,0 +1,124 @@
+From: Rob Jansen <rob.g.jansen@xxxxxxxxxxxx>
+Subject: Request for feedback on measuring popularity of Facebook onion site front page
+Date: Sun, 2 Jul 2017
+
+# Overview
+
+We have been working on exploring the website fingerprinting problem in
+Tor. In website fingerprinting, either a client's guard or someone that
+can observe the link between the client and its guard is adversarial and
+attempts to link the client to its destination. This linking is attempted
+by first crawling common destinations and gathering a dataset of webpage
+features, and then training a classifier to recognize those features,
+and finally using the trained classifier to guess the destination to
+which an observed traffic stream connected.
+
+A common assumption in research papers that explore these attacks is
+that the adversary controls the guard or client-to-guard link. We are
+attempting to understand how effective fingerprinting would be for a
+weaker node adversary that runs only middle nodes, and who focuses on
+onion service websites. This involves (1) guessing whether an observed
+circuit is a hidden service circuit (already done by Kwon et al. from a
+guard node position); (2) guessing whether you are in a middle position,
+specifically the middle position next to the client-side guard; and (3)
+if both 1 and 2 hold, guessing the onion service website using a trained
+classifier. We would like to apply these classifiers to Tor traffic and
+use them to measure the popularity of the Facebook onion site front page.
+
+# Where we are
+
+We have already used our own client and middle to crawl the onion
+service space. Our client built circuits to a list of onions, and our
+client pinned middle relays under our control so that all circuits were
+built through our middles. The clients sent a special signal to our
+middles so that the middles could tag the circuits that were created by
+us (so that it only logged our circuits and not circuits of legitimate
+clients). Our middles then logged ground truth information about these
+circuits, as well as features that could be used for guessing the circuit
+type, position, and onion site being accessed. We used this data set to
+train classifiers and run analysis.
+
+# Where we want to go
+
+In our version of website fingerprinting, we guess the circuit type,
+position, and onion site. Since we are doing this from a middle node,
+even if all of those guesses work out, the adversary learns that someone
+with a specific guard went to a given onion site. This is not enough for
+deanonymization. Although there are several strategies that could leak
+information about the client once a middle is successful at fingerprinting
+(guard profiling, latency attacks to geolocate clients, legal attacks
+on guards), we would like to show a potentially interesting application
+of website fingerprinting beyond client deanonymization.
+
+If fingerprinting at the middle is successful, then it can be used to
+discover onion service popularity; we first identify the onion site, and
+then measure the frequency that each onion site is accessed. Because this
+measurement is done from the middle position, we will more quickly gain
+a representative sample of all circuits built in Tor (because new middles
+are chosen for each circuit with fewer biases than guards and exits). We
+would like to use PrivCount to do such a popularity measurement safely,
+following the methods and settings set out in the "Safely Measuring Tor"
+CCS paper by Jansen and Johnson. This is where we are requesting feedback.
+
+We would like to measure the following:
+1. The fraction of all circuits that we classify as hidden service circuits
+2. The fraction of hidden service circuits that we classify as accessing
+the Facebook onion front page
+
+We want to do this measurement safely, because it will involve measuring
+circuits of real users. We hope to be able to do this from the first
+client-side middle node (which will involve guessing the circuit,
+position, and the site) as well as from the rendezvous position (which
+will only involve guessing the site). The classifiers necessary to perform
+these guesses will be trained on our previously crawled onion data set
+and a dataset of circuit information that we generated synthetically
+in Shadow.
+
+During the measurement process, circuit and cell metadata will be used
+by the classifiers to make their guesses. Circuit metadata includes
+a description of the previous and next relay in the circuit, as well
+as the previous and next circuit ID and channel ID. Cell metadata
+includes whether the cell was sent or received and from which side of
+the circuit, the previous and next circuit ID and channel ID, the cell
+type and cell command type if known, and a timestamp relative to the
+start of the circuit.
+
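To make this concrete, here is a rough sketch of what one per-cell metadata record could look like. The field names are illustrative guesses based on the description above, not PrivCount's actual schema:

```python
from dataclasses import dataclass

@dataclass
class CellRecord:
    sent: bool          # True if the cell was sent, False if received
    client_side: bool   # which side of the circuit the cell was seen on
    prev_circ_id: int   # previous circuit ID
    next_circ_id: int   # next circuit ID
    prev_chan_id: int   # previous channel ID
    next_chan_id: int   # next channel ID
    cell_type: str      # cell type, if known
    cell_command: str   # cell command type, if known
    t_rel: float        # seconds since the start of the circuit
```

One such record would exist per observed cell, held only in RAM for the lifetime of its circuit.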
+The metadata will be sent in real time to PrivCount, where it will be
+stored in volatile memory (RAM); the longest time that PrivCount will
+store the data in RAM is the lifetime of the circuit. When the circuit
+closes, PrivCount will pass the metadata to the previously-trained
+classifier, which will make the guesses as appropriate. The following
+counters will be incremented in PrivCount according to the results of
+the guesses:
+
+1. Total number of circuits
+2. Total number of onion service circuits
+3. Number of onion service circuits accessing facebook onion frontpage
+4. Number of onion service circuits NOT accessing facebook onion frontpage
+
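With hypothetical counter values, the two fractions of interest described earlier would be computed from these four counters as follows (the numbers are made up, purely for illustration):

```python
# Hypothetical aggregate counter values, showing how the two fractions
# of interest derive from the four counters listed above.
total_circuits = 100_000   # counter 1
hs_circuits = 4_000        # counter 2
fb_frontpage = 300         # counter 3
non_fb_frontpage = 3_700   # counter 4 (300 + 3_700 == hs_circuits)

frac_hs = hs_circuits / total_circuits  # fraction classified as onion service circuits
frac_fb = fb_frontpage / hs_circuits    # fraction of those accessing the Facebook frontpage
print(frac_hs, frac_fb)  # 0.04 0.075
```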
+Once these counters are incremented, all metadata corresponding to the
+circuit and its cells is destroyed. The PrivCount counters are initialized
+to noisy values to ensure differential privacy is maintained (cf. "Safely
+Measuring Tor"), and are then blinded and distributed across several
+share keepers to provide secure aggregation. At the end of the process,
+we learn *only* the value of these noisy counts aggregated across all data
+collectors, and nothing else about the information that was used during
+the measurement process. Specifically, client usage of Tor during our
+measurement will be protected under differential privacy. (We currently
+plan to run at least 3 share keepers and more than 10 data collectors.)
+
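The noisy-initialization and blinding steps above can be sketched as follows. This is only an illustration of the additive-secret-sharing idea from "Safely Measuring Tor", not PrivCount's actual code; the modulus and noise values are placeholders:

```python
import random

MODULUS = 2**64  # blinded-share arithmetic is done modulo a large number

def split_counter(noisy_value, num_share_keepers):
    """Blind a noisy counter value by splitting it into random additive
    shares, one per share keeper; no single share reveals the value."""
    shares = [random.randrange(MODULUS) for _ in range(num_share_keepers - 1)]
    shares.append((noisy_value - sum(shares)) % MODULUS)
    return shares

def aggregate(all_shares):
    """Summing every share from every data collector recovers only the
    aggregate noisy total, never any per-relay count."""
    return sum(all_shares) % MODULUS

# Two data collectors each add zero-mean placeholder noise, then count
# 5 and 7 circuits respectively; only the noisy sum is recoverable.
noise_a, noise_b = 3, -3  # placeholders; real noise is drawn from a distribution
shares_a = split_counter(noise_a + 5, 3)
shares_b = split_counter(noise_b + 7, 3)
print(aggregate(shares_a + shares_b))  # 12: the placeholder noise cancels here
```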
+# Value
+
+This work has value to the community that we believe offsets the potential
+risks associated with the measurement. Understanding Facebook popularity
+and having raw numbers to report, while interesting in itself, also
+allows us to focus a popularity measurement on the positive use cases
+of Tor and onion services rather than the not-so-positive. We believe
+that showing how website fingerprinting can be applied to purposes
+other than client deanonymization is novel and interesting and may
+spur additional research that may ultimately help us better understand
+the real world risks associated with fingerprinting techniques (which
+may lead to better fingerprinting defenses). Finally, risk from middle
+nodes is often overlooked, and we think there is value in showing what
+is possible from the position with the fewest requirements.
+
diff --git a/htdocs/trsb/2017-03-response.txt b/htdocs/trsb/2017-03-response.txt
new file mode 100644
index 0000000..20a4478
--- /dev/null
+++ b/htdocs/trsb/2017-03-response.txt
@@ -0,0 +1,118 @@
+Date: Sun, 16 Jul 2017 01:00:40 -0400
+From: Roger Dingledine <arma@xxxxxxx>
+Subject: Re: [tor-research-safety] Request for feedback on measuring popularity of Facebook onion site front page
+
+Here are some thoughts that are hopefully useful. I encourage other safety
+board people to jump in if they have responses or other perspectives.
+
+A) Here's an attack that your published data could enable.
+
+Let's say there is another page somewhere in onion land that looks
+just like the Facebook frontpage, from your classifier's perspective.
+
+In that case you're going to be counting, and publishing, what you think
+are Facebook frontpage visits, but if somebody knows the ground truth
+for the Facebook visits (and at least Facebook does), then they can
+subtract out the ground truth and learn the popularity of that other page.
+
+Counterintuitively, the more "colliding" pages there are, or rather,
+the more colliding pages there are that are sufficiently popular, the
+less scary things get, since you're publishing popularity of "Facebook +
+all the others that look like Facebook", and if that second part of the
+number is a broad variety of pages, it's not so scary to publish it.
+
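The subtraction involved is simple arithmetic; with hypothetical numbers:

```python
# Hypothetical numbers, purely to make the subtraction concrete.
published_count = 10_000        # circuits the classifier labeled "Facebook frontpage"
facebook_ground_truth = 9_200   # Facebook's own count of onion frontpage visits
colliding_page_visits = published_count - facebook_ground_truth
print(colliding_page_visits)    # 800 visits attributable to look-alike pages
```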
+I guess you can look through your training traces to see if there are
+traces that look very similar, to get a handle on whether there are zero
+or some or many. But even if you find zero, (1) are your training traces
+just the front pages of other onion sites? In that case you'll be missing
+many internal pages, and maybe there is a colliding internal page. And
+(2) you don't have a comprehensive onion list (whatever that even means),
+so you can't assess closeness for the pages you don't know about. And
+(3) remember dynamic pages -- like a duckduckgo search for a particular
+sensitive word that you didn't think to search for. (I don't think that
+particular example will produce a collision with the Facebook frontpage,
+but maybe there's some similar example that would.)
+
+So, principle 1: publishing a sum of popularity of a small number of
+sites is inherently more revealing than publishing the sum of popularity
+of a broader set of sites, because the small number is more precise.
+
+And principle 1b: if an external party has a popularity count for a subset
+of your sum, they can get more precision than you originally intended.
+
+Unless the differential privacy that you talked about handles this case?
+I have been assuming it focuses on making it hard to reconstruct exactly
+how many hits a given relay saw, but maybe it does more magic than that?
+
+B) You mention picking Facebook in particular, and I assume you'll be
+naming them in your paper. Have you asked them if they're ok with this?
+
+Getting consent when possible would go a long way to making your approach
+as safe as it can be. I can imagine that Facebook would say it's ok,
+while I could imagine that a particular SecureDrop deployment might ask
+you to please not do it.
+
+In particular, two good contacts for Facebook would be Alec Muffett
+and <other Facebook security person anonymized for publication>.
+
+The service side is of course only half of the equation: in an ideal
+world it would be best to get consent from all the clients too. But since
+your experiment's approach aggregates all the clients, yet singles out
+the service, I think it's much more important to think about consent
+from the service side for this case.
+
+So, principle 2: the more you're going to single out and then name a
+particular entity, the more important it is for you to get consent from
+that entity before doing so.
+
+C) In fact, it would probably be good in your paper to specify *why*
+the safety board thought that doing this measurement was ok -- that
+you got consent and that's why you were comfortable naming them and
+publishing a measurement just for them.
+
+I think mentioning it in the paper is important because if this general
+"popularity measurement" attack works, I can totally imagine somebody
+wanting to do a follow-on paper measuring individual popularity for a big
+pile of other onion services, first because it would seem cool to do it
+in bulk (the exact opposite of the reason why you decided not to do it in
+bulk), and second because eventually people will realize that measuring
+popularity is a key stepping stone to building a Bayesian prior, which
+could make website fingerprinting attacks work better than the default
+"assume a uniform prior" that I guess a lot of them do now.
+
+So, principle 3: explicitly say in your paper where your
+lines-you-didn't-want-to-cross are, so readers can know which follow-on
+activities are things you wouldn't have wanted to do (and so future PCs
+reviewing future follow-on papers have something concrete to point to
+when they're expressing concern about methodology).
+
+D) I actually expect your Facebook popularity measurement to show
+that it's not very popular right now. That's because, at least last I
+checked, all of the automated apps and stuff use facebook.com as their
+destination. The Facebook folks have talked about putting the onion
+address by default in their various mobile apps if it detects that Orbot
+is running, but as far as I know they haven't rolled that out yet. So
+(1) doing a measurement now will allow you to do another measurement
+later once Facebook has made some changes, and you'll have a baseline for
+comparison; and (2) if you coordinate better with the Facebook people,
+you can learn the state and expected timing for their "onion address by
+default" roll-out -- or heck, you might learn other confounding factors
+that Facebook can explain for you.
+
+E) If you're planning to use PrivCount, does that mean you are running
+more than one relay to do this measurement? I think yes because "and
+more than 10 data collectors"?
+
+For the last person who asked us about running many relays for doing
+measurements, we suggested that they label their relays in the ContactInfo
+section, and put up a little page explaining what their research is and
+why it's useful (I think the text you sent us would do fine).
+
+Here is their page for an example: http://tor.ccs.neu.edu/
+
+I think that step would be wise here too.
+
+Hope those are helpful! Let me know if you have any questions.
+
+--Roger
+