[tor-bugs] #12002 [Ooni]: Initial M-Lab comments on threat model
#12002: Initial M-Lab comments on threat model
-------------------------+-------------------------
Reporter: cypherpunks | Owner: hellais
Type: defect | Status: new
Priority: normal | Milestone:
Component: Ooni | Version:
Keywords: | Actual Points:
Parent ID: | Points:
-------------------------+-------------------------
This issue was automatically migrated from github issue
https://github.com/TheTorProject/ooni-probe/issues/150.
Here are initial comments on the threat model. These are minor, and many
are clarifying questions that most probably reflect a misunderstanding on
my end.
I'm happy to break these out into individual issues, if that would be more
useful. For now, this seemed like an easy way to review in total, and
eliminate those that don't deserve issue pride of place.
***Page: Roles***
*Analyst
An analyst may or may not publish the data, as I understand it. I'm not
sure this impacts the taxonomy, but accessing and deducing meaning from
the data is the core function of an analyst -- with publication being a
very likely outcome.
Would an analyst also rely on access to the source code and data
descriptions, such that they can vet the methodology?
**
*Bystander
Bystander could also encompass whatever entity is implicated in the
analysis of collected data, e.g. an ISP that is throttling service, or a
government that is censoring politically sensitive content. I think that
this category of bystander (or another title) is important in mapping the
weight of different threats. E.g. if data is falsified and it implicates a
powerful adversary, this could be more problematic in terms of an
existential threat than false data that is nonsense, or that implicates an
entity with little reason to care/little leverage. This, then, could be
something that is included in the threat impact table -- any false data
that is published is, potentially, a threat to a bystander. Whether or not
it makes sense to stretch things this far is another matter...
In M-Lab's case, bystanders also include the other tools on the platform,
and any entity that relies on those tools and data (e.g. the FCC, and a
number of other governments, rely on M-Lab in this way).
**
*Core Developer
Would this also be the role responsible for documenting the data format
such that analysts can make responsible use of it? I view this as separate
from documenting the Ooni design, although there may be a good reason to
conflate the two.
Would the core developer also review and accept/reject net-tests into the
core release of ooni-probe? I'm not seeing this process mapped anywhere.
Although it's a future feature, I think it could have implications on
threat modeling now.
**
*Net-test Developer
It seems like they'd also rely on the Core Developers to review and
integrate their tests into the core ooni-probe release. This isn't
necessary in all cases, but assuming that's a common-ish goal...
I imagine they'd also share responsibility (or, take sole responsibility?)
for documenting the data/data format (?)
**
*M-LabNS Operator
Would also rely on the Core Developer to integrate M-LabNS into the client
build, such that it correctly queries M-LabNS prior to choosing an OONIB
backend.
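For illustration, a minimal sketch of the client-side lookup this implies,
in Python. The "ooni" tool slug and the exact response fields are my
assumptions, modeled on how M-LabNS answers for other registered tools:

    # Hypothetical sketch: ask M-LabNS for the nearest OONIB backend.
    # Assumes an "ooni" tool slug is registered with mlab-ns, and that
    # the JSON answer carries an "fqdn" field, as it does for other tools.
    import json
    import urllib.request

    MLABNS_BASE = "https://mlab-ns.appspot.com"

    def lookup_oonib_backend(tool="ooni"):
        """Return the FQDN of the backend M-LabNS recommends for `tool`."""
        with urllib.request.urlopen("%s/%s" % (MLABNS_BASE, tool)) as resp:
            answer = json.load(resp)
        return answer["fqdn"]

This would run as the client's first step, before any report is submitted
to the chosen OONIB collector.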
**
*Publisher
While the publisher does assume liability, this is not sole liability. The
core developers and the net-test developers also assume liability, as
does a rogue probe operator intent on injecting bad data (etc.) (whether
or not they can be identified is another question). In this context, we're
laying out less a legally binding definition of "liability" and more a map
that can identify nodes of liability, and ensure that any risk is
documented.
The publisher also shouldn't be the only role taxed with vetting the test
decks and specifications. As above, since there are multiple nodes of
liability, this should be part of the core developer role, and part of the
net-test developer role. (Or, to put it another way, the vetting is only as
good as the documentation.) (Caveat: I may be misunderstanding the
definition of "vetting" here.)
**
*Reader
Potentially also relies on access to source code (core developers
documentation), and data (publisher), such that the assertions being made
can be verified (similar to an Analyst)? This is an M-Lab meme, but one
which makes sense beyond principle, in that the more contentions the
claims made and the bigger the powers impugned, the more important it is
that those witnessing them to confirm their veracity.
**
***Page: Use Cases***
*Initial release use cases: User Features
Although it's certainly our aim, I am hesitant to confine our assumption
about a probe operator to someone well-trained. There is already buzz
around Ooni and its potential, and while we don't want to wait to create
something shiny and perfect, it would be good to scope this use case with
an understanding that there may be some less-trained, less-knowledgeable
people also attempting to run Ooni. (This includes "less knowledgeable
about their personal risk".)
Do we have an understanding of where the probe operator would access
ooni-probe to install it? Might this also be a role? Or, is this covered in
"invoking ooni-probe with an M-Lab-specific configuration"? The process
here is a bit obscure to me.
**
*M-Lab deployment and management
@stephen-soltesz can add detail here as well.
Ooni is also responsible for data formatting, such that submitted data
can be processed into long-term storage by the M-Lab pipeline.
Placeholder, pending understanding of the collector policy implementation.
It's not clear to me how this is scoped, and the hope would be that a
given collector can choose only to accept data that fits X specification
(e.g. whitelisting the current Alexa top X00 URLs), and not rely simply on
affirmation from the client that a given data input comes from a given
test deck.
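To make that hope concrete, a rough sketch of such a policy check in
Python. The "input" field name and the whitelist file are assumptions on
my part, not the actual collector implementation:

    # Hypothetical collector-side policy check: validate each submitted
    # entry against the collector's own copy of the allowed URL list,
    # rather than trusting the client's claim about which deck it ran.
    def load_whitelist(path):
        # e.g. a file holding the current Alexa top X00 URLs, one per line
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def accept_entry(entry, whitelist):
        """Accept a report entry only if its input URL is whitelisted."""
        return entry.get("input") in whitelist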
**
*Record quality and usefulness
In the case of M-Lab, the core developers are also responsible for
ensuring, insofar as possible, that the policies and tests deployed derive
only from active, client-initiated tests. (This was a part of the
application process, so many moons ago.) Ensuring this on an ongoing basis
will require some process element, but it will be an important interaction
between M-Lab (and any publisher) and the core developers.
**
*Future use cases
The "historical data" item will also imply metadata tagging on the part of
a publisher (ideally).
**
***Page: Threats***
*Overall comment on a pretty impeccable taxonomy -- In the case of M-Lab
operating in the role of OONIB operator, collector, publisher, any
malicious attack that harms the operation of the infrastructure, or
overall data collection, impacts bystanders, insofar as there are other
tools running on the infrastructure, and other sources of data that must
be collected for publication. I will leave this comment general to allow
LA to fine-tune, or disregard. Happy to help elaborate if helpful.
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/12002>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online