Re: [tor-dev] Stem Descriptor Parsers
OK, bringing back to tor-dev...
On 7/5/12 1:44 PM, Damian Johnson wrote:
> Hi Norman.
>
> (Taking this off tor-dev@ for the moment until we get things
> straightened out...)
>
> Actually, this would have been an interesting discussion for the list.
> Feel free to add it back in.
>
>> The TorExport functionality seems pretty straightforward, but the
>> stem-interesting parts (new descriptor parsers) already seem to be on
>> Ravi's punch-list.
>
> My understanding is that the csv work is to add export (and maybe
> import) functionality to the Descriptor class. This would provide
> those capabilities to the current ServerDescriptor and
> ExtraInfoDescriptor classes in addition to anything we add in the
> future like consensus based entities. If it really isn't possible to
> do in the Descriptor then it would be an abstract method that's
> implemented by Descriptors as we go along.
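
If I'm reading that right, you're picturing something roughly like the
following on the base class (this is purely a sketch on my part; the
method and attribute names are invented, not anything that's in stem):

# Sketch only: EXPORT_FIELDS and export_csv() are invented names, not
# stem's actual API.
class Descriptor(object):
    # subclasses list the attributes they know how to export
    EXPORT_FIELDS = ()

    def export_csv(self, fields=None):
        # one csv row built from this instance's attributes (a real version
        # would use the csv module for proper quoting)
        fields = fields if fields is not None else self.EXPORT_FIELDS
        return ','.join(str(getattr(self, field, '')) for field in fields)


class ServerDescriptor(Descriptor):
    EXPORT_FIELDS = ('nickname', 'fingerprint', 'address', 'published')
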
OK, there's the first confusion on my part; I thought the export
functionality was to be something like utility scripts rather than built
into stem itself.

So is export intended to be an instance method of Descriptor, one that
just dumps a single csv line of the instance attributes (maybe subject
to some selection of those attributes)? Or a static method that takes a
collection?

It seems like it might be awkward to have to hack stem itself to add a
new export format (for example). Is this a concern?
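
To make that question concrete, the two shapes I can imagine are roughly
these (again, the names are mine, not anything in stem):

# Shape 1: an instance method, one csv row per descriptor, roughly as in
# the sketch above.
#
# Shape 2: a module-level (or static) function that takes a collection:
import csv

def export_csv(descriptors, fields, output):
    # write a header row, then one row per descriptor, to a file-like object
    writer = csv.writer(output)
    writer.writerow(fields)

    for desc in descriptors:
        writer.writerow([getattr(desc, field, '') for field in fields])

# Supporting another format (json, say) would then mean adding another such
# function to stem, unless the caller could pass in its own formatter.
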
Actually, this makes me wonder a bit about what exactly stem is. It
seems like it is:

* stem the Tor Control Protocol interface, and
* stem the (relay, network status, ...) descriptor utility library.

It seems that the former is dependent upon the latter (for stem to
provide a list of cached descriptor objects, it needs a descriptor
utility library that defines those objects), but not the reverse (the
utilities don't much care where the descriptors come from). This isn't
completely correct, since the descriptor utilities might provide APIs
for parsing some source of descriptor(s) (Tor Control message,
cached-consensus, metrics), but making the descriptor utility library
one module among many in stem makes it seem like the two are more
intertwined than they actually are. Of course, you've been thinking about
this a lot longer than have I. Do all the known use-cases need
both an interface to Tor Control and a descriptor utility library? I
guess I'm not quite sure what the design philosophy for stem is.
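
Concretely, I'm imagining the two halves being usable something like
this (I'm guessing at names from the docs and current branches, so
correct me if these are off):

# Sketch only -- module and method names are my reading of the docs and
# may not match the eventual API.

# (a) The descriptor utilities on their own: parse descriptors out of a
# cached file or a metrics archive, no control connection involved.
from stem.descriptor import parse_file

with open('/var/lib/tor/cached-descriptors') as descriptor_file:
    for desc in parse_file(descriptor_file):
        print(desc.fingerprint)

# (b) The control interface handing back the same descriptor objects.
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()

    for desc in controller.get_server_descriptors():
        print(desc.fingerprint)

If that's roughly right, the descriptor side stands on its own and the
control side just builds on top of it.
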
>> Onionoo is always a possibility, though we'd probably need a bit more
>> guidance on which part to work on (front end, back end, both?). But
>> regardless, it still seems like it depends on various parsers that
>> Ravi is working on.
>
> For an Onionoo project I would be the main mentor for stem based
> parts, and Karsten would mentor the work on Onionoo itself since he
> wrote the java version (though I'll still do the code reviews). First
> step would be to start a thread with both of us (and tor-dev@) to
> figure out milestones.
>
> FYI: at the Tor dev meeting Sathyanarayanan (gsathya.ceg@xxxxxxxxx)
> also expressed interest in taking on this project, though with his
> summer internship I'm not sure how much time he'll have to help.

Do I understand Onionoo correctly to be basically a small webservice
that, based on the HTTP request parameters, returns a JSON-formatted
description of data read from a file, along with a program that
presumably runs with some frequency to create that file? It seems that
at least porting the webservice side to a Django webapp might be a
reasonable project for the rest of our summer.
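
If that's the right picture, the webservice half looks pretty small.
Something like the following is what I'd imagine for a Django version
(the file path, document layout, and the 'running' parameter are just
guesses on my part, not Onionoo's actual interface):

# views.py -- minimal sketch; path, document layout, and parameter names
# are guesses, not Onionoo's actual interface.
import json

from django.http import HttpResponse

SUMMARY_FILE = '/srv/onionoo/summary.json'  # written by the periodic back-end job

def summary(request):
    with open(SUMMARY_FILE) as summary_file:
        document = json.load(summary_file)

    # filter on a request parameter, e.g. GET /summary?running=true
    running_arg = request.GET.get('running')

    if running_arg is not None:
        wanted = running_arg.lower() == 'true'
        document['relays'] = [relay for relay in document.get('relays', [])
                              if relay.get('running') == wanted]

    return HttpResponse(json.dumps(document), content_type='application/json')
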
However, the little bit of software development that I've learned does
make me want to ask: why a Python port of this component?
- Norman
--
Norman Danner - ndanner@xxxxxxxxxxxx - http://ndanner.web.wesleyan.edu
Department of Mathematics and Computer Science - Wesleyan University
_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev