Re: [tor-bugs] #4439 [Metrics Utilities]: Develop a Java/Python API that wraps relay descriptor sources and provides unified access to them
#4439: Develop a Java/Python API that wraps relay descriptor sources and provides
unified access to them
-------------------------------+--------------------------------------------
 Reporter:  karsten            |          Owner:  karsten
     Type:  task               |         Status:  new
 Priority:  normal             |      Milestone:
Component:  Metrics Utilities  |        Version:
 Keywords:                     |         Parent:
   Points:                     |   Actualpoints:
-------------------------------+--------------------------------------------
Comment(by atagar):
> Can you be more precise what parts you misunderstood?
Since this was spawned by the alarming infrastructure ticket, I thought from
the initial comment that we were talking about an RPC for services (like the
alarms) to request information from the metrics hosts, and that this was an
API for that. So disregard that; it's obviously unrelated to this ticket. :)
> I don't think we're talking about completely distinct functionality, are
> we?
Hmmm... now I think we've just had another misunderstanding. I'm saying
that it *is* related functionality and I'd be happy to hack on it as part
of stem.
My plan was to have Relay objects that are a composite of three things...
 - fingerprint (constructor arg, always there)
 - consensus and descriptor data (lazily loaded; throws an exception or
   returns a default value if they can't be loaded)
{{{
Relay
|- __init__(fingerprint, raise_exc = False, default = None)
|- load_consensus() - eager fetch for consensus data, returning a boolean
|    for whether it succeeds or not
|- load_descriptor() - same for the descriptor
|- fingerprint()
|- exit_policy()
|- contact_info()
+- ... etc, getters for the union of the descriptor and consensus

created() => unix timestamp for when the currently accessible consensus
  was created
valid_until() => unix timestamp for when this consensus expires
get_relays() => list of all Relay instances
get_relay(fingerprint) => provides the Relay instance for the given
  fingerprint
get_relay_dest(ip_address, port) => provides the Relay at the given ip/port
... probably a few other things I haven't thought of yet...
}}}
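To make the lazy loading described above concrete, here's a minimal runnable
sketch. The Relay name, constructor arguments, and getters follow the outline,
but the `_fetch_descriptor` helper and the dict-based descriptor
representation are invented for illustration; they aren't stem's actual API.
{{{
def _fetch_descriptor(fingerprint):
  # hypothetical stand-in for a real data source; a concrete fetcher (cached
  # files, control port, or directory mirror) would replace this
  raise IOError("no descriptor source configured")

class Relay(object):
  def __init__(self, fingerprint, raise_exc = False, default = None):
    self._fingerprint = fingerprint
    self._raise_exc = raise_exc
    self._default = default
    self._descriptor = None  # lazily loaded on first getter call

  def fingerprint(self):
    return self._fingerprint

  def load_descriptor(self):
    """Eager fetch for descriptor data, returning a boolean for whether it
    succeeds."""

    try:
      self._descriptor = _fetch_descriptor(self._fingerprint)
      return True
    except IOError:
      return False

  def exit_policy(self):
    return self._descriptor_field("exit_policy")

  def contact_info(self):
    return self._descriptor_field("contact")

  def _descriptor_field(self, key):
    # lazy load: only hit the data source when a getter actually needs it
    if self._descriptor is None and not self.load_descriptor():
      if self._raise_exc:
        raise IOError("descriptor for %s is unavailable" % self._fingerprint)

      return self._default

    return self._descriptor.get(key, self._default)

# with raise_exc = False (the default) getters degrade to the default value
print(Relay("A" * 40).contact_info())  # => None
}}}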
It sounds like we then simply need a factory for the data source that has
those methods. Something like...
{{{
import os

class ConsensusFetcher:
  """
  Abstract parent for factories that retrieve consensus data.
  """

  def created(self): pass
  def valid_until(self): pass
  def get_relays(self): pass
  def get_relay(self, fingerprint): pass
  def get_relay_dest(self, ip_address, port): pass

class CacheFetcher(ConsensusFetcher):
  def __init__(self, path):
    """
    Retrieves consensus data from the local filesystem, via cached consensus
    files. This raises an IOError if we're unable to read the given data
    directory.
    """

    if not os.path.exists(path): raise IOError("%s doesn't exist" % path)
    # etc for implementation details

class ControlFetcher(ConsensusFetcher):
  def __init__(self, control_connection):
    # ... similar for the control connection. This is important because there
    # could be instances where we don't have read access to tor's data
    # directory, but can access the control socket.
    pass

class DirectoryServerFetcher(ConsensusFetcher):
  def __init__(self, address, port):
    # ... similar for fetching directly from a directory authority or mirror.
    pass

  # This one would have a few more options compared to the others...

  def is_current(self):
    # True if we're working from the most recent consensus, False otherwise.
    pass

  def fetch(self):
    # Retrieves the new consensus, raising an IOError if unable to do so.
    pass
}}}
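Purely as an illustration of how calling code could treat those factories
interchangeably (not a proposed implementation), here's a toy in-memory
fetcher that satisfies the same interface; DictFetcher, its dict of entries,
and the one-hour lifetime are all made up for the example.
{{{
import time

class ConsensusFetcher(object):
  """Abstract parent for factories that retrieve consensus data."""

  def created(self): raise NotImplementedError
  def valid_until(self): raise NotImplementedError
  def get_relays(self): raise NotImplementedError
  def get_relay(self, fingerprint): raise NotImplementedError
  def get_relay_dest(self, ip_address, port): raise NotImplementedError

class DictFetcher(ConsensusFetcher):
  """Toy fetcher backed by an in-memory dict, standing in for the cache,
  control port, or directory server implementations."""

  def __init__(self, entries):
    self._entries = entries       # fingerprint -> consensus entry
    self._created = time.time()

  def created(self):
    return self._created

  def valid_until(self):
    return self._created + 3600   # arbitrary lifetime for the toy example

  def get_relays(self):
    return list(self._entries.values())

  def get_relay(self, fingerprint):
    try:
      return self._entries[fingerprint]
    except KeyError:
      raise IOError("no relay with fingerprint %s" % fingerprint)

# calling code only depends on the ConsensusFetcher interface, so swapping in
# a CacheFetcher, ControlFetcher, or DirectoryServerFetcher later is trivial
fetcher = DictFetcher({"A" * 40: {"nickname": "ExampleRelay"}})
print(fetcher.get_relay("A" * 40)["nickname"])  # => ExampleRelay
}}}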
Does that jibe with what you were thinking of?
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/4439#comment:5>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online