
Re: Uptime Sanity Checking

On 3/8/07, Nick Mathewson <nickm@xxxxxxxxxxxxx> wrote:
> I think a fix_able_ cap probably gets us most of the benefit: if we
> change the cap, only the directory servers need to change their code
> or configuration.

Seems reasonable; the nature of the network is going to vary (perhaps significantly) with size and age...

As for where a particular tunable cap might fall, here is the current distribution of reported uptimes (in seconds):
2 nodes with uptime over 20 million
14 over 10 million
47 over 5 million
131 over 2 million
215 over 1 million
239 over 500k
456 over 200k
545 over 100k
647 over 50k
702 over 20k
753 over 10k
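To make the effect of a tunable cap concrete, here is a minimal sketch (not Tor's actual directory-server code; the name `UPTIME_CAP` and the value 2,000,000 seconds are illustrative assumptions) of clamping reported uptimes before ranking:

```python
# Illustrative only: UPTIME_CAP is a hypothetical directory-server
# configuration value, in seconds. From the distribution above, a cap
# of 2 million would flatten the top ~131 nodes to the same value.
UPTIME_CAP = 2_000_000

def capped_uptime(reported_uptime):
    """Clamp a router's self-reported uptime (seconds) to the cap."""
    return min(reported_uptime, UPTIME_CAP)

uptimes = [25_000_000, 1_500_000, 80_000]
print([capped_uptime(u) for u in uptimes])  # [2000000, 1500000, 80000]
```

Because only the clamp lives on the directory servers, changing the cap later requires no relay-side code changes, which is the benefit noted above.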

> Really, though, this is a band-aid, and we don't want to make it too
> sophisticated.  Remember that 'uptime' is a bad proxy for the property
> we want the 'Stable' flag to measure.  Mean time between failures
> would approximate stability better, I think.

Agreed. Previously long-lived instances are overly penalized for a restart.
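The difference is easy to see in a sketch. Assuming (purely for illustration; this is not Tor's actual Stable computation) that MTBF is the mean length of a node's completed runs:

```python
# Hypothetical sketch: MTBF as the mean of observed run lengths
# (seconds between restarts). Not Tor's actual algorithm.
def mtbf(run_lengths):
    """Mean completed-run length; 0.0 if no history is known."""
    if not run_lengths:
        return 0.0
    return sum(run_lengths) / len(run_lengths)

# A node that ran ~20M seconds, restarted, and has been up 100k since:
history = [20_000_000, 100_000]
print(mtbf(history))        # 10050000.0 -- long history still counts
print(min(history[-1], 1))  # current-uptime ranking sees only the 100k
```

Under raw current uptime the restart resets the node to the bottom of the ranking; under MTBF its long history still dominates, which is the "overly penalized" point above.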

Are there scenarios where a restart indicates possible compromise, making
aversion useful?  For instance: a server seized or cracked, keys copied,
and a rogue node comes up in its place?

(That is, could MTBF open up attacks that uptime measurement avoids?
Picture an email in the morning: "remove my node from the directory,
it's been compromised.")