
[freehaven-dev] network reliability overview from 1992

I found a 101-page survey of "network reliability modelling" from 1992.

http://www.isr.umd.edu/TechReports/ISR/1992/TR_92-74/TR_92-74.phtml

It starts out by considering "stochastic binary processes." These are
processes in which every component has a random, independent chance of
failure; when more than a threshold of components fail, the system goes
down. This seems like the model we want for our files (each share is a
node in a graph, and once more than a threshold of shares fail, the file
is lost). It's not yet clear to me whether this fits with what I had
considered earlier.
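
To make this concrete, here is a rough Python sketch of the survival
probability under that model, assuming each share fails independently
with probability p and the file can be rebuilt from any k of the n
shares (the k-of-n threshold and the numbers at the bottom are just
placeholders, not anything we've settled on):

    import math

    def survival_probability(n, k, p):
        # Probability that at least k of n shares survive when each
        # share fails independently with probability p.
        q = 1.0 - p  # chance that a single share survives
        return sum(math.comb(n, j) * q**j * p**(n - j)
                   for j in range(k, n + 1))

    # e.g. 10 shares, any 3 reconstruct, each fails with probability 0.2
    print(survival_probability(10, 3, 0.2))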

Then they consider the question "If all nodes have the same
probability of failure p, what is the overall probability that enough
nodes survive for the system to operate?" It turns out that answering
this is #P-complete for general graphs, i.e. really damn hard, and it
doesn't become easier if the probabilities vary.
Fortunately, we aren't dealing with general graphs; plus maybe we don't
have to deal with this exact problem.
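
On the other hand, even when exact reliability is intractable, we can
always estimate it by sampling. Here is a rough Monte Carlo sketch in
Python; the per-node failure probabilities and the "system operates"
predicate below are made up for illustration, not our actual model:

    import random

    def estimate_reliability(fail_probs, system_operates, trials=100000):
        # Monte Carlo estimate of the probability the system operates.
        # fail_probs: per-component failure probabilities.
        # system_operates: predicate on a list of booleans
        #   (True = component up) saying whether the system works.
        up = 0
        for _ in range(trials):
            state = [random.random() >= p for p in fail_probs]
            if system_operates(state):
                up += 1
        return up / trials

    # e.g. 10 nodes, each failing with probability 0.2,
    # system operates if at least 3 nodes are up:
    print(estimate_reliability([0.2] * 10, lambda s: sum(s) >= 3))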

In addition, Venkatesan was kind enough to e-mail me a draft of his survey
paper. I've placed it on my web page at 

http://www.hcs.harvard.edu/~dmolnar/survey.ps 

I think two weeks ago someone mentioned that they knew a
probabilist-in-residence in their dorm? One of the things I'd like to do
today, if we can, is talk a little bit about this model, and in
particular figure out just how much leeway we have with the various
assumptions. Then we can start trying to make up well-formed questions.

Thanks, 
-David