Is there a project to collect, index and archive all the relevant
papers
from all the various internet sites, homepages, anonbib, etc... into
one central, easily mirrored and referenced repository? git would
seem more useful for this than the various disparate http resources
with no common design. If attribution to the original site is needed,
that could be recorded in the commit message or in a per-paper metadata file.
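As a rough sketch of what such a per-paper metadata file might look like
(the file layout, field names and ".meta.json" convention here are just
assumptions for illustration, not any existing scheme):

    # Minimal sketch, not an existing tool: field names, file layout and the
    # ".meta.json" convention are assumptions for illustration only.
    import json
    from pathlib import Path

    def write_metadata(paper_path: Path, title: str, authors: list, source_url: str) -> Path:
        """Record where an archived paper originally came from, next to the file."""
        meta = {
            "title": title,
            "authors": authors,
            "source_url": source_url,   # the original site, kept for attribution
            "local_file": paper_path.name,
        }
        meta_path = paper_path.parent / (paper_path.name + ".meta.json")
        meta_path.write_text(json.dumps(meta, indent=2))
        return meta_path

    if __name__ == "__main__":
        Path("papers").mkdir(exist_ok=True)
        # hypothetical example entry; a real record would be filled in by hand
        write_metadata(Path("papers/example-paper.pdf"),
                       "Example Paper Title",
                       ["A. Author", "B. Author"],
                       "https://www.freehaven.net/anonbib/")

Both the paper and its metadata file would then be committed together.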
This model could be extended to multimedia versions of papers via
rsync, with the index kept in git. The index itself could of course
be stored in git as HTML, so a browser can be pointed at it locally,
or it could be served remotely over gitweb as a possible internet frontend.
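A sketch of how that HTML index could be regenerated from the metadata
files above (same assumptions as before; paths and markup are arbitrary):

    # Sketch only: walks papers/*.meta.json and emits a plain index.html that
    # a browser can open straight from a local checkout, or that gitweb can serve.
    import json
    from pathlib import Path

    def build_index(papers_dir: Path = Path("papers"), out: Path = Path("index.html")) -> None:
        rows = []
        for meta_path in sorted(papers_dir.glob("*.meta.json")):
            meta = json.loads(meta_path.read_text())
            rows.append('<li><a href="%s/%s">%s</a> - %s (<a href="%s">source</a>)</li>'
                        % (papers_dir.name, meta["local_file"], meta["title"],
                           ", ".join(meta["authors"]), meta["source_url"]))
        out.write_text("<html><body><h1>Paper archive</h1><ul>\n"
                       + "\n".join(rows) + "\n</ul></body></html>\n")

    if __name__ == "__main__":
        build_index()   # regenerate, then commit index.html alongside the metadata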
There may be volunteers on tor-talk if this is forwarded there.