
Re: Proposal of a new hidden wiki



"you'd have to have some method to prevent an attacker from simply
launching a massive amount of sites under this key to destroy it. 1000
fake sites would cause probably cause the load balancer to refer to
the fake sites the majority of the time, effectively take it off line."
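A rough back-of-the-envelope sketch of why that attack works, assuming
the load balancer just picks uniformly at random among registered sites
(the numbers are purely illustrative):

    # Odds that a uniform-random load balancer hands out a fake site,
    # assuming 3 real mirrors and 1000 attacker-registered fakes.
    # Illustrative numbers only, not from any real deployment.
    real_sites = 3
    fake_sites = 1000

    p_fake = fake_sites / (real_sites + fake_sites)
    print("chance of landing on a fake site: %.1f%%" % (100 * p_fake))
    # -> chance of landing on a fake site: 99.7%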

The approach we were describing, giving trusted servers the private key
to build a redundant wiki system, wouldn't have that problem unless one
of the trusted servers gave away the key or was taken over by an
adversary (police or what have you).

I actually thought about this a lot before the thread started. A
standard installer CD for a customized Linux distro could be made that,
when installed, would ask for your hidden service private key. It would
then set up a small partition on a local, encrypted drive for Apache,
SQL, and whatever else you would need, which would (in the best case)
run off an external hard drive. On top of that you would have a network
RAID array running over Tor, so that when a wiki edit was made it would
be written to that array and every node would be updated almost
instantly. Does anybody see any potential problems there?
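To make the edit-propagation part concrete, here is a rough, untested
sketch of how a node might forward an edit to its mirrors over Tor's
local SOCKS proxy instead of a true network RAID array. The .onion
addresses and the /push endpoint are made up for illustration:

    # Push a wiki edit to mirror nodes through Tor's SOCKS proxy on
    # 127.0.0.1:9050. Needs the "requests" library with SOCKS support
    # (pip install requests[socks]). Addresses and endpoint are
    # placeholders, not real services.
    import requests

    TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
                 "https": "socks5h://127.0.0.1:9050"}

    MIRRORS = ["http://mirror1exampleonly.onion",
               "http://mirror2exampleonly.onion"]

    def push_edit(page, new_text):
        for mirror in MIRRORS:
            try:
                requests.post(mirror + "/push",
                              data={"page": page, "text": new_text},
                              proxies=TOR_PROXY, timeout=120)
            except requests.RequestException:
                # Mirror unreachable; a real setup would queue and retry.
                pass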
Comrade Ringo Kamens

On 8/9/07, Josh McFarlane <josh.mcfarlane@xxxxxxxxx> wrote:
> On 8/8/07, Ringo Kamens <2600denver@xxxxxxxxx> wrote:
> > I appreciate the concern, but I think that while freenet is a viable
> > option and certainly there should be a backup on it, tor users need a
> > central link cache (so they can use the tor hidden network). I think
> > that tor is the right network for an unbreakable hidden website,
> > especially if we use redundant services (through RAID-over-network?).
> > The reason we can't do this on the "real internet" is that it would
> > get censored. Really quickly. Many countries have laws banning such
> > activities or linking to certain sites, like cryptography sites, which
> > is why tor links must be listed on a hidden wiki.
>
> The problem is setting up a system that is easy to replicate, but at
> the same time immune to attacks.
>
> If we attempted to implement an automated distributed service
> function, where you could have multiple sites under the same key,
> you'd have to have some method to prevent an attacker from simply
> launching a massive number of sites under this key to destroy it. 1000
> fake sites would probably cause the load balancer to refer to the fake
> sites the majority of the time, effectively taking it offline.
>
> Perhaps the best way to do a distributed service like this would be to
> give it a list of sites that are allowed to substitute for it.
>
> So, if three people wanted to host the Wiki, they would first have to
> set up a method to keep all 3 wikis current.
>
> Then, all three could launch their wikis. After the service was
> operational, they could update their service definition with 'links'
> to the other two services. Any time someone requested one of the
> services, it would randomly choose among the listed services.
>
> This would require the least amount of change but allow a basic
> multiple-backend service to go up without allowing it to be
> compromised by an attacker.
>
> One could argue that you could also put the replication into Tor, but
> I think that may be too complex to integrate directly.
>
> Any ideas / comments?
>
> --
> Thanks,
> Josh McFarlane
> http://www.joshmcfarlane.com
>
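P.S. A minimal sketch of the substitution-list scheme Josh describes
above. The service-definition format and the addresses here are
invented purely to illustrate the selection step:

    # Each operator publishes only the peers it trusts; clients (or a
    # directory) then pick uniformly at random among the listed
    # services. Because only operator-listed peers can appear, an
    # attacker cannot flood the pool with fakes the way they could
    # with an open registry. Format and addresses are invented.
    import random

    service_definition = {
        "primary": "wikiexampleonly1.onion",
        "substitutes": ["wikiexampleonly2.onion",
                        "wikiexampleonly3.onion"],
    }

    def pick_service(definition):
        candidates = [definition["primary"]] + definition["substitutes"]
        return random.choice(candidates)

    print(pick_service(service_definition))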