Re: Encrypted Web Pages?



Martin Fick wrote on 18.12.2007 01:05:
> --- "Vlad \"SATtva\" Miller" <sattva@xxxxxxxxx> wrote:
> 
>> Have you looked at FireGPG Firefox extension?
>> http://firegpg.tuxfamily.org/
> 
> --- "Alexander W. Janssen"
> <alexander.janssen@xxxxxxxxx> wrote:
> 
>> Why not simply use the Firegpg-extension for
>> Firefox?
> 
> I had not seen this, thank you, this would 
> certainly be a valid fallback use case also.
>  
>> Obviously that's only working perfectly with
>> text-files, but you could
>> possibly try to make up your own XPI for Firefox.
> 
> Yes, I was hoping for a simple HTMLified 
> solution.
> 
> Seems like perhaps instead of implementing 
> this at the browser level, this could be 
> implemented at the proxy level.  Simply 
> send requests to a personal local proxy 
> which can intercept encrypted pages and 
> decrypt the ones it has the private keys 
> to!  This would be more versatile, usable 
> by more browsers, less vulnerable to 
> JS/other dynamic html attacks...

This approach appeals to me much more. However, such a proxy won't cope
with HTTPS traffic unless it can terminate the TLS connection itself (in
a MITM-ish way).
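The proxy idea above can be sketched minimally. This is my own
illustration, not an existing implementation: it only shows the
page-rewriting core (finding ASCII-armored PGP blocks in a fetched page
and piping them through a local gpg binary), and the helper names are
made up. A real proxy would additionally have to speak HTTP itself.

```python
# Sketch of the page-rewriting core of a decrypting local proxy.
# Assumes the gpg binary is on PATH and the proxy user's keyring
# holds the needed private key. Helper names are illustrative.
import re
import subprocess

# Matches one ASCII-armored OpenPGP message embedded in a page body.
ARMOR_RE = re.compile(
    r"-----BEGIN PGP MESSAGE-----.*?-----END PGP MESSAGE-----",
    re.DOTALL,
)

def find_armored_blocks(body: str) -> list[str]:
    """Return every armored PGP message found in the page body."""
    return ARMOR_RE.findall(body)

def decrypt_block(armored: str) -> str:
    """Pipe one armored block through the local gpg binary."""
    result = subprocess.run(
        ["gpg", "--batch", "--decrypt"],
        input=armored.encode(),
        capture_output=True,
        check=True,
    )
    return result.stdout.decode()

def rewrite_page(body: str) -> str:
    """Replace each encrypted block in the page with its plaintext."""
    return ARMOR_RE.sub(lambda m: decrypt_block(m.group(0)), body)
```

Keeping the decryption in a separate local process (rather than in
browser-side JavaScript) is what gives the scheme its resistance to
dynamic-HTML attacks mentioned above.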

> Anyone want to implement it? ;)  It could
> use gpg.  Can anybody suggest a good 
> simple well written proxy which would be 
> easy to hack to add this to?

One existing option worth considering is GPGrelay. It's intended for
proxying mail traffic, but I suppose it's not impossible to modify it to
handle HTTP traffic (I'm less sure about HTTPS).

But I see another problem with your proposal -- a problem of encryption
logistics, if you wish. Suppose we already have such a magic wand for
transparent client-side decryption of web pages (whatever it might be).
Let's say a sender has published a website encrypted to some set of
public keys (excluding his own key, for the before-mentioned reasons).
What if he suddenly becomes aware that one of the recipient keys has
been compromised? Now the sender needs to decrypt the whole site and
re-encrypt it to a new set of public keys, excluding the compromised
one, so that an attacker cannot lay his hands on the sensitive data (if
it's not too late already). Several problems arise:

  1. How could the sender decrypt the website if he doesn't have the
     appropriate private key?
  2. How could the data be re-encrypted if the sender never even
     uploaded it (according to an earlier proposal)? He didn't have the
     plaintext in the first place.
  3. How time-consuming would it be to re-encrypt a large website with
     many pages, and how much could go wrong along the way, leaking the
     plaintext data?

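To make the re-keying step concrete, here is a minimal sketch (my own
illustration, with made-up key IDs and file names) of building the gpg
invocation that re-encrypts a page to the surviving recipients only --
assuming, per problem 1, that the sender can obtain the plaintext at
all:

```python
# Sketch only: constructs the gpg re-encryption command for one page,
# it does not run gpg. Key IDs and file names are hypothetical.
def reencrypt_command(src: str, dst: str,
                      recipients: set[str],
                      compromised: set[str]) -> list[str]:
    """Build a gpg invocation encrypting `src` to every recipient key
    except the compromised ones. Assumes `src` is already the decrypted
    plaintext, which is exactly the hard part described above."""
    survivors = sorted(recipients - compromised)
    cmd = ["gpg", "--batch", "--output", dst]
    for key_id in survivors:
        cmd += ["--recipient", key_id]
    cmd += ["--encrypt", src]
    return cmd
```

Multiplying this by every page of a large site shows how much plaintext
handling (problem 3) the re-keying operation would involve.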
If my life were at stake, I wouldn't trust it to that sort of thing.

And finally, there is a gap in the threat model. If we treat the
webserver as untrusted (or even malicious), then we can't dismiss the
trivial option of a DoS attack: the server (or hosting provider) may
simply erase the contents of the website or block access for legitimate
users.

-- 
SATtva | security & privacy consulting
www.vladmiller.info | www.pgpru.com