
Re: New standard for privacy control. (Was: Stripping code with Privoxy)

> Here's a better idea.
> Why do we make the assumption that a browser can trust everything it is
> given?
> That's a serious question. Why do browsers not have an external
> verification plug in?
> Lets say we wanted to design such a browser extension.

You haven't specified the threat(s) you are trying to protect against. You should never try to build a solution without a clear description of the problem.

To prevent other people from tampering with data on a remote connection there is SSL. To prevent MITM attacks there is SSL. If you do not trust the server, and want to be sure that a document is from a certain person, there is GPG, S/MIME, etc.

Ok. The threat: HTML was not designed with security and privacy in mind. Blindly accepting and following HTML code causes a wide range of problems, from revealing your identity to allowing arbitrary people to track everything you do. Bad form design can submit your answers as "?tag=value&tag2=value" appended to the URL, which can expose SSNs and other sensitive information in the URL of the request, visible to packet sniffers.
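To make the form-submission leak concrete, here's a small sketch (hypothetical URL and field names) of how a GET-style form puts every field value into the request URL itself:

```python
# Sketch: how a GET form submission leaks field values into the URL.
# The URL and field names below are made up for illustration; the
# point is that the full request line, query string included, is
# visible to any packet sniffer, proxy, or server log on the path.
from urllib.parse import urlencode

fields = {"name": "Alice", "ssn": "123-45-6789"}
url = "http://example.com/apply?" + urlencode(fields)
print(url)
# http://example.com/apply?name=Alice&ssn=123-45-6789
```

A POST body at least keeps the values out of the URL and out of most logs, though without SSL they are still readable on the wire.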

Remember: Java runs code in a secure sandbox. Its designers recognized
that just allowing arbitrary code to run was a bad thing, and put in a
security model to restrict what could be done. That is the goal here.

HTML has no security model.
There are demonstrated security, identity, and privacy risks.

Even if there is no man-in-the-middle attack, no hacked server, etc.,
there are ad trackers, web bugs, etc. Many times those are not under
the control of the web page author, because the page you write
becomes just a portion of the full HTML sent to the browser.

And of course, let's not kid ourselves - the majority of users are overwhelmed/uninformed about how to make proper use of SSL (What are CAs? How do I verify a certificate? How do I react to what kind of warnings?). So it's unlikely that something more complex, working on a document basis or on fragments of documents, is going to be more successful in actually reaching its goals (i.e. appropriate use, rather than just wide adoption with uninformed/dangerous use).

SSL problems are just plain badly handled. Heck, even SSL non-problems are badly handled.

Design an infrastructure that would permit every machine on the
internet to be given an SSL certificate, and you can solve all the SSL
problems, including people not knowing what SSL or a CA is. Anything
short of that will not solve the problem.

As for SSL, even the successes: if a site I'm connecting to sends a
valid certificate, then I want to know how it's known to be valid.
Tell me that the browser's list of valid root certificates is itself
signed, and tell me the fingerprint of that signature. One such for
Microsoft, one such for Netscape, and a third such for Opera, and you
have a simple "Verify these 16 bytes against this piece of paper, and
you know you can surf without forgery".
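The "16 bytes against a piece of paper" check could be sketched like this (file path and the printed value are hypothetical; MD5 is used only because its digest happens to be 16 bytes, matching the example above):

```python
# Sketch: verify the browser's root-certificate store against a
# fingerprint published out of band (e.g. printed on paper).
# The path "root-certs.pem" is a made-up placeholder.
import hashlib

def store_fingerprint(path):
    """Return the 16-byte MD5 digest of the root store, as hex."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# Compare by eye against the value printed on paper:
# printed = "9e107d9d372bb6826bd81d3542a419d6"   # hypothetical
# assert store_fingerprint("root-certs.pem") == printed
```

If the store has been tampered with, the digests won't match - which is exactly the question raised below about how you know your list of trusted roots hasn't been compromised.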

(Or how do you know that your list of trusted roots has not been compromised?)