Re: [tor-talk] HSTS forbids "Add an exception" (also, does request URI leak?)
Have you thought about running your own email server?
From: tor-talk [mailto:tor-talk-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Need Secure Mail
Sent: Wednesday, August 8, 2018 12:28 PM
Subject: Re: [tor-talk] HSTS forbids "Add an exception" (also, does request URI leak?)
On August 8, 2018 1:57 PM, Matthew Finkel <matthew.finkel@xxxxxxxxx> wrote:
> Right. This is the recommendation in the RFC. It would be
> counter-productive if the webserver informed the browser that the
> website should only be loaded over a secure connection, and then the
> user was given the option of ignoring that. That would completely
> defeat the purpose of HSTS.
>  https://tools.ietf.org/html/rfc6797#page-30
> Section 12.1
Thanks, I was already quite familiar with the RFC. I know its rationale.
But it is an absolute rule that *I* get the final word on what my machine does. That is why I run open-source software, after all. I understand that most users essentially must be protected from their own bad decisions when faced with clickthrough warnings. I have read the pertinent research. It's fine that the easy-clickthrough GUI button is removed by HSTS. However, if *I* desire to "completely defeat the purpose of HSTS", then I shall do so, and my user-agent shall obey me. I understand exactly how HSTS works, and I know the implications of overriding it.
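For the record, one practical way to get that final word rests on an assumption about Firefox/Tor Browser internals rather than any documented interface: Firefox has kept learned HSTS entries in a `SiteSecurityServiceState.txt` file in the profile directory, and deleting that file while the browser is closed discards all cached STS state. A minimal sketch (the file name is an implementation detail and may change between versions):

```python
from pathlib import Path

def clear_cached_hsts(profile_dir: Path) -> bool:
    """Delete Firefox's on-disk HSTS cache.

    Run only while the browser is closed; the browser recreates the
    file on its next start. Returns True if a file was removed.
    """
    state = profile_dir / "SiteSecurityServiceState.txt"
    if state.exists():
        state.unlink()
        return True
    return False
```

This does not "break" HSTS for anyone else; it only tells my own user-agent to forget what it learned, which is exactly the control I am asserting above.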
>> This error made me realize that Tor Browser/Firefox must load at
>> least the response HTTP headers before displaying the certificate
>> error message. I did not realize this! I reasonably assumed that it
>> had simply refused to complete the TLS handshake. No TLS connection, no way to know about HSTS.
> Why? There are three(?) options here:
> 1) The domain is preloaded in the browser's STS list, so it knows
> ahead of time if that site should only use TLS or not.
Although I did not check the browser's preload list, I have observed this on a relatively obscure domain very unlikely to be on it...
> 2) The domain is not in the preloaded list, so the browser learns
> about the website setting HSTS on its first successful TLS connection
> and HTTP request.
...as to which I had never yet successfully made a TLS connection in that temporary VM, with a fresh Tor Browser instance which had never before visited *any* sites...
> 3) The user previously loaded the site and the browser cached a STS
> value for that domain.
...and thus of course, could not save anything from previous loads of the site. My whole browsing setup is amnesiac. I literally use a new VM with "new" Tor Browser installation for each and every browsing session. No cached STS value!
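The three options above reduce to a simple lookup, which makes the amnesiac case obvious. A sketch (the names are illustrative, not Firefox's actual internals):

```python
import time

# 1) compiled-in STS preload list (hypothetical contents)
PRELOAD = {"example.com"}

# 2)/3) host -> expiry timestamp, learned from a previous successful
# TLS connection's Strict-Transport-Security header
cached_sts: dict[str, float] = {}

def host_requires_https(host: str) -> bool:
    if host in PRELOAD:                              # option 1: preloaded
        return True
    expiry = cached_sts.get(host)
    if expiry is not None and expiry > time.time():  # option 2/3: cached
        return True
    return False  # never preloaded, never visited: the browser cannot know
```

With a fresh profile the cache is empty, so for an obscure, never-visited domain not on the preload list, `host_requires_https` can only return False.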
>> Scary. How much does Tor Browser actually load over an
>> *unauthenticated* connection? Most importantly, I am curious, does it
>> leak the request URI path (including query string parameters) this
>> way? Or does it do something like a `HEAD /` to specifically check
>> for HSTS? No request headers, no response headers, no way to know
>> about HSTS. Spies running sslstrip may be interested in that.
> No? This was one of the main goals of HSTS. It should prevent SSL
> stripping (for some definitions of prevent).
Key phrase: "for some definitions of prevent".
Simple deduction: For a site not in the STS preload list and never before visited, the only means for the user-agent to learn about STS is to receive an HTTP response header. The only means to receive an HTTP response header is to send an HTTP request. Assume the browser does not make an HTTP request: how, then, does it know that the site uses STS?
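That in-band signal is the `Strict-Transport-Security` response header (RFC 6797), which by construction the browser can only see after it has already sent a request. A small parser for the RFC's directive syntax, as an illustration:

```python
def parse_sts(value: str) -> dict:
    """Parse a Strict-Transport-Security header value into directives,
    e.g. "max-age=31536000; includeSubDomains" (RFC 6797 syntax)."""
    directives = {}
    for part in value.split(";"):
        name, _, arg = part.strip().partition("=")
        # valueless directives (includeSubDomains) map to True
        directives[name.lower()] = arg.strip('"') or True
    return directives

d = parse_sts("max-age=31536000; includeSubDomains")
print(d["max-age"])  # 31536000
```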
The HTTP request headers themselves may be useful to spies. Without the request headers, a network eavesdropper only knows the hostname of the request (via SNI, RFC 6066). With the request headers:
0. The request path informs the eavesdropper about which news articles I am reading on www.newspaper.dom, which people I communicate with on www.socialmedia.dom, etc.
1. Query string parameters, if any, are exposed. On many sites, this can be a severe privacy problem. On some (badly-designed) sites, it can also be a security issue.
2. Some browser fingerprint information is exposed. This is a lesser issue with Tor Browser requests from a Tor exit; any TCP/443 traffic from a Tor exit can be presumed to be Tor Browser unless demonstrated otherwise. However, the principle with TLS should be: do not expose anything on the network which is not exposed by TLS itself (or lower network layers).
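To make items 0 and 1 concrete, here is what a passive eavesdropper recovers from a single stripped (plaintext) request, versus TLS, where only the SNI hostname is visible on the wire (the URL and header values are made up for illustration):

```python
# One plaintext HTTP request, as an sslstrip victim would emit it.
raw = (b"GET /articles/2018/leak?id=42&user=alice HTTP/1.1\r\n"
       b"Host: www.newspaper.dom\r\n"
       b"User-Agent: ExampleBrowser/1.0\r\n\r\n")

# The eavesdropper needs nothing more than the request line.
request_line = raw.split(b"\r\n", 1)[0].decode()
method, target, _version = request_line.split(" ")
path, _, query = target.partition("?")
print(path)   # /articles/2018/leak  -- item 0: which article
print(query)  # id=42&user=alice     -- item 1: the query parameters
```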
The most sensitive information is the request path. If the user-agent wishes to ascertain HSTS status upon a certificate validation error, it could perform a fake `HEAD /` request, as I suggested upthread; indeed, that is the only means of receiving an HSTS response header without potentially leaking the request path to an sslstripper. I do not know whether Tor Browser already does this, and I have not checked carefully. I did glance back through RFC 6797's advice to implementors and saw nothing about this issue.
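The probe I have in mind would look like this; `build_probe` is a hypothetical helper for illustration, not anything Tor Browser is known to implement. If the user-agent must talk to a possibly unauthenticated server at all, this request discloses nothing beyond what SNI already reveals: no real path, no query string.

```python
def build_probe(host: str) -> bytes:
    """Build a fake `HEAD /` request whose only identifying content is
    the hostname, which SNI already exposed during the handshake."""
    return (f"HEAD / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode()

probe = build_probe("www.newspaper.dom")
# The real URI the user wanted never appears on the wire:
assert b"/articles" not in probe
assert b"?" not in probe
```

A server that sets HSTS would include the `Strict-Transport-Security` header in its response to this probe, letting the user-agent learn the STS status before ever transmitting the sensitive path.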
> I'm also not sure if you're referring to public key pinning, as well.
No, I am referring only to HSTS. I have read both RFCs. I would not confuse them.
I would *not* override an HPKP pin, unless I saw it very well documented that a site had committed "pin-suicide". My interest in this thread is overriding HSTS, due to the issue raised by nusenu on the parent thread. There exist sites with valid certificates which are inaccessible in Tor Browser because the webserver configuration omits a needed intermediate certificate from the served chain. If such a site uses HSTS, there is no supported means to override this -- only the semi-undocumented trick I disclosed upthread.
P.S., my apologies to the list that Protonmail stripped all In-Reply-To and References headers when I changed the subject line. Aargh. I need to get back to my normal mail client... See also:
Sent with ProtonMail Secure Email.
tor-talk mailing list - tor-talk@xxxxxxxxxxxxxxxxxxxx
To unsubscribe or change other settings go to