
Re: [tor-talk] Making a Site Available as both a Hidden Service and on the www - thoughts?



> You mean doing MITM before the browser gets to its Tor router?
> No wait, you mean browsers stupidly trying to DNS resolve .onion thus
> paving the way for a MITM attack whenever Tor isn't installed?

Both, I guess. I was thinking of where you have Tor running and then
routing to it (whether that's a gateway device or on your local machine)
rather than connecting via SOCKS. Essentially if there's a REDIRECT
iptables rule.

So Browser -> Port 53 -> Router/Tor rather than Browser -> SOCKS -> Tor.
A simplistic explanation would be anywhere that quietly dropping
127.0.0.1  foo.onion into /etc/hosts would break foo.onion for you. But,
as I say, if you're at the point where you can make changes on your
target's machine, there are better ways of getting what you want :)
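
To make the distinction concrete, here's a minimal Python sketch (using
the requests library, a hypothetical foo.onion name and the usual SOCKS
port - all assumptions) of the two resolution paths. With socks5h:// the
hostname is handed to Tor and resolved inside the Tor network; a plain
getaddrinfo() goes to the system resolver, which is exactly where the
quiet /etc/hosts entry (or anything answering on port 53) gets to decide
for you:

    # Minimal sketch, assuming Tor's SOCKS port on 127.0.0.1:9050 and a
    # hypothetical foo.onion address. Needs requests[socks] installed.
    import socket
    import requests

    ONION = "foo.onion"   # hypothetical

    # Path 1: socks5h:// hands the *hostname* to Tor, which resolves it
    # internally; the system resolver and /etc/hosts never see it.
    proxies = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}
    r = requests.get("http://%s/" % ONION, proxies=proxies, timeout=60)

    # Path 2: the "Browser -> Port 53 -> Router/Tor" case. The name is
    # resolved locally first, so whatever /etc/hosts or the local
    # resolver returns wins - including a quiet "127.0.0.1 foo.onion".
    # (Without a transparent-Tor DNS setup this simply fails.)
    addr = socket.getaddrinfo(ONION, 80)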


> I can imagine a relevant chunk of Tor audience that has a Tor router
> running just to access the onions, not for anonymization of regular
> web stuffs (which they still perceive as toooo sloooow...)

Well, I can't be the only one, right? :)


> I think you're putting too much thought into making your site
> available as both hidden service and on www. It's not about if you
> can, or should you do it. It can be reduced to one thing: do you want
> to hide the origin server for the hidden service? If yes, you have to
> consider the complexity of keeping both services partitioned from each
> other. If not, then, well, you get the point.

Perhaps it's because I'm primarily an Ops guy, but I'd say it's more
complicated than that. 

It's about whether it can be done without breaking _anything_, without
introducing unnecessary additional security risk (for example, if the
existing application-level protections needed to be disabled, would I
want to continue?), and also about assessing what other potential risks
there are, and how best to mitigate them.

The site's live, carrying real traffic, and may contain other people's
personal information - so for me, plan -> mitigate -> test -> action is
the only way to look at such a change :)

Take the duplicate content penalty, for example: whilst it's not like I
lose my monthly income if I get it wrong, I'd still like to make sure I
avoid getting hammered in the search indexes.

Getting compromised because I failed to adequately think about the
potential security implications would be, at best, extremely
embarrassing, and at worst could see customer data taken (though I don't
think that's too likely, it still has to be considered a potential
outcome).

We're not talking about a service where millions of people will notice a
few seconds of downtime, but we are talking about a service that's live
in the wild, so IMO the principle remains the same (with some
exceptions).


> == Duplicate Content Penalty Risk ==
> ...snip...
> - and the block is easy: Tor2web already issue an "X-Tor2web" header 
> which you can detect, and then reject the connection with a helpful
> message. If you choose this route then the risk is mitigated somewhat.

Perfect, thanks! That's a much cleaner way to mitigate, and as you say,
it means that legitimate users can be told to visit either the HTTPS
site or the .onion directly.
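
For reference, the check is small enough to sketch. This is my
assumption of how it'd look in a Flask-style app (placeholder
addresses; the only thing relied on is the presence of the X-Tor2web
request header):

    # Minimal sketch, assuming Flask; example.com and the .onion name
    # are placeholders for the real addresses.
    from flask import Flask, request

    app = Flask(__name__)

    REJECTION = ("This site is reachable directly - please use "
                 "https://www.example.com/ or http://examplexxxxxxxx.onion/ "
                 "rather than a Tor2web gateway.")

    @app.before_request
    def block_tor2web():
        # Reject proxied requests with a helpful message, so the
        # Tor2web copy of the site never gets crawled or cached.
        if request.headers.get("X-Tor2web"):
            return REJECTION, 403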


> As observed elsewhere, we tell our infrastructure that any traffic inbound 
> from the Facebook onion site is sourced from the DHCP broadcast 
> network (169.254/whatever).

Nice, that'd give a good middle ground where the existing protections
can be left at their current level (so no weakening for the HTTPS site)
without someone tripping a protection making the .onion unavailable for
everyone else for the length of the ban.

I'm assuming you're pushing an IP in that range into the X-Forwarded-For
header?
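
Something along these lines is what I had in mind - a sketch of my
assumption, not of whatever you're actually doing (WSGI middleware; the
onion-facing listener port and the choice of a random link-local
address per request are both made up for illustration):

    # Minimal sketch: stamp requests arriving via the hidden-service
    # listener with a synthetic source address from 169.254.0.0/16, so
    # downstream rate-limiting and logging never see 127.0.0.1.
    import random

    ONION_LISTEN_PORT = "8081"   # assumption: the HiddenServicePort target

    class OnionSourceTagger:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            if environ.get("SERVER_PORT") == ONION_LISTEN_PORT:
                fake_src = "169.254.%d.%d" % (random.randint(0, 255),
                                              random.randint(1, 254))
                environ["HTTP_X_FORWARDED_FOR"] = fake_src
                environ["REMOTE_ADDR"] = fake_src
            return self.app(environ, start_response)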


> Facebook generates a _lot_ of absolute URLs but we also mostly 
> comprise dynamically-rendered content; traffic which comes from 
> the onion site is tagged with a flag to denote that "when you are 
> rendering a URL for this request, use the '.onion' address rather 
> than the '.com' one".

Excellent, that was pretty much the line of thinking I was following
for having the origin handle any absolute references (like the static
content).
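
In code, roughly this shape (a minimal sketch with placeholder
hostnames; the real thing would hang off whatever flag gets set on the
request):

    # Minimal sketch: build absolute URLs for whichever host the visitor
    # arrived on. Hostnames are placeholders.
    WWW_HOST   = "www.example.com"
    ONION_HOST = "examplexxxxxxxx.onion"   # assumption: the HS hostname

    def absolute_url(path, via_onion):
        # Use the onion host (plain HTTP on the first outing) when the
        # request was tagged as arriving over the hidden service.
        host = ONION_HOST if via_onion else WWW_HOST
        scheme = "http" if via_onion else "https"
        return "%s://%s%s" % (scheme, host, path)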


> Here's a cute idea which we haven't tried yet, but are considering: 
> if you are running with "real" SSL on your onion site you can enable 
> "Content Security Policy" (CSP)
> 
> http://en.wikipedia.org/wiki/Content_Security_Policy
>
> ...and it may be possible to configure CSP on your onion site such that 
> any link-clicks that go to your WWW/non-onion site are reported 
> (via POST) to an onion endpoint, permitting you to (ideally) go fix the 
> URL-rendering leak. Not tried it yet, though.

That's a nice idea; probably not going with HTTPS on the first outing,
but I'm definitely making a note of that on my to-do list.
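
For when I do get round to it, this is roughly what I'd expect it to
look like - a sketch only: report-only mode, placeholder hostnames, and
whether violation reports behave sensibly when posted to an onion
endpoint is exactly the untested bit:

    # Minimal sketch: a report-only CSP for the onion vhost that flags
    # anything still being loaded from the www origin. Hostnames are
    # placeholders.
    ONION = "examplexxxxxxxx.onion"

    CSP = ("default-src 'self' http://%s; "
           "report-uri http://%s/csp-report") % (ONION, ONION)

    def add_csp_header(response):
        # e.g. registered as a Flask after_request hook on the onion vhost
        response.headers["Content-Security-Policy-Report-Only"] = CSP
        return response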


> It does depend on your threat model, and if the threat model of your 
> interlocutor is themed around "problems which are most intuitively 
> solved by introducing a certificate authority", then I wish you courage, 
> fortitude and patience. My position now is "why not have both?" - and 
> I expend time trying to fix internet policy to permit both in the widest 
> range of circumstances.
>
> This strikes me as wise because Mozilla:
> 
> https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/
> 
> ...will soon be gating new features to work _only_ on HTTPS.

Without wanting to start a thread-in-a-thread, I've definitely got mixed
feelings on that one. I think most sites should be using HTTPS, but I
think there are also cases where HTTPS genuinely may not be
needed/desirable. 

Whether or not that's correct, I don't think my _browser_ should be the
one to dictate that (though I recognise it's a good way to add momentum
to the issue). If a site I visit only uses HTTP, it's potentially me
(the powerless visitor) who's getting penalised by Mozilla there (the
example they give is access to new hardware capabilities).

Anyway, that's a completely different topic so I'll cut that ramble off
before it really begins.


> == if someone connects to you from Tor, redirect them to the Onion ==
> ...snip...
> Consider carefully any activity which might surprise people who access 
> your site over Tor.

Thinking about it, I suppose you're right - especially if I do find
myself in a position of having to disable specific functionality because
it won't work with the .onion (but for some theoretical reason does with
an exit). I guess the analogy at that point would be being redirected to
the dumbed-down mobile site which doesn't contain the functionality you
wanted to use.
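
If I do end up distinguishing Tor visitors at all, it'd more likely be
a gentle "the .onion is over here" banner than a forced redirect. A
sketch of the detection side (the exit-list URL is an assumption -
check what the Tor Project currently publishes - and in practice you'd
refresh the list out-of-band rather than at import time):

    # Minimal sketch: load the published list of Tor exit IPs and check
    # the client address against it. Suggest the onion; don't force it.
    import urllib.request

    EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

    def load_exit_ips():
        with urllib.request.urlopen(EXIT_LIST_URL, timeout=10) as resp:
            return {line.strip()
                    for line in resp.read().decode().splitlines()
                    if line.strip() and not line.startswith("#")}

    EXIT_IPS = load_exit_ips()   # refresh periodically in real use

    def looks_like_tor(client_ip):
        return client_ip in EXIT_IPS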


> If the patch to give each inbound circuit its own temporary "IP
> address" [0] were ever to be committed, then you could potentially use
> off-the-shelf protections to protect HSs. However, the local addresses
> are only ever temporarily unique, because they are derived from the
> circuit ID; the protection application would need to be carefully
> configured so that its timeouts matched the expected durations for
> which a circuit ID is expected to be unique.

For me, that seems a reasonable trade-off, and only slightly different
to the situation today on the www side. If a protection bans your IP,
you're only banned until you acquire a new IP (which might be as simple
as restarting your router if you were coming direct). 

Although it's unlikely, another visitor _could_ have been allocated
the IP you've just released, and so may well not be able to access the
site. That's why bans should be configured to last just long enough to
make brute-force attempts painful, but short enough to mitigate the risk
of being enforced against the wrong end user.
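
To put (purely illustrative) numbers on that trade-off, the shape I have
in mind is a tiny ban table whose lifetime deliberately stays below the
window in which the temporary per-circuit address is expected to remain
unique:

    # Minimal sketch: short-lived, in-memory bans keyed on whatever
    # "address" the protection layer sees. Durations are placeholders,
    # not recommendations.
    import time

    FAIL_THRESHOLD = 10    # failures before a ban kicks in
    BAN_SECONDS = 300      # assumption: shorter than the address-reuse window

    failures = {}          # addr -> (count, first_seen)
    banned_until = {}      # addr -> expiry timestamp

    def record_failure(addr):
        now = time.time()
        count, first = failures.get(addr, (0, now))
        failures[addr] = (count + 1, first)
        if count + 1 >= FAIL_THRESHOLD:
            banned_until[addr] = now + BAN_SECONDS

    def is_banned(addr):
        expiry = banned_until.get(addr)
        if expiry and expiry > time.time():
            return True
        banned_until.pop(addr, None)
        return False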


I got heavily sidetracked yesterday, but hopefully I'll be putting some
time into this today (the initial setup will probably be quick enough;
it's the testing that'll take some time).

Ben
-- 
tor-talk mailing list - tor-talk@xxxxxxxxxxxxxxxxxxxx
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk