Re: [tor-dev] Improving onion service availability during DoS using anonymous credentials
George Kadianakis <desnacked@xxxxxxxxxx> writes:
> Hello list,
>
> there has been lots of discussions about improving onion service availability
> under DoS conditions. Many approaches have been proposed [OOO] but only a few
> have been tried and even fewer show any real improvements to the availability
> of the service.
>
> An approach that we've been considering is the use of anonymous credentials as
> a way to prioritize good clients over bad clients. The idea is that the service
> gives tokens to clients it believes to be good, and prioritizes clients with
> tokens over clients without tokens whenever possible. This is a post to start a
> discussion of how such approaches could work and whether they are worth
> pursuing further.
>
> == Preliminaries ==
>
> === When should the access control take place? ===
>
> Doing DoS defenses with anon credentials is all about enforcing access control
> at the right point of the protocol so that the amplification factor of evil
> clients gets cut as early as possible.
>
> Very roughly the phases of the onion service protocol are: descriptor fetch
> phase, intro phase, rendezvous phase. Let's see how those look for the
> purposes of access control:
>
> - Doing the access control during the descriptor fetch stage is something worth
> considering because it's the first phase of the protocol and hence the
> earliest and best place to soak up any damage from evil clients. There is
> already a form of optional access control implemented here called "client
> authorization" and it's worth thinking of what's lacking to make it useful
> against DoS attackers. I'm gonna address this in section [CLIENTAUTH].
>
> - Doing the access control during the introduction phase is another fruitful
> approach. Blocking bad clients during introduction means that they don't get to
> force the service to create a costly rendezvous circuit, and since services
> have a long-term circuit open towards the intro points it makes it easier for
> services to pass access control related data to the intro point. This is
> actually the approach we are gonna be talking most about in this post.
>
> - Finally, doing the access control during the rendezvous phase is way too late
> since by that time the onion service has already spent lots of resources
> catering the evil client, so let's ignore that.
>
> === Entities of an anonymous credential system ===
>
> Anonymous credential systems traditionally have three entities that concern us:
>
> - The Issuer: the entity who issues the credentials/tokens
> - The Prover: the entity who collects tokens and uses them to get access
> - The Verifier: the entity who verifies that tokens are legit and grants/restricts access
>
> In the world of onion services, the Issuer is naturally the onion service, and
> the Prover is the onion service client. The Verifier could either be
> the onion service itself or its introduction points. We will see below how this
> could work and the relevant tradeoffs.
>
> +--------+ +------------+ +--------------------+
> | Client |<-+-+-+-->|Intro point |<--+---+-->|Onion service |
> |(Prover)| |(Verifier?) | |(Issuer)(Verifier?) |
> +--------+ +------------+ +--------------------+
>
>
> === How do tokens get around? ===
>
> A main question here is "How do good clients end up with tokens?". For the
> purposes of this post, we will assume that clients get these tokens in an out
> of band fashion. For example, a journalist can give tokens to her sources over
> Signal so they can use them with Securedrop. Or a forum operator can give
> tokens to old-time members of the forum to be used during a DoS.
>
> A natural chicken-and-egg problem occurs here: how is an onion service
> supposed to give tokens to its users if it's unreachable because of a DoS? We
> realize this is a big problem and we are not sure exactly how to solve it. This
> problem naturally limits the use of anonymous credential solutions, and sorta
> makes them a second-layer of defense since it assumes a first-layer of defense
> that allows operators to pass tokens to the good people. A first-layer approach
> here could perhaps look like PrivacyPass where users get tokens by solving
> CAPTCHAs.
>
> == Anonymous credentials ==
>
> By surveying the anonymous credential literature we have found various types of
> anonymous credential schemes that are relevant for us:
>
> - Discrete-logarithm-based credentials based on blind signatures:
>
> This is a class of anon credential schemes that allow us to separate the
> verifier from the issuer. In particular this means that we can have the
> service issue the tokens, but have the introduction point act as the verifier.
>
> They are usually based on blind signatures like in the case of Microsoft's
> U-Prove system [UUU].
>
After more discussions and investigations, it does seem that
blind-signature-based anonymous credentials are what we are looking for
here. In particular, the ability to separate the issuer and the verifier
is essential to us because it gives us more flexibility, allowing us to:

- Do the verification at either the intro point or the service.
- Allow for third-party token issuers that issue tokens that can be
  later verified by the service or intro points. These third-party token
  issuers can even be on the clearnet, which gives us more options when
  it comes to DoS defences.

For this reason I had a call with Michele Orrù today, who helped me
understand our options and constraints (Michele, feel free to correct me
wherever I'm wrong). In particular, it seems like we have the following
options; none of them is perfect and none of them is ultra-terrible:

1) Blinded RSA signatures

Anon credential schemes that use blinded RSA signatures are the most
old-school approach we have in our arsenal:
https://en.wikipedia.org/wiki/Blind_signature#Blind_RSA_signatures

They are the fastest option we have (in terms of verification
performance -- our main performance concern), and the verification
protocol can be completed in two messages, which fit nicely into our
introduction protocol.

The problem is that their tokens are the biggest. In particular, using
tokens of this kind means that we will have to pass 2048-bit or
4096-bit values around, plus their padding, plus potentially other
stuff.

We would need to look more into whether we can fit these schemes
within our constraints. It seems like Facebook has recently started
using RSA blind signatures in their own anonymous credential system:
https://github.com/siyengar/private-fraud-prevention
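
To make the flow concrete, here's a toy Python sketch of the blind RSA
round trip (textbook RSA with tiny insecure parameters and no padding;
all names and values here are illustrative assumptions, not a proposed
design):

```python
import hashlib
import math

# Issuer's RSA keypair (demo-sized; real tokens need 2048+ bit moduli).
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def full_domain_hash(msg: bytes) -> int:
    # Toy full-domain hash of the token identifier into Z_n.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- Client (prover): blind the token before sending it to the issuer ---
m = full_domain_hash(b"token-id-42")
r = 50                             # blinding factor, coprime to n
assert math.gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n   # m * r^e mod n; hides m from the issuer

# --- Issuer: sign the blinded value without ever learning m ---
blinded_sig = pow(blinded, d, n)   # (m * r^e)^d = m^d * r  (mod n)

# --- Client: unblind to recover a plain RSA signature on m ---
sig = (blinded_sig * pow(r, -1, n)) % n

# --- Verifier (intro point or service): a single local check ---
assert pow(sig, e, n) == m
```

Note the property we care about: the verification step is one modular
exponentiation with no interaction with the issuer at all.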

2) Blinded Schnorr signatures

Another approach would be to use blinded Schnorr signatures over
elliptic curves: https://eprint.iacr.org/2019/877.pdf

This results in a scheme with smaller tokens than RSA but worse
performance. As Jeff mentioned, a token of this kind would be about
100 bytes, where 32 bytes would go to an identifier t, and another 64
bytes to an ed25519 signature. This is the scheme that Microsoft's
U-Prove is using, so perhaps we can use their benchmarks to measure
its performance.

The main problem here is that this is a 3-message protocol where the
first message starts from the verifier. Furthermore, this first
message needs to include a fresh nonce, so we could not just use the
descriptor for this first message. More research is required here to
see if we can still fit it in somehow.

Another spicy thing here is that there is an attack called "ROS" that
applies to these schemes. This obscure attack allows the adversary to
obtain an extra issued token by opening multiple simultaneous
issuance connections to the issuer. There is a fix (in section 5 of
the 877.pdf paper above) with an equally weird countermeasure where
users, during token issuance, execute two parallel runs of the
issuance protocol but only finish one of them at random... As Jeff
mentioned, it's likely that this attack won't apply to or affect our
DoS threat model, but it's something we should definitely keep in mind.
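
For intuition, here's a rough Python sketch of the 3-message blind
Schnorr flow (framed here as token issuance), over a tiny
multiplicative Schnorr group instead of an elliptic curve. The
parameters and names are illustrative assumptions, and the real scheme
in 877.pdf has important details this omits (e.g. the ROS
countermeasure):

```python
import hashlib
import secrets

q = 1019          # prime order of the subgroup (toy size)
p = 2 * q + 1     # safe prime (2039)
g = 4             # generator of the order-q subgroup

x = secrets.randbelow(q - 1) + 1   # issuer's secret key
X = pow(g, x, p)                   # issuer's public key

def challenge(R_val: int, msg: bytes) -> int:
    # Fiat-Shamir style challenge over the commitment and the message.
    h = hashlib.sha256(str(R_val).encode() + msg).digest()
    return int.from_bytes(h, "big") % q

msg = b"token-id-42"

# Message 1 (issuer -> client): fresh nonce commitment R
r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)

# Client blinds R with (alpha, beta) and derives the blinded challenge c
alpha = secrets.randbelow(q)
beta = secrets.randbelow(q)
R_prime = (R * pow(g, alpha, p) * pow(X, beta, p)) % p
c = (challenge(R_prime, msg) + beta) % q

# Message 2 (client -> issuer): blinded challenge c
# Message 3 (issuer -> client): response s; issuer never sees msg
s = (r + c * x) % q

# Client unblinds; (R_prime, s_prime) is the final unlinkable token
s_prime = (s + alpha) % q

# Verifier: standard Schnorr check, no interaction with the issuer
assert pow(g, s_prime, p) == (R_prime * pow(X, challenge(R_prime, msg), p)) % p
```

The point of the sketch is the message ordering: the issuer must speak
first with a fresh R, which is exactly what makes it awkward to
piggyback on the descriptor.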

3) Blinded BLS signatures

The final approach here is doing blinded signatures using the BLS
signature scheme:
https://en.wikipedia.org/wiki/Boneh%E2%80%93Lynn%E2%80%93Shacham
https://crypto.stackexchange.com/a/12832 (yes, that's an SE link...)
https://hackmd.io/@benjaminion/bls12-381

This results in a 2-message verification protocol with small tokens
(about the size of blinded Schnorr sigs), but worse performance than
all of the above schemes (about 25ms, eyeballed, but we should look at
https://crypto.stanford.edu/pbc/ for more information).
Furthermore, this is a pairing-based cryptosystem, so it comes with
the bleeding-edge-wow factor of pairings. It seems like various groups
(including zcash and ethereum) have been using BLS signatures, but
none of them have used it for blind signatures, so we would be in
for a ride.
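
For reference, here is a sketch of one way blinding could work on top
of BLS (the notation below is my own assumption, not taken from any of
the links above): issuer secret x, generators g_1 in G1 and g_2 in G2,
public keys X_1 = g_1^x and X_2 = g_2^x, hash-to-G1 H, pairing e.

```latex
\begin{align*}
  \text{client blinds:}   \quad & h = H(m), \qquad
                                  \bar{h} = h \cdot g_1^{\,r}
                                  \quad (r \text{ random}) \\
  \text{issuer signs:}    \quad & \bar{\sigma} = \bar{h}^{\,x}
                                  = h^{x} \cdot X_1^{\,r} \\
  \text{client unblinds:} \quad & \sigma = \bar{\sigma} \cdot X_1^{-r}
                                  = h^{x} \\
  \text{verifier checks:} \quad & e(\sigma,\, g_2)
                                  \stackrel{?}{=} e(H(m),\, X_2)
\end{align*}
```

Issuance stays 2-message and the final token is an ordinary BLS
signature; the pairing in the last line is what costs the ~25ms.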

All in all, these are the options we seem to have. There are tradeoffs
to all of them, and potential blockers with some of them (e.g. the
3-message flow of Schnorr sigs). We should dig deeper into all these
schemes, and also talk with more people in case there are schemes we
are missing or things I misunderstood about these ones.

Finally, anonymous credentials based on the above schemes are
lightweight "use-token-and-forget-it" token schemes. It's not as easy
to encode complicated attributes into them, or make them accumulate
reputation, or blacklist them. To do revocation/blacklisting here you
basically have to stop issuing new tokens to the bad parties, but they
still get to use their old tokens, since those are indistinguishable
from any other tokens issued. It might still be possible to achieve
some of that by splitting users into different groups by using
different keys for each group, so that additional information can be
encoded in the type of group used.
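
As a toy illustration of the per-group-key idea (using plain textbook
RSA signatures as a stand-in credential; all names and parameters are
made up, and this skips the blinding step entirely):

```python
import hashlib

def make_key(p, q, e=17):
    # Textbook RSA keypair with toy parameters (never use in practice).
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return {"n": n, "e": e, "d": d}

# One keypair per user group. Dropping a group's key stops future
# issuance for that group, but tokens already in the wild keep working.
keys = {"old-timers": make_key(61, 53), "newcomers": make_key(67, 71)}

def full_domain_hash(msg: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def issue(group: str, token_id: bytes) -> int:
    k = keys[group]
    return pow(full_domain_hash(token_id, k["n"]), k["d"], k["n"])

def verify(token_id: bytes, sig: int):
    # The verifier learns only which group key validates the token,
    # not which individual client is presenting it.
    for group, k in keys.items():
        if pow(sig, k["e"], k["n"]) == full_domain_hash(token_id, k["n"]):
            return group
    return None

sig = issue("old-timers", b"token-7")
assert verify(b"token-7", sig) == "old-timers"
```

So a service could, for example, give "old-timers" tokens priority over
"newcomers" tokens during a DoS, at the cost of partitioning its users
into distinguishable groups.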
That's it for now.
Cheers!
_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev