Re: [tor-dev] Draft sketch document with ideas for future crypto ops
What about this for modification resistance?
We keep a count of all cells passing and use AES in CTR mode with a two-part counter: the first part is a cell counter, the second a block counter within the cell. To authenticate each cell we can then use a 16-byte tag from a Wegman-Carter MAC. This gives a total overhead of 48 bytes for a three-hop link, which is half the cited figure, and which is provably as secure as AES.
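Concretely, the idea is something like the sketch below (Python, using the pyca/cryptography package). The key names, the big-endian cell-counter/block-counter packing, and deriving a fresh one-time Poly1305 key from the first keystream bytes of each cell are my illustrative choices, with Poly1305 standing in for the Wegman-Carter MAC:

# Sketch only: AES-CTR with a (cell counter, block counter) split counter and
# a 16-byte Wegman-Carter tag (Poly1305) per cell.  Key names and the
# keystream-derived one-time MAC key are illustrative choices.
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.poly1305 import Poly1305

def encrypt_cell(aes_key: bytes, cell_counter: int, payload: bytes):
    # 16-byte initial counter block: high 64 bits = cell counter,
    # low 64 bits = block counter, starting at 0 for each cell.
    initial_counter = struct.pack(">QQ", cell_counter, 0)
    enc = Cipher(algorithms.AES(aes_key), modes.CTR(initial_counter)).encryptor()

    # Take the first 32 keystream bytes as a fresh one-time Poly1305 key for
    # this cell (Wegman-Carter style: a new MAC key per message).
    poly_key = enc.update(b"\x00" * 32)
    ciphertext = enc.update(payload)

    mac = Poly1305(poly_key)
    mac.update(ciphertext)
    tag = mac.finalize()      # 16 bytes per hop, so 48 bytes on a 3-hop circuit
    return ciphertext, tag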
ChaCha is a component of one of the SHA-3 finalists, namely BLAKE. If BLAKE is selected as SHA-3, that may (read: may) tell us something about the security of ChaCha. On the other hand, the HAIFA construction BLAKE uses doesn't come with much in the way of security proofs, unlike the sponge papers.
ECC groups don't matter much: there are some really bad choices out there, but any attack against Curve25519 is probably going to imply an attack against NIST P-256 as well. It's not that different from picking a particular DH prime: some are bad because p-1 is smooth, but if not, they all look the same.
2012 is coming soon: the schedule says SHA-3 will be announced between March and June. Everything after that is bureaucracy. Why switch to SHA-256 and then to SHA-3 when we won't be done before March anyway?
In general, options are bad in crypto: we should migrate as few times as possible, to avoid attacks that involve degrading a negotiation. As for Certicom, their patents were licensed by NIST. Unfortunately for them, they openly admit on their website that these are mostly implementation patents. To my untrained eye, DJB's analysis is spot on. There is also
http://tools.ietf.org/html/rfc6090, which seems to do something similar.
Sincerely,
Watson Ladd
On Mon, Oct 31, 2011 at 8:25 PM, Nick Mathewson
<nickm@xxxxxxxxxxxxxx> wrote:
Here's an early draft document trying to sketch out the parameters for
migrating to better crypto ops and designs in the future.
Comments are invited, even comments of the form "you will need to be
much more specific here before I can say anything sensible."
This is proposals/ideas/xxx-new-crypto-sketch.txt in the torspec repository.
Title: Design sketch for new crypto ops
Date: 31 Oct 2011
Author: Nick Mathewson
0. Overview
  The point of this document is to discuss what crypto we ought to be using.
  See "Initial Thoughts on Migrating Tor to New Cryptography" from last year
  for general guidelines and principles.

  In broad strokes, the parts of our crypto are:

    IDENTITY KEYS AND FINGERPRINTS
       Addressed here in Section 2.
    LINK CRYPTO (TLS) --
       Addressed in proposals 176, 184.  We say a little here in section 5,
       though.
    CREATE/EXTEND CRYPTO --
       Addressed in xxx-ntor-handshake.txt and rransom's EXTEND draft at
       [*] and subsequent discussion on the tor-dev mailing list.  Not
       considered here.
    RELAY CRYPTO
       Addressed here in Section 6.
    DIRECTORY SYSTEM
       Addressed here.
    HIDDEN SERVICE SYSTEM
       Addressed in a forthcoming document by rransom.
[*] https://lists.torproject.org/pipermail/tor-dev/2011-March/002547.html
1. Base algorithm choice
  There seem to be two main candidate algorithms for signatures: RSA
  with big keys (hereinafter "RSA>1024"); and Ed25519, which is DSA with
  the sharp edges filed off, on an Edwards curve related to DJB's
  Curve25519.  We can look at other ECC groups too.  {But see ECC
  Notes in 1.1 below.}

  FOR DIFFIE-HELLMAN: Curve25519 seems like a decent choice; failing
  that, one of the NIST P-groups.  Failing that, DH on Z_p with big
  groups (hereinafter "DH>1024").  {But see ECC Notes in 1.1 below.}

  FOR A HASH FUNCTION: SHA256, switching to SHA3 in 2012 when it comes
  out.  It might be worthwhile waiting for SHA3 in most places and
  skipping over the SHA256 stage entirely.

  FOR A STREAM CIPHER: AES-CTR is in one sense a conservative choice,
  inasmuch as AES is well-analyzed, but AES's well-known issues with
  cache-based timing attacks are pretty worrisome.  We can mitigate that
  some by using random secret IVs for AES-CTR, so that we will be
  encrypting neither attacker-chosen nor attacker-known plaintext with
  our AES cipher, but that's a bit kludgy.  There are also supposed to
  be time-invariant implementations that use Intel's AESNI instructions
  where available, and time-invariant implementations that use
  bit-slicing.

  Salsa20 is what rransom likes these days, but IMO we aren't competent
  to tell whether it looks good or not; the existing attacks against it
  don't look like very bad news to me, but who knows whether it's
  getting enough attention that we can read.  See also ChaCha; see also
  the other eSTREAM winners/finalists; see also SHA3 if the SHA3 winner
  specifies a way to use it as a stream cipher, or specifies an
  underlying stream/block cipher.

  If we're feeling cautious, we could run two independently-keyed stream
  ciphers and xor their streams together.
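  To be concrete, here is a minimal sketch of that cautious option, assuming
  the pyca/cryptography package; the pairing of AES-CTR with ChaCha20 and the
  parameter names are illustrative, not a decision.  Applying the two stream
  ciphers in sequence is the same as xoring their keystreams together:

# Sketch: run two independently-keyed stream ciphers over the same data; an
# attacker has to break both to recover the combined keystream.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def double_stream_encrypt(aes_key: bytes, aes_iv: bytes,
                          chacha_key: bytes, chacha_nonce: bytes,
                          plaintext: bytes) -> bytes:
    # Layer 1: AES in CTR mode (aes_iv is the 16-byte initial counter block).
    aes = Cipher(algorithms.AES(aes_key), modes.CTR(aes_iv)).encryptor()
    # Layer 2: ChaCha20 (chacha_nonce is the 16-byte counter||nonce block).
    chacha = Cipher(algorithms.ChaCha20(chacha_key, chacha_nonce), None).encryptor()
    # Encrypting with one cipher and then the other == xoring the keystreams.
    return chacha.update(aes.update(plaintext))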
  FOR A RANDOM NUMBER GENERATOR: We currently use OpenSSL seeded with
  RAND_poll and with platform entropy.  OpenSSL uses a message-digest-
  based algorithm from SSLeay (see http://linux.die.net/man/3/sslrand
  for the ugly details).  The platform entropy management can be messy,
  obscure, or both.  I suggest that:

    * We should seed our PRNG with more entropy sources if we can find
      some promising code with an appropriate license.
    * Instead of just using OpenSSL's PRNG, we should use OpenSSL's
      MD-based PRNG xor'd with some other good PRNG.  (Fortuna,
      maybe.  Is there a combine operation better than xor?  See also SHA3
      if the SHA3 winner is one that specifies a PRNG mode of
      operation.)
    * We should consider splicing this combined-stream PRNG into OpenSSL
      as the RNG it uses for SSL and key generation.
    * We should re-seed the RNG before and after very sensitive
      operations, like private key generation.
1.1. ECC notes
  ECC is the brave new[*] crypto of the future!  It's faster[**] than
  doing crypto in Z_n (as we do for RSA and DH now) for equivalent
  levels of security, and the resulting outputs are much shorter.

  As near as I can tell as a layman, Certicom is muddying the waters as
  much as possible wrt claiming that it's nigh-impractical to deploy ECC
  without licensing their patents.  This is rather like the silliness
  that PKP used to pull back in the day, where they claimed that their
  patents covered not only the existing public key cryptography
  algorithms, but also the very idea of public key cryptography itself.

  DJB claims that for every patent he's aware of, either that patent
  doesn't cover his code, or that patent is invalid because of prior
  art.  I'm not going to try to evaluate these claims, since I'm not
  supposed to be reading patents for typical "let's avoid the appearance
  of knowing infringement" reasons.  But before we dive into the world
  of ECC, we should see if we can ask any friendly patent attorneys and
  ECC experts for a second or third opinion here.

  I note in passing that nearly all of the patents that DJB mentions in
  his list would appear to expire over the next 12 months or so.

  Additionally, there are ECC groups out there less fast than DJB's, but
  more widely available and analyzed.  We should consider some of those
  too.

  One final issue to investigate is whether using these algorithms will
  make any major free software distribution decide not to include us.  I
  seem to recall seeing that one or two of the big ones had at one point
  decided to ship OpenSSL only with ECC disabled, either because of real
  patent concerns, or because of an opinion that the Certicom license
  for ECC use in TLS was problematic for free software, or something
  like that.  We should check that out.

  [*] Actually, it's older than onion routing, and older than some
  members of the Tor Project.

  [**] Actually, because of the common practice of choosing a small-ish
  prime value (65537) for e in RSA, RSA public key operations can be a
  little faster than equivalent-security ECDH or ECDSA operations.  The
  private key operations in RSA are still much, much slower.
2. New identities
  Identity keys and their fingerprints are used:

    - To sign router descriptors.
    - To identify nodes in consensus directories.
    - To make sure we're talking to the right node in the link handshake.
    - To make sure that the extending node is talking to the right next
      node when sending an extend cell.
    - To identify particular nodes in the hidden service subsystem.
    - To identify nodes in the UI in various places.
    - Internally, to identify a node uniquely in the codebase.
    - To determine which part of the circuit ID space to use on a Tor
      instance's links.
2.1. New identities, option 1: "RSA>1024, slow migration"
  In this option, we use RSA for identity keys indefinitely.  Nearly all
  operations done with an identity key are signature checking; signing
  happens only a few times an hour per node even with pathological
  cases.  Since signature checking is really cheap with RSA, there's no
  speed advantage for ECC here.  (There is a space advantage, since the
  keys are much smaller.)

  The easiest way to migrate to longer identity keys is to tell all Tors
  to begin accepting longer identity keys now, and to tweak all our
  protocols so that longer RSA identity keys are understood.  We should
  then have a pair of parameters in the consensus that determines the
  largest and smallest acceptable identity key size in the network.
  Clients and servers should reject any keys longer or shorter than
  specified.  Once all versions of Tor can accept long identity keys, we
  raise the maximum size from 1024 to somewhere in the 2048-4096 range.
2.2. New identities option 2: "RSA>1024, faster migration"
  In this option, we use RSA for identity keys indefinitely as above.
  But we allow nodes to begin having longer identities now, even though
  older Tors won't understand them.  This implies, of course, that every
  such node needs to have at least 2 identities: one RSA1024 identity
  for backward compatibility, one RSA>1024 identity for more secure
  identification.

  We would have these identities cross-certify as follows: All keys
  would be listed in the router descriptor.  RSA>1024 keys would be
  called something other than identity-key, so as not to confuse older
  clients.  A signature with the RSA>1024 key would appear right before
  the current RSA1024 signature.  This way, signed material would
  include both keys, and would be signed by both keys.

    [In other words, descriptors would look something like:

     router foo...
     ...
     identity-key
     -----BEGIN RSA KEY-----
     1024-bit RSA key here
     -----END RSA KEY-----
     ext-identity-key
     -----BEGIN RSA KEY-----
     3072-bit RSA key here
     -----END RSA KEY-----
     ...
     ext-signature
     -----BEGIN SIGNATURE-----
     signature of everything through "ext-signature\n",
     using the long key
     -----END SIGNATURE-----
     router-signature
     -----BEGIN SIGNATURE-----
     signature of everything through "router-signature\n",
     using the short key
     -----END SIGNATURE-----
    ]

  See "UI notes" in the "new fingerprints" section below for some of the
  implications of letting nodes have multiple identity keys.

  We'll need to advertise these new identities in consensus directories
  too; see 4.2 below for more info there.
2.3. New identities option 3: "RSA>1024 and/or Ed25519, faster migration"
  As in option 2 above, but new keys can also be Ed25519.  If we expect
  that not all installations will allow Ed25519 (see "ECC Notes",
  section 1.1), we'll need to say that every server with an Ed25519 key
  must also have an RSA>1024 key.
2.4. Implications for current use of identity keys
  Let's review our use of identity keys again and make sure that we can
  handle all of them with the ideas above.

    - To sign router descriptors.

  We discussed this in 2.2.

    - To make sure we're talking to the right node in the link handshake.

  The current v3 link handshake can handle presenting multiple identity
  certificates in the CERT cell.  We should consider ourselves to be
  connected to a node with identity X if _any_ of the identity
  certificates that it presents in its authenticated CERT cell has
  identity X.  To handle EXTEND cells correctly, we should verify every
  identity we can.

    - To make sure that the extending node is talking to the right next node
      when sending an extend cell.

  The new extend cell format needs to allow the client to tell the
  extending node about some identity for the destination node that the
  extending node will be able to understand.  This is a capability of
  the extending node that the client needs to be able to check.  (Also,
  the extend cell needs to hash that identity in a form the extending
  node can understand, but that's a fingerprint issue.)

    - To determine which part of the circuit ID space to use on a Tor
      instance's links.

  We can continue to use RSA1024 identity key comparison here by
  default.  We can also use some other parameter of the v3 handshake, or
  introduce a new link protocol where, if the initiator authenticates,
  the initiator always gets the low circIDs and the responder always
  gets the high ones.

    - To identify nodes in consensus directories.
    - To identify nodes in the UI in various places.
    - Internally, to identify a node uniquely in the codebase.

  See sections 3 and 4 below.

    - To identify particular nodes in the hidden service subsystem.

  Out of scope.
2.5. Migrating away from short ID keys entirely
  Eventually, no version of Tor that requires 1024-bit identity keys will
  remain.  When that happens, we should stop using them entirely.  That
  means that if we take any path other than the "slow migration" path of
  2.1, we'll need to make everything that looks at a node's identity
  also accept nodes with _only_ an RSA>1024/Ed25519 identity.

  At the directory service level, we should have an option to allow
  nodes without RSA1024 identity keys (off until all clients and nodes
  accept new identity keys).
2.6. Selective correctness attacks
  For any scheme based on having multiple signature types on a router
  descriptor or other document, an attacker could mount a partitioning
  attack by making a document which older clients will accept but newer
  clients will reject.

  It's easy to prevent this at the consensus step: directory authorities
  MUST NOT accept any descriptor unless all clients will be able to
  verify it.

  For bridge descriptors, we need to investigate more carefully.
3. New fingerprints
  Right now we compute fingerprints by taking the SHA1 hash of an ASN1
  encoding of the RSA1024 identity key.  We encode this in hex almost
  everywhere, and sometimes prefix it with a $.

  I propose that fingerprints of the future be determined by taking a
  digest using SHA256 or SHA3 of:

      "Hash Algorithm Name", "Key Type Name", encoded key

  When representing these internally, we should include the hash
  algorithm that was used.  When representing them in the UI, we should
  use the notation %b64, where b64 is a base-64 encoding, omitting the
  trailing =s.

  (Other plausible characters to use are @, ?, +, ~, =, etc.  I like %,
  but can be persuaded.  Bikeshed bikeshed bikeshed.)

  Since 43 base-64 characters is enough to represent a 256-bit digest,
  with 2 bits left over, I propose that the b64 value encode

      hh | D(hash algorithm name, key type, encoded key)

  where hh is a 2-bit value, with one of the following values:

      00 -- sha256
      01 -- sha3
      10 -- to be determined
      11 -- reserved
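  To be concrete, here is a minimal sketch of one way to pack that 258-bit
  value (the 2-bit hh plus a 256-bit digest) into 43 base-64 characters.  The
  field separator between the labeled inputs and the MSB-first bit packing are
  illustrative assumptions, since the text above leaves them open:

# Sketch: fingerprint = '%' + base64(hh || SHA256(labels, key)), 258 bits in
# exactly 43 base-64 characters.  The NUL field separator and MSB-first
# packing are illustrative assumptions.
import hashlib

B64_ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "abcdefghijklmnopqrstuvwxyz0123456789+/")

def new_fingerprint(hash_name: str, key_type: str, encoded_key: bytes,
                    hh: int = 0b00) -> str:
    assert hash_name == "sha256" and hh == 0b00   # only sha256 sketched here
    digest = hashlib.sha256(
        hash_name.encode() + b"\0" + key_type.encode() + b"\0" + encoded_key
    ).digest()
    # Prepend the 2-bit hh field to the 256-bit digest: 258 bits in total.
    n = (hh << 256) | int.from_bytes(digest, "big")
    chars = [B64_ALPHABET[(n >> shift) & 0x3F] for shift in range(252, -1, -6)]
    return "%" + "".join(chars)                   # 43 chars, no trailing '='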
  We should investigate in the interface whether it's plausible to allow
  a prefix of a node ID where the full ID would otherwise be required.
  That seems risky for short prefixes, though.
3.1. How many fingerprints is that anyway?!
  Suppose that we allow sha256 and sha3 as hash algorithms, and we allow
  each node to have 3 identity keys: one RSA1024, one RSA>1024, and one
  ECC.  Then we would have 7 fingerprints (6 plus the legacy
  SHA1(RSA1024) fingerprint), for a total of 20+6*32 == 212 bytes per
  node.

  It's not a horrible problem to accept them all in the UI, but the UI
  isn't the only place that needs to know fingerprints.  Instead, let's
  say that RSA1024 identities are only identified with SHA1 hashes.
  This limits our fingerprint load to a more manageable 20+32*2 == 84
  bytes per node.  Still not great, though.
3.2. What does this imply for the UI?
  In the UI we'll lose the property that no node has more than one
  fingerprint: I do not believe that this actually hurts us.
3.3. Implications for directory information
  Clients must know a hash for each node's identity key, or else they
  can't make an authenticated connection to the node or tell ORs how to
  extend to the node.

  This means that if client Alice wants to connect to node Bob, Alice
  must have a fingerprint of Bob's ID key such that she understands the
  ID key type and the fingerprint algorithm.  If Alice wants to extend
  from Bob to Carol, she must have a fingerprint of Carol's ID key such
  that Bob understands the ID key type and the fingerprint algorithm.

  So for every node, Alice must not only know a fingerprint that *she*
  can use for that node, but also a set of fingerprints such that every
  node can understand at least one fingerprint in the set.

  This implies a proliferation of fingerprints!  We should tread
  carefully here.  To prevent proliferation, the easiest solution is not
  to add too many new types, and to have a good plan for retiring older
  types.
3.4. Implications for EXTEND cells
  As mentioned in 3.3, when a client Alice tells node Bob to extend
  to node Carol, she needs to give Bob a fingerprint for Carol that Bob
  will understand: one where Bob understands the digest algorithm, and
  understands the identity key type.

  There are two ways we can do this:

    1) Alice's EXTEND cell contains every fingerprint for Carol that
       Alice knows about.  Bob treats the cell as valid if every one he
       can verify is correct.
    2) Alice knows which fingerprint types Bob understands (either via
       his version, or something else in his directory info).  She
       selects a fingerprint for Carol using the best one of these
       types.

  The first seems more robust to me, if we have space for enough bytes.
  If we proliferate too many types, though, we'll need to do the second.
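  To be concrete, a minimal sketch of the validation rule in approach 1.  The
  function and field names are illustrative; the identity key here is the one
  the target presented in the link handshake, and requiring at least one
  checkable fingerprint is an extra safety assumption not stated above:

# Sketch of approach 1: a relay validating an EXTEND cell that carries several
# (digest type, fingerprint) pairs for the target.  target_identity_key is the
# encoded identity key the target presented in the link handshake.
import hashlib

KNOWN_DIGESTS = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}

def extend_target_ok(claimed_fingerprints, target_identity_key: bytes) -> bool:
    checked_any = False
    for digest_type, fingerprint in claimed_fingerprints:
        fn = KNOWN_DIGESTS.get(digest_type)
        if fn is None:
            continue                  # a type this relay doesn't understand
        checked_any = True
        if fn(target_identity_key).digest() != fingerprint:
            return False              # any verifiable mismatch kills the cell
    return checked_any                # require at least one checkable type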
4. Directory changes
4.1. Better cross-referencing
  In some places, directory objects cross-reference one another by SHA1
  hash.  They should use a better hash algorithm instead.

  This does make problems in a few cases.

  Router descriptors and extrainfo descriptors:

    One problematic case is in determining node families.  If node A
    and node B want to list each other as being in the same family,
    they need to do so in a way that clients can interpret.  That could
    mean listing SHA1-RSA1024 fingerprints so old clients understand,
    AND new fingerprints for security.  (But *that* could create
    interesting partitioning attacks wherein your family looks
    different depending on who's looking.)

      Solution: we need to move the responsibility for combining node
      families into the consensus voting process, so clients don't
      need to understand the cross-reference types themselves.

    Another case is in certifying extrainfo documents from descriptors.
    For that, we can list multiple extrainfo digests, either on the
    extrainfo line, or on additional lines.

  Voting and consensus documents:

    Adding more fingerprints in votes isn't a problem; votes are a tiny
    fraction of authority bw usage.  Adding more hashes is easy.

    For consensus documents, we ought to have flavors that you can
    download depending on what set of fingerprint types you
    understand.

    For integrity purposes, consensuses can refer to microdescriptors
    or descriptors by any digest type that the client understands.  But
    for downloading purposes, the digest type must be one that
    directory caches also support: see 4.4.
4.2. More fingerprints
  Because extending from node A to node B requires that we have node B's
  fingerprint in a way that node A will understand, it is not enough to
  get a set of identity fingerprints for each node in the format that
  the client likes best -- see 3.3 and 3.4 above.  So every flavor of
  consensus we serve needs to include a node identity in a format the
  client understands, and node identities in formats such that every
  node will understand at least one.
4.3. An option: compound signatures on directory objects
  In Tor 0.2.2.x and later, when we check a signature on a directory
  object (not including hidden service descriptors), we only look at
  the first DIGEST_LEN bytes of the RSA-signed data.  Once 0.2.1.x is
  obsolete, or on any types of signatures not checked in 0.2.1.x, we
  can use the rest of the space.  (We're using PKCS1 padding on our
  signatures, which has an overhead of 11 bytes.  Signing a SHA1 hash
  with a 1024-bit key therefore leaves 128-11-20 == 97 more bytes we
  could use for a SHA2 or a SHA3 hash.)
4.4. Downloading by digest
  We should have directory caches support downloading objects by more
  hash types.  Right now, descriptors are downloaded by their SHA1
  hashes and microdescriptors by their SHA256 hashes.  This is okay for
  now, but once SHA3 is out, we should support downloading all of these
  by SHA3 digest.
5. Link crypto changes
  Currently we use TLS.  That's fine.

  We should, however, look to longer link keys, bigger DH groups, etc.
  Once TLS versions 1.1/1.2 are available in OpenSSL, we should move to
  use them, I think.  We should also look into how quickly we can
  deprecate TLS 1.0 and SSL <= 3 usage.
6. Relay crypto changes
  There are a few things we might want out of improved relay crypto.
  They include:

    - Resistance to end-to-end bitwise tagging attacks.
    - Better resistance to malleability.
    - If using counter mode, no block-cipher operations on any value
      known to the attacker.

  I'll try to provide these in increasing order of difficulty.  None of
  these is necessarily correct; I should look for a security proof or a
  better construction for any that we seem likely to use.

  Rationales: Our existing malleability resistance is a kludge.  Doing
  no block-cipher ops on attacker-known values increases our security
  margins a little.  Our arguments about tagging attacks hold that an
  attacker who controls both ends has plenty of ways to win even if
  tagging attacks are foiled; nonetheless, most of these ways are
  technically slightly more difficult than xor-based tagging, and it
  could be useful to boost our defense-in-depth a little bit, just in
  case other active end-to-end attacks turn out to be harder than we'd
  thought.
6.1. Option 1: Use AES-CTR in a less scary mode
  When doing key expansion, in addition to establishing Kf, Kb, Df, and
  Db, also establish IVf and IVb.  Use the current relay crypto, except
  instead of starting the counters at 0, start them at IVf and IVb.

  This way, an attacker doesn't have any known plaintexts to work with,
  which makes AES a little more robust.
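  To be concrete, a minimal sketch of this option, assuming the
  pyca/cryptography package; the HKDF call stands in for whatever key
  expansion the CREATE handshake actually uses, and the names are
  illustrative:

# Sketch: AES-CTR relay crypto whose counter starts at a secret IV taken from
# the circuit key expansion instead of at zero.  HKDF here is purely
# illustrative; Tor's real KDF differs.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def make_forward_layer(shared_secret: bytes):
    # Derive Kf (16 bytes) and IVf (16 bytes) for the forward direction.
    okm = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"relay-forward").derive(shared_secret)
    kf, ivf = okm[:16], okm[16:]
    # The secret IVf is the initial counter block, so the attacker never sees
    # AES applied to a counter value it knows.
    return Cipher(algorithms.AES(kf), modes.CTR(ivf)).encryptor()

# Usage: call .update(cell_body) on the returned encryptor for each cell.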
6.2. Option 2: As 1, but tagging attacks garble the circuit after one block.
  Keep an HMAC of all previously received encrypted cells on a circuit.
  When decrypting a cell, use this HMAC value to determine the first 64
  bits of the counter; increment the low 64 bits of the counter as
  usual.

  This way, if an adversary flips any bits before passing the stream
  through an honest node, no _subsequent_ block will be recoverable.

  To prevent any part of the stream from being re-used, close any
  circuit if the low 64 bits of the counter would ever wrap (that is,
  around 295 million terabytes).

  (If we're using a stream cipher with fast re-key, then we can just
  have the key used for each block be an HMAC of all previously
  received ciphertext.)
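  To be concrete, a minimal sketch of the receiving side (names, cell
  handling, and the choice of HMAC-SHA256 are illustrative assumptions): the
  top 64 counter bits are taken from a running HMAC over all ciphertext
  received so far, while the low 64 bits keep incrementing as usual:

# Sketch: CTR decryption where the counter's top 64 bits come from an HMAC of
# every encrypted cell received so far, so one flipped bit garbles all later
# cells.
import hashlib, hmac, struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class ChainedCtrLayer:
    def __init__(self, aes_key: bytes, mac_key: bytes):
        self.aes_key = aes_key
        self.history = hmac.new(mac_key, digestmod=hashlib.sha256)
        self.low_counter = 0                    # counted in 16-byte AES blocks

    def decrypt_cell(self, encrypted_cell: bytes) -> bytes:
        # High 64 bits depend on every ciphertext bit seen so far on the circuit.
        counter0 = self.history.digest()[:8] + struct.pack(">Q", self.low_counter)
        dec = Cipher(algorithms.AES(self.aes_key), modes.CTR(counter0)).decryptor()
        plaintext = dec.update(encrypted_cell)
        # Fold this cell into the history and advance the low counter as usual;
        # per the text above, the circuit must be closed before it can wrap.
        self.history.update(encrypted_cell)
        self.low_counter += (len(encrypted_cell) + 15) // 16
        assert self.low_counter < 2 ** 64
        return plaintext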
6.3. Option 3: As 1, but tagging attacks garble the circuit in the same block.
  Use a large-block cipher mode, such as BEAR or LIONESS (depending on
  whether we need a PRP or SPRP).  Base the key material for each block
  on an HMAC of all previous blocks' ciphertexts.

  This way, if an adversary makes any alteration in a block, that block
  and all subsequent blocks will be garbled.  It's more expensive than
  2, though, especially if we need to use a LIONESS construction.

  {I considered IGE here, with a trick where odd-numbered nodes on a
  circuit start from the front of the block and even-numbered nodes
  start from the end, but it didn't seem much better.  We should
  investigate relative performance, though.}
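  For reference, a minimal BEAR sketch, with HMAC-SHA256 as the keyed hash,
  ChaCha20 as the stream cipher, and the per-block keys derived from an HMAC
  over previous ciphertexts as suggested above; all of these primitive choices
  and names are illustrative assumptions:

# Sketch: BEAR encryption of one cell-sized block (must be longer than the
# 32-byte hash).  A flipped ciphertext bit garbles the whole block, and
# chaining the keys to previous ciphertexts garbles every later block too.
import hashlib, hmac
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _stream(seed32: bytes, n: int) -> bytes:
    # ChaCha20 keystream keyed by the 32-byte left half; the zero counter/nonce
    # is fine because each seed is used only once.
    enc = Cipher(algorithms.ChaCha20(seed32, b"\x00" * 16), None).encryptor()
    return enc.update(b"\x00" * n)

def bear_encrypt(k1: bytes, k2: bytes, block: bytes) -> bytes:
    left, right = block[:32], block[32:]        # |left| == the hash size
    left = _xor(left, hmac.new(k1, right, hashlib.sha256).digest())
    right = _xor(right, _stream(left, len(right)))
    left = _xor(left, hmac.new(k2, right, hashlib.sha256).digest())
    return left + right

def per_block_keys(circuit_key: bytes, prev_ciphertexts: bytes):
    # Per the text above: key material for each block depends on an HMAC of all
    # previous blocks' ciphertexts (the "k1"/"k2" labels are illustrative).
    k1 = hmac.new(circuit_key, b"k1" + prev_ciphertexts, hashlib.sha256).digest()
    k2 = hmac.new(circuit_key, b"k2" + prev_ciphertexts, hashlib.sha256).digest()
    return k1, k2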
6.4. Option 4: Shall we have middle nodes be able to fast-stop bad data?
  In all the above options, if a cell is altered, the middle node can
  at best turn that cell and the rest of the cells on the circuit into
  garbage, which the last node won't deliver (if honest) or can't
  deliver (if dishonest).

  Might we prefer to do as in mixnets, and have nodes kill circuits
  upon receiving altered cells?

  It's not such an obvious improvement.  Including more MACs is more
  expensive in per-cell overhead.  The attacks that we would foil this
  way but not with Option 3 are not so much better than the passive or
  timing-based-active end-to-end attacks that would still remain.

  Consider that if option 3 is in place, an end-to-end attacker who
  wants to do a tagging attack at one node can garble the rest of the
  circuit and see if the output is garbled at the exit node.  But such
  an attacker could just as easily close the circuit at one of those
  nodes and watch for a corresponding close event, or even better --
  simply pause traffic on that circuit for a while and watch for a
  corresponding gap at the exit.  The only advantage of the garbling
  attack would be that garbled cells are presumably rarer than circuit
  closes or traffic pauses, and thus easier to use to distinguish
  target circuits.  But that's still questionable: the other attacks
  win fine, and the pause attack doesn't risk detection as much.
  So why might we want to do this?  First, the overhead doesn't need to
  be as bad as you might first expect (see below).  Second, it would be
  nice to increase the security margin as much as possible: "attacks
  only get better".

  So let's figure out how it would look.

  To do this one, we'd want to have outgoing and incoming circuits
  treated differently.  Incoming cells would get decrypted as in 1
  above, except that we'd have a MAC on them.  For outgoing cells,
  each node would check that the first N bytes of the cell
  match a MAC of all data seen so far, *including the rest of the
  cell*.  They'd then remove the first N bytes, re-pad the cell
  with bytes from a PRNG, and decrypt the resulting re-padded cell.
  (This is basically how mixmaster works, and how mixminion works in
  the common case.)

  The space overhead here is kind of large: N bytes per cell per node.
  In the most paranoid case, if we used 256-bit HMACs on 3-node paths,
  that's 96 bytes per cell, which is nearly 20% of the total length.

  But we can probably do better if we let the CREATE operation also
  tell the node some N to check.  For example, the first node doesn't
  need to check any bits.  The second and third nodes could check 64
  bits apiece; that has only 16 bytes of overhead total, and a high
  probability of catching any changes.  (Birthday attacks don't matter
  here, and an attacker who mounts this attack for long enough to
  accidentally find a 64-bit MAC will break so many circuits in the
  process as to become totally unreliable.)

  All of this leaks the path lengths and position on the path to
  various nodes.  We might open ourselves up to partitioning attacks if
  different clients choose different numbers of bits.  What's more, we
  might leak the length of the path to the last node by how much junk
  there is at the end of the cell.  So we'd need to be careful!
  Here's a simple construction for this format, to be concrete:

    The CREATE operation's KDF produces the following outputs:

       Kf, IVf   (stream cipher key and IV for forward direction)
       Kb, IVb   (stream cipher key and IV for reverse direction)
       Mf        (MAC key for forward direction)
       Mb        (MAC key for reverse direction)
       SEEDf     (PRNG key for forward direction)

    And it also sets the following user-selected parameters:

       MACBYTESf (an integer between 0 and 32 inclusive)
       MACBYTESb (an integer between 0 and 32 inclusive)
       CANEXIT   (boolean: can we exit from this hop?)

    Let Kf[i], Mf[i], etc denote the parameter Kf, Mf, etc as shared
    between the client and the i'th node in its circuit.

    Relay cells sent towards the client have the following plaintext
    format:

      Body:
        Content:
          Relay Command  [1 byte]
          StreamID       [2 bytes]
          Length         [2 bytes]
          Data           [Up to CELL_DATA_LEN-5-MACBYTESb bytes]
          Padding        [randomly generated as needed to fill the
                          cell]
        MAC(All previous encrypted content + encrypted content,
            Mb)[:MACBYTESb]            [MACBYTESb bytes]
    The originator of the client-bound cell encrypts the content with
    the next part of its Kb,IVb stream, then appends the MAC.
    Non-clients receiving a client-bound relay cell encrypt the entire
    cell body, MAC included, with the next part of the stream cipher
    that was keyed with Kb,IVb.

    When the client receives a relay cell body, it iteratively does:

      For node i in circuit from 1..N:

        Let cells_i = all previous cells which we previously decided
          were from node i, or relayed by node i,
        and let cellbody = the body of the cell, except for the last
          MACBYTESb[i] bytes,
        and let cellmac = the last MACBYTESb[i] bytes of this cell.

        If cellmac is nonempty, check whether cellmac = mac_expected,
        where mac_expected is the first MACBYTESb[i] bytes of
        MAC(cells_i | cellbody, Mb[i]).  If so, this cell is from node
        i.

        If this cell is from node i, add cellbody to cells_i, then
        decrypt cellbody using the stream keyed with Kb[i],IVb[i].
        Act on it as a relay cell.

        Otherwise add the entire cell to cells_i, and decrypt it, MAC
        included, with the stream keyed with Kb[i],IVb[i].

      If no node sent this cell: it's junk and somebody is probably
      messing with us!  Destroy the circuit.
    When the client *sends* a cell outbound to node N:

      Let cells[i] start at "" for all i in 1...N initially, and get
      updated as below.

      Let MACLEN = SUM(MACBYTESf[1...N])

      Let Body =
        Relay Command  [1 byte]
        StreamID       [2 bytes]
        Length         [2 bytes]
        Data           [Up to CELL_DATA_LEN-5-MACLEN bytes]
        Padding        [randomly generated,
                        CELL_DATA_LEN-5-MACLEN-len(Data) bytes]

      Let PAD[i] = the next MACBYTESf[i] bytes from the PRNG keyed
      with SEEDf[i], for i in 1...N

      Let STREAM[i] = the next CELL_DATA_LEN bytes of
        the stream keyed by Kf[i],IVf[i], for i in 1...N

      Let PADSEEN[1] = ""

      For i in 2...N:
        Let L = len(PADSEEN[i-1]) + len(PAD[i-1])
        Let PADSEEN[i] = (PADSEEN[i-1] | PAD[i-1]) xor
                         STREAM[i-1][CELL_DATA_LEN-L:]

      For i in N down to 1:
        Let EncBody = Body xor STREAM[i][:len(Body)]
        Let extra = "RECOGNIZED" if i == N, "OK" otherwise
        Let cells[i] = cells[i] | Body | PADSEEN[i]
        Let M = MAC(cells[i] | extra, Mf[i])
        Let Body = M[:MACBYTESf[i]] | EncBody
    To receive an outbound cell:

      Let M be the first MACBYTESf bytes of the cell, let REST be the
      rest of the cell, and let "cells" be all previous cells on this
      circuit.  If CANEXIT, and MACBYTESf > 0, and M =
      MAC(cells|REST|"RECOGNIZED", Mf)[:MACBYTESf], this cell is for
      us.  If M = MAC(cells|REST|"OK", Mf)[:MACBYTESf], this cell is
      not for us, but is valid.  Otherwise, destroy the circuit.

      Let PAD = the next MACBYTESf bytes of the PRNG keyed with
      SEEDf, and decrypt REST | PAD using the stream cipher keyed
      with Kf,IVf.  If this cell is for us, act on it as a relay
      cell.  Otherwise, relay it.
    ANOTHER VARIANT:

      If we restrict MACBYTESf values to the range 0..HL/2, where HL is
      the length of the MAC output, we can replace
        MAC(x | "RECOGNIZED")[:MACBYTESf] and MAC(x | "OK")[:MACBYTESf]
      with
        MAC(x)[:MACBYTESf] and MAC(x)[HL-MACBYTESf:]
    PICKING MACBYTESf, MACBYTESb:

      We don't need to worry about birthday attacks:

        Because we're using a MAC, only the parties who are making
        the MACs could try to do a brute-force search for a
        collision, but they have no reason to do so.

        If a collision occurs accidentally, an adversary can't
        substitute an earlier-seen cell for a later one with the
        same MAC, since the MAC covers not only the cell, but all
        previous cells on the circuit.

      So 16 bytes is about the most we should ever do, given our
      usual security parameters.  Let me moot the number 8 for
      MACBYTESb.

      For outbound cells, for any hop we can exit from, choosing
      MACBYTESf=6 gets us the current security level.  For the first
      hop, assuming we don't exit from it, choosing MACBYTESf=0 is
      totally safe, since the link crypto guarantees that nothing was
      corrupted on the way.

      In general, to prevent an end-to-end tagging attack, it seems
      sufficient to do something like setting MACBYTES=8 for the last
      hop, and MACBYTES=8 for one hop in the middle.
    OTHER VARIANTS:

      Can we combine this approach with one of the approaches in 2 or
      3 above to ensure that if corrupt data passes (because of our
      use of truncated HMACs) it still corrupts the stream?

      Can/should we use GCM or something here instead of separate
      encrypt/hmac operations?  It doesn't seem that GCM per se would
      apply without some tweaking, which we probably do not have the
      expertise to do.

    OVERHEAD NOTES:

      When computing additional overhead with this method, note that
      it lets us replace the old 4-byte "digest" field and the 2-byte
      "recognized" field.

      I note in passing that we need at most 9 bits for the length
      field, and at most 6 bits for the command field, yet we're using a
      total of 3 bytes for those 15 bits.  That's an opportunity to
      save another byte.
ACKS
 Lots of the good ideas and concerns here are due to Robert Ransom.
 Michael Stone helped some with "relay option 4" above.
--
"Those who would give up Essential Liberty to purchase a little Temporary Safety deserve neither Liberty nor Safety."
-- Benjamin Franklin
_______________________________________________
tor-dev mailing list
tor-dev@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev