
Re: POW / rPOW



On Thu, 23 Dec 2004 22:19:01 -0500, Nick Mathewson <nickm@xxxxxxxxxxxxx> wrote:
[snip]
> Anti-DoS has some promise, especially in the area of reducing the
> 'force multiplier' of a DoS attack.  (That's the ratio of network
> resources to the resources an attacker must spend to waste them.)
> Right now, the biggest force multiplier for a DoS attack in the
> Mixminion design is in the use of SSL for transferring data.  An
> attacker can force an SSL responder (the Mixminion server in this case)
> to perform a comparatively expensive RSA decrypt at the expense of
> sending a few un-decryptable bytes of junk, with no encryption
> required.  But this could possibly be solvable by rate-limiting failed
> SSL handshakes, rather than requiring POW for each one.

Right, this was specifically the area I was interested in... The force
multiplier effect has several different angles. You point out one;
there might be another (for example, a partial 'replay' attack where an
attacker computes a message that goes through many hops and then goes
nowhere, taking W work, then encrypts this internal bundle to every
entry point, placing a much higher load on the network than on the
attacker).
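
To make that multiplier concrete, here's a back-of-the-envelope sketch
in Python; the hop count, entry-point count, and unit costs are all
numbers I made up for illustration:

    # All numbers here are invented for illustration, not measurements.
    HOPS = 5          # hops in the dead-end internal bundle
    ENTRIES = 1000    # entry servers the attacker re-wraps the bundle to

    # Attacker: build the internal bundle once (~HOPS encryptions), then
    # do one extra wrapping per entry point.
    attacker_work = HOPS + ENTRIES

    # Network: every copy is decrypted and forwarded at every hop before
    # it 'goes no place'.
    network_work = ENTRIES * HOPS

    print("force multiplier ~ %.1f" % (network_work / attacker_work))
    # ~ 5000 / 1005, i.e. about 5x; it approaches HOPS as ENTRIES grows.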

> So in summary, if you have a _specific_ proposal of how stuff should
> work, sure, let's look at it.  But if you were asking, "have you
> considered proof-of-work systems?", the answer is "yes, but we haven't
> found a design we really believe in yet."

I don't, but I was wondering if they'd been considered: it's not worth
my time to reinvent ideas you've already proven useless. :)   I'd read
up on mix systems a number of years ago when Mixmaster came out, but
haven't really followed the art in the interim.
 
> thanks again for your interest.

I'm trying to get up to speed on these systems so I can make useful
suggestions...


So here is another idea for mix networks that I haven't seen before and
that might be useful:

It's pretty clear that padding doesn't work very well for individual
users, because it only provides a strong data-hiding benefit if it is
very consistent. It can only be expected to be consistent if its
average volume equals its peak volume, which for most single users
would be very bandwidth intensive.
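
A quick back-of-the-envelope (with invented traffic figures) shows why:

    # Traffic figures are invented for illustration.
    avg_rate_kbps  = 2     # a single user's long-term average traffic
    peak_rate_kbps = 256   # burst rate the padding must match to stay
                           # consistent

    # Constant-rate cover traffic has to run at the peak, so:
    print("padding costs %dx the useful traffic"
          % (peak_rate_kbps // avg_rate_kbps))
    # -> 128x for this user; a busy server aggregating many users sits
    #    much closer to its own peak, so the same trick is far cheaper.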

However, because a server in a mix network will likely be used by many
users, its traffic patterns to other servers may be less bursty...
this would reduce the cost of a padding system.  Has anyone discussed
the properties of a mix network which did the following:

--Data drizzling with remote queuing--
User submits a message to a mix server.  The mix server has continuous
sessions to its most frequently used neighbors. Each session runs at a
continuous rate which is agreed by each end based on long-term
transfer averages, and changed at defined times.  As messages come
into the mix they are encrypted with a large block cipher. They are
assigned a queuing time (based on good mix practices) and immediately
queued for transmission to their next hop, their position in the queue
determined by how 'near to now' their mix time is (er, maybe).

Queues which point to remote servers for which we have persistent
padded connections are continually drained; if we run out of message
data, we send random data. We do not wait until a message has reached
its dequeue time to begin its transmission. As soon as we've sent some
message data, we delete that data from our system.  The remote host
must store all the data we send it for a time (say) equal to our
normal maximum mix depth.

When the dequeue time runs out on a message, we send the remote server
the byte offsets of the beginning and end of the encrypted message in
the data we've been sending it (each end keeps a perpetually running
persistent byte counter that just wraps at 2^32 bytes, or whatever),
and the key.  Until that moment, the remote server (and anyone else
for that matter) is unable to tell whether the data we've been sending
is message data or padding.
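
Here's a rough sender-side sketch of what I mean, in Python; the class
and function names are placeholders, the XOR 'cipher' is a stand-in for
the real large block cipher, and the framing of the reveal message is
invented:

    import heapq, os

    LINK_RATE = 1024       # bytes per tick; agreed from long-term
                           # transfer averages
    COUNTER_WRAP = 2**32   # the perpetual byte counter wraps here

    def xor_cipher(key, data):
        # Stand-in for the 'large block cipher' pass; a real system
        # would use an actual wide-block or stream cipher, not XOR.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    class DrizzleLink:
        """Constant-rate padded session to one frequently-used
        neighbor."""

        def __init__(self):
            self.reveal_queue = []   # (dequeue_time, start, end, key)
            self.pending = b""       # encrypted bytes not yet sent
            self.sent = 0            # perpetual counter, mod COUNTER_WRAP

        def submit(self, message, dequeue_time):
            # Encrypt on arrival and start transmitting immediately;
            # only the (offsets, key) reveal waits for the mix delay.
            key = os.urandom(32)
            blob = xor_cipher(key, message)
            start = (self.sent + len(self.pending)) % COUNTER_WRAP
            end = (start + len(blob)) % COUNTER_WRAP
            self.pending += blob
            heapq.heappush(self.reveal_queue,
                           (dequeue_time, start, end, key))

        def tick(self, now, send):
            # Drain exactly LINK_RATE bytes per tick; pad with random
            # data when idle, so the stream rate never changes.
            chunk = self.pending[:LINK_RATE]
            self.pending = self.pending[LINK_RATE:]
            chunk += os.urandom(LINK_RATE - len(chunk))
            send(chunk)                     # then forget it locally
            self.sent = (self.sent + LINK_RATE) % COUNTER_WRAP
            # Release (start, end, key) for messages whose mix delay has
            # expired.  (Assumes mix delays are long enough that the
            # blob has fully drained before its reveal.)
            reveals = []
            while self.reveal_queue and self.reveal_queue[0][0] <= now:
                reveals.append(heapq.heappop(self.reveal_queue)[1:])
            return reveals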

This has a number of neat properties... most importantly, we can
hide the high latency of a slow transmission (where average bandwidth
= peak bandwidth) within the expected and acceptable mix delay.
Furthermore, messages very rapidly become split between two servers,
so that any message that was 'in transmission' is unreadable to an
attacker unless he compromises both hosts.  This makes the mix's pool
of not-yet-dequeued messages a less desirable target. Further,
interleaving could be added to enhance this property, or additional
cryptographic passes over the data as it's dequeued (i.e. so that as
soon as a single packet leaves a queue, everything that was in that
queue at that moment is unreadable to anyone except the remote end).
I'm not sure that would really be worthwhile; I think the local spool
is not a high-profile target in current mix systems anyway.
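
For completeness, the receiving end might look something like this
(again a sketch with invented names, reusing the helpers from the
sender sketch above):

    class DrizzleReceiver:
        # Remote end of a drizzled link.  It buffers everything for (at
        # least) the sender's maximum mix depth; until a reveal arrives
        # it cannot distinguish message bytes from padding.  The buffer
        # size is assumed to be a power of two so indexing stays
        # aligned across the 2^32 counter wrap.

        def __init__(self, depth_bytes):
            self.buf = bytearray(depth_bytes)
            self.received = 0    # mirrors the sender's byte counter

        def on_chunk(self, chunk):
            for b in chunk:
                self.buf[self.received % len(self.buf)] = b
                self.received = (self.received + 1) % COUNTER_WRAP

        def on_reveal(self, start, end, key):
            length = (end - start) % COUNTER_WRAP
            blob = bytes(self.buf[(start + i) % len(self.buf)]
                         for i in range(length))
            return xor_cipher(key, blob)   # XOR stand-in decrypts too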

This general idea could be extended in a number of ways, going as far
as creating a realtime anonymous subnetwork underlying Mixmaster
servers. Servers that may not have enough traffic to form direct
drizzled links (the threshold is that the burstiness of the traffic
must be low enough that the increased latency from peak = average
transmission is hidable in the normal mix delay) can instead form
drizzle rings, where data is distributed to all the servers in a ring.
(This works in low-traffic server groups where the sum of each
server's inputs and outputs is enough to drizzle, just not their
traffic to any single other server.)
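
The 'is it hidable' threshold could be tested with something like this;
the formalization is my own guess at the right condition:

    def hidable(burst_bytes, avg_rate_Bps, max_mix_delay_s):
        # True if a burst of this size, smoothed down to the long-term
        # average rate, still finishes within the normal mix delay --
        # i.e. the extra latency from peak = average transmission
        # disappears into the mix.
        return burst_bytes / avg_rate_Bps <= max_mix_delay_s

    # Two servers whose pairwise traffic is too thin and bursty for a
    # direct drizzled link:
    print(hidable(10_000_000, avg_rate_Bps=10_000,
                  max_mix_delay_s=600))    # False
    # A ring carrying the sum of several such servers' traffic drains
    # faster, so the same burst fits inside the mix delay:
    print(hidable(10_000_000, avg_rate_Bps=200_000,
                  max_mix_delay_s=600))    # True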

Has this area been explored already?