
Re: [tor-talk] Cryptographic social networking project



On Fri, Jan 09, 2015 at 12:13:11AM +0000, contact@xxxxxxxxxxxxx wrote:
> I think you got it all wrong; maybe it's because English is not my
> native language. Let me demonstrate it with technical details.

It may be the language, but I feel you are talking about completely
different things than the ones I pointed out. Feel free to throw all
your estimates at me, but it doesn't make me think you solved the
systemic problem of addressing an exponential challenge with a linear
solution. You summed up bandwidth as if the useless redundancy of
the content on the network were irrelevant. I believe it adds up, and
in every case I have seen where this was underestimated (round-robin
unicast distribution over HTTP or XMPP, for example) it subsequently
failed to scale.

> In an undirected graph the number of edges = (vertices)*(degree)/2, so
> in our network there are <835 million connections, but there aren't
> 835 million onion circuits between users. Each user only has two
> regular circuits for handling hidden services, which for 10 million
> users sums to 20 million. Thus Alice only has two 3-hop circuits and
> Bob has two 3-hop circuits; there is no overhead here compared to what Tor

That sounds completely different to the architecture you described
on the website. You say Alice no longer establishes circuits to each
friend in order to deliver notifications? Then how do the notifications
travel? They all go through a central cloud service? No, you mean
something else...

> users generally do for browsing websites securely with the Tor browser
> bundle, hence I'm not going to calculate the cost of maintaining these
> regular circuits. The sender circuit (SC) is used for sending
> Notifications to friends' hidden services; the receiver circuit (RC)
> is used as the hidden service itself to receive Notifications from all
> friends. In the RC, the third hop is called the rendezvous point (RP).
> Alice, in order to send a Notification to Bob, needs to find out what
> his RP is, plus some additional information to send him packets
> through the RP.

So Alice reopens circuits to each of her friends all of the time?
In order to deliver one notification she builds 167 circuits -
sequentially, and thus with tangible latency?
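
Back of the envelope, assuming a few seconds of setup per circuit or
introduction (my guess, not a measured figure):

  # Sequential delivery of one notification to every friend,
  # one fresh circuit per friend.
  FRIENDS = 167
  SECONDS_PER_CIRCUIT = 4.0   # assumed average setup time, not measured

  minutes_until_last_friend = FRIENDS * SECONDS_PER_CIRCUIT / 60.0
  print(minutes_until_last_friend)   # roughly 11 minutes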

> In hybrid hidden services there is no need for asymmetric key
> agreements to establish a secure channel between SC and RC. I also
> dismiss calculating the cost of symmetric cryptography on packets, as
> it's trivial using regular block ciphers, so I won't estimate the CPU
> work required by ORs to handle hidden services (check djb's benchmarks
> for aes_128 at cr.yp.to). All information needed to exchange
> Notifications securely at RPs is derived from CommonSecret and
> SharedSecret.

None of this explains how you avoid maintaining 167 circuits just
for Alice, instead of efficiently delivering one message from Alice
to 167 people using a suitable distribution tree network. It's like
I'm talking apples and you answer cucumbers.

> Bob selects his RPs from the directory's snapshot at time intervals of
> between 10 minutes and 12 hours after the beginning of each day at
> 00:00 UTC. The time interval is derived from
> V_1=H(CommonSecret||mm/dd/year||EpochCounter), where EpochCounter is a
> natural number going from 1 to n that resets to 1 again at 00:00 UTC
> the next day, and the row number for the RP in the directory's
> snapshot is derived from V_2=H(H(CommonSecret||mm/dd/year||EpochCounter)).
> To generate the time interval, Bob spins a wheel by V_1 that has 42600
> slots and encodes where it stops as a waiting time between 10 minutes
> and 12 hours. To generate the row number for each epoch's RP, he spins
> a wheel by V_2 that has n slots (n = number of available ORs in the
> directory's snapshot) and uses where it stops as the RP's row number.
> If the row number for the RP is, for instance, 3907, Bob connects to
> ORs #3907, #3908 and #3909 and keeps these RPs open to make sure that
> if Alice fails to send her Notification to #3907, she can try the
> other RPs.

Are you explaining how the hidden service DHT works? Why?
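
If I follow it, the per-epoch selection boils down to something like
this sketch; the hash, the secrets and the directory size are
placeholders of mine, not your code:

  import hashlib

  def H(data: bytes) -> bytes:
      return hashlib.sha256(data).digest()

  def rp_for_epoch(common_secret: bytes, date: str, epoch: int, num_relays: int):
      seed = common_secret + date.encode() + str(epoch).encode()
      v1 = H(seed)        # V_1 = H(CommonSecret||mm/dd/year||EpochCounter)
      v2 = H(v1)          # V_2 = H(H(CommonSecret||mm/dd/year||EpochCounter))
      # 42600 slots, i.e. one per second between 10 minutes and 12 hours
      wait_seconds = 600 + int.from_bytes(v1, "big") % 42600
      # n slots, one per OR row in the directory snapshot
      row = int.from_bytes(v2, "big") % num_relays
      return wait_seconds, row

  # Bob would then keep rows row, row+1, row+2 open as fallback RPs.
  wait, row = rp_for_epoch(b"common-secret", "01/09/2015", 1, 6000)

That tells me how Bob picks a meeting point per epoch; it doesn't tell
me how the delivery itself scales.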

> Bob starts opening RPs from 00:00 UTC, waits for the generated time
> interval, and then uses a higher epoch counter to determine what the
> next RP is and how long he should stay there, again given by a
> generated time interval. Hence the total number of epochs is different
> every day for each person.
> 
> When Alice knows what Bob's RP is, she doesn't send anything to it
> until she has a new Notification for him. She sends packets as
> {CircuitID|Payload} over HTTP from her SC without establishing a TLS
> channel with the RP. CircuitID = first 4 bytes of

You are using a circuit to talk to a rendezvous point without actually
establishing a circuit? Now you are venturing into details of Tor
protocol that I am not familiar with. If Tor lets you optimize some
things here that is cool, but that still doesn't make a distribution tree.

> H(CommonSecret||mm/dd/year||EpochCounter||GenerateCommonID), the
> payload is the ciphertext of {cookie|Notification} encrypted with
> RP_KEY, which is
> H(CommonSecret||SharedSecret||mm/dd/year||EpochCounter||GenerateKey),
> and cookie is (cookie1)⊕(cookie2). When Bob opens an RP, he tells the
> RP all the different cookies for all his 167 friends (for each friend
> there is a different cookie1 and cookie2 value in each epoch);
> cookie1 = first 4 bytes of

The rendezvous point maintains state for Bob? Is that an extension
you are proposing to the Tor protocol? So the RP is in charge of
actually distributing the message? That may not be sufficient, but
it is certainly better than doing the round robin from the sender's node.

> H(CommonSecret||SharedSecret||mm/dd/year||EpochCounter||GenerateCookie1)
> and cookie2 = first 4 bytes of
> H(CommonSecret||SharedSecret||mm/dd/year||EpochCounter||GenerateCookie2).
> When Alice gives {cookie|Notification} to the RP, if
> (cookie)=(cookie1)⊕(cookie2), the RP sends the packet to Bob, and then
> the RP OR in its RAM replaces (cookie2) with H(cookie2). When Alice
> wants to send another Notification to Bob using the same RP again, for
> (cookie) she has to send (cookie1)⊕(H(cookie2)). The next Notification
> needs (cookie1)⊕(H(H(cookie2))) as the cookie, and so on.

This sounds like a ratchet mechanism. Nothing to do with scalability,
or am I misunderstanding you?
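
For my own understanding, the cookie handling seems to reduce to a
small hash-chain check at the RP; a sketch under my own naming, with a
truncated SHA-256 standing in for your H:

  import hashlib

  def H(data: bytes) -> bytes:
      return hashlib.sha256(data).digest()

  def xor4(a: bytes, b: bytes) -> bytes:
      return bytes(x ^ y for x, y in zip(a, b))

  # Bob registers (cookie1, cookie2) for one friend at the RP for this epoch.
  cookie1 = H(b"CommonSecret|SharedSecret|01/09/2015|1|GenerateCookie1")[:4]
  cookie2 = H(b"CommonSecret|SharedSecret|01/09/2015|1|GenerateCookie2")[:4]
  rp_state = {"cookie1": cookie1, "cookie2": cookie2}

  def rp_accepts(cookie: bytes) -> bool:
      # Forward iff cookie == cookie1 XOR cookie2, then ratchet
      # cookie2 -> H(cookie2) so the same cookie cannot be replayed.
      if cookie == xor4(rp_state["cookie1"], rp_state["cookie2"]):
          rp_state["cookie2"] = H(rp_state["cookie2"])[:4]
          return True
      return False

  # Alice ratchets the same way, so her n-th cookie is
  # cookie1 XOR H^(n-1)(cookie2).
  assert rp_accepts(xor4(cookie1, cookie2))           # first notification
  assert rp_accepts(xor4(cookie1, H(cookie2)[:4]))    # second notification

Which is fine as an anti-replay measure, but again orthogonal to the
scaling question.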

> Let's say each packet is approximately 60 bytes and Alice sends 50
> Notifications to all her friends each day. Thus Alice sends 50*60*167
> bytes to all her friends, and sending them via her 3-hop SC to each
> friend's 3-hop RC increases the total amount 6x. Therefore Alice
> sends 3 MB through ORs every day in order to deliver Notifications for
> different purposes to all her friends. If 10 million users send the
> same amount of data to their friends, it costs 30 TB of data exchange
> for the onion network.

Bandwidth isn't the answer. Not even Google, Facebook or Twitter solve
the distribution problem by bandwidth, even if they are the ones that
could afford to cheat this way the most. Yet, they have distribution networks.
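
For what it's worth, your raw numbers do reproduce; my issue is with
the shape of the traffic, not the total:

  PACKET_BYTES  = 60
  NOTES_PER_DAY = 50
  FRIENDS       = 167
  RELAY_HOPS    = 6            # 3-hop sender circuit + 3-hop receiver circuit
  USERS         = 10_000_000

  per_user = PACKET_BYTES * NOTES_PER_DAY * FRIENDS * RELAY_HOPS
  network  = per_user * USERS
  print(per_user / 1e6, "MB per user per day")      # ~3 MB
  print(network / 1e12, "TB per day network-wide")  # ~30 TB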

If Alice sends 167 individual notifications, she is not distributing.
Distribution should happen as close as possible to the recipient.
If a guard node receives a notification and distributes it to 24 final
recipients that have subscribed for it, that is an efficient distribution
plan. It implies relay nodes maintaining state about distribution trees
and a different plan for achieving anonymity.
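
Here's a toy comparison of the two shapes, with numbers taken from this
thread and an assumed fan-out of 24 recipients behind each distributing
relay (that number is from my example above, nothing more):

  import math

  FRIENDS = 167
  HOPS_PER_UNICAST = 6        # 3-hop sender circuit + 3-hop receiver circuit
  RECIPIENTS_PER_RELAY = 24   # assumed subscribers behind one distributing relay

  # Sender-side fan-out: every friend gets their own end-to-end copy.
  unicast_copies = FRIENDS * HOPS_PER_UNICAST          # 1002 relay transmissions

  # Relay-side distribution: one copy per distributing relay, then local fan-out.
  relays = math.ceil(FRIENDS / RECIPIENTS_PER_RELAY)   # 7
  tree_copies = relays * HOPS_PER_UNICAST + FRIENDS    # 42 + 167 = 209

  print(unicast_copies, "vs", tree_copies)

The absolute numbers are made up; the point is that the fan-out cost
moves off the sender and off the long paths.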

> PseudonymousServer: a public container for hosting blocks has 100%
> efficiency. If each user sends/receives 10 MB of data every day
> (reading/posting) to/from PseudonymousServer, the total amount of
> traffic for 10 million users would be 100 TB each day, which based on
> our threat model has to be routed through the onion network, but this
> is linear traffic, not an exponential effect. For instance, if on
> Twitter.com each user downloads/uploads approximately 10 MB of data
> from/to Twitter.com servers every day, then for 10 million users the
> exact same amount of traffic (100*3 TB) would have to be routed
> through the onion network if they used the Tor browser bundle to
> access Twitter.com.

To this part you can apply cloud technology, so that should work.

> >Our plan is completely different from what you write here. Pubsub
> >distribution channels operate over the backbone, not the individual
> >friend systems. It is the backbone ensuring that everyone gets a copy
> >of the message she is supposed to get and the subscribers may not know
> >of each other - who they are, how many they are. I don't know why you
> >assume you can judge what we have been working on in the last decade,
> >then talk about things that have nothing to do with us.
> 
> Now I did a search on your website and I'm not exactly sure what it
> is. What I found seems to be an experimental mesh network. You criticized

The only place our website mentions mesh networking is on the censorship
page. There it says
  "GNUnet provides an <a href="https://gnunet.org/wlanworks">ad-hoc mesh networking</a> transport. Secure Share plans to use it in parallel with traditional Internet."

> Tor because when a global adversary monitors both entry+exit nodes in a
> circuit, metadata is compromised. In a mesh network (if friends are

Monitoring is not enough; the attacker needs to be able to shape the
traffic - possibly quite aggressively. I don't know to how many people 
such an attack would scale. We need another Snowden to tell us how things
are moving forward with the 5 eyes technologies. We may hope for the best 
but we should develop for the worst.

> using each other as mesh routers), even a local adversary monitoring
> any part of the network can compromise metadata for that part. Breaking

Both your assumption (friends = routers) and your deduction are wrong.
You may want to watch some gnunet videos on http://youbroketheinternet.org

> onion routing needs 2 points of failure, but breaking a mesh network
> only needs 1 point of failure. If you employ high delays, padding,
> etc. for more

GNUnet's onion routing is in the planning stage. We're doing this because
Tor isn't suited for it and last time I asked I had the impression
keeping pubsub state on each relay node isn't a welcome idea on Tor.
I also like the way GNUnet can replace the existing Internet IP routing.

> security, then why not apply the same defense on a parallel onion
> network managed by a comprehensive organization like Tor Inc?
> 
> In mesh networks, when a node routes someone else's traffic to its
> destination, it makes traffic analysis harder for an observer, as they
> can't detect whether it's from the node itself or from someone else.
> But the exact same property applies to onion routing networks too: if
> a user runs an onion router, it becomes harder for an observer to tell
> whether intercepted traffic belongs to the user itself or to someone
> else behind it. Onion routing is already implemented, widely adopted,
> heavily supported, and foils various types of traffic analysis attacks
> that mesh networks can't.

Yes, but it doesn't provide tree distribution.

> By the way, it's cool to replace ISPs with mesh networks to reduce the
> radius of connection between identities and make dragnet SIGINT more
> difficult. For instance, when I send a TCP packet from my home IP
> address in Iran to a Tor entry guard located in Iceland, GCHQ really
> does collect metadata for my connection by intercepting Iran's optic
> fibers at the Oman sea, and probably deanonymizes my Tor circuit if
> they also control my selected Tor exit node in Japan. But Internet
> backbones are beyond application developers' scope; it's up to
> societies.

Complete change of topic? Yes, mesh networks may have a hard time
finding adoption as long as governments do not endorse them.
Stuff that has no business model usually has a hard time in capitalism.
Should too many phones start doing mesh networking, the telecoms will
start selling phones with mesh networking disabled. Maybe they can even
send a suitable OS upgrade to deployed phones. The freedom to mesh may
have to be defended by legislation.

> >You just described another one of the good reasons why Tor isn't the
> >appropriate tool for the job we want to get done. Low latency is a
> >client/server-paradigm requirement that unnecessarily reduces the
> >anonymity for the use case of a distributed social network.
> 
> Our assumption is that anonymity works, and when users retrieve
> something from PseudonymousServer via Tor, the server can't recognize
> whom the requests coming out of the exit node belong to. For instance,
> if Alice retrieves block1 and then retrieves block2 from the same exit
> node, we assume the server can't recognize that these retrievals are
> from the same person, as many others are using the same exit node to
> retrieve blocks and the majority of exit nodes are not colluding with
> the attacker at the same time. This threat model isn't perfect

That part of the model doesn't seem to be a problem to me, although
I don't like centralization very much.

> nor broken. If we decide not to do that, there is no alternative
> solution. High-latency networks might make deanonymization harder, but
> if they are practical enough, I'm sure the Tor network can easily add
> delays by writing a few lines of code for those who want it, and if
> they do that in the future we can easily adopt it. The only other
> solution that makes

Yes, I was pointed to "alpha mixing" - a brilliant paper that has been
lying around unimplemented since 2006. Also noticed some recent
conversations on the topic that my grep for traffic shaping failed to find.
Tor should focus on financing these developments ASAP.

> deanonymizing the connection between Alice and Bob really hard is a
> PIR protocol based on homomorphic encryption: ask Alice to put
> something in a database, and Bob later queries the database to pick up
> her packet without telling the server what his query is or what the
> server should give him in response! But the problem with such a PIR
> protocol is that for 10 million users, the service provider has to pay
> billions of dollars to cloud hosting every month for computing
> astronomical cryptographic functions. Another PIR protocol, which
> doesn't need to cryptographically massage all records in the database
> to produce an output, is to ask Alice to put something in the database
> and have Bob later download all records from the database, locally
> choose which record is for him, and delete the rest of the unwanted
> output. But the problem with such a PIR protocol is that the database
> becomes larger and larger every day, so users have to download more
> and more data from it on the following days, which would eventually
> paralyze the Internet.

Beyond my area of competence here. I love when cryptographers find
brilliant solutions I couldn't have come up with, but I haven't seen
one for mass distribution of data yet. Maybe we should do social
networking protocols over FM antenna or cable TV.
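
That second PIR variant is at least easy to put rough numbers on,
assuming every user adds a small record per day and everyone downloads
the whole database daily (numbers are mine, purely illustrative):

  N = 10_000_000          # users
  RECORD_BYTES = 60       # assumed bytes added per user per day

  for day in (1, 30, 365):
      db_size = N * RECORD_BYTES * day      # database size by that day
      transfer = db_size * N                # everyone downloads everything
      print(day, db_size / 1e9, "GB in DB;", transfer / 1e18, "EB moved that day")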


-- 
	    http://youbroketheinternet.org
 ircs://psyced.org/youbroketheinternet