>> Let's suppose my app has to serve about 1K TCP connections. Since
>> the app has, let's say, 10 different feeds to broadcast to different
>> groups of connections (where any group may contain any connection
>> id, even overlapping connection ids across groups; each group may
>> serve a unique feed).
>Your question confused me at first, since my first reaction was to
>think you meant 'broadcast' in the sense of IP unicast/multicast/
>broadcast, and say, "But TCP doesn't support broadcast addresses!"
>Instead, I _think_ you mean "send the same content on a large number
>of TCP connections".
Yes, you are right, that is what I meant. In more detail: send the same content (say, a buffer/memory region of data) to a finite number of "registered" fds (what I call connection ids).
Sorry my English sucks badly...
>> 1) Does libevent 2.0 has the possibility to broadcast (in TCP terms)
>> a chunk of data to all groups of connection ids with only
>> _one_single_ libevent system call (let's say a zero-copy like) ?
>I'm not quite sure what mean by a "libevent system call". I'm going
>to assume that what you want to do is minimize the number of data
>copies and the number of system calls made to the kernel by libevent.
>Calls _to_ libevent are not system calls, since libevent isn't part
>of the kernel.
Again, my bad ;_( sorry...
I meant a single/special libevent function call (from the libevent API) that would minimize data copies caused by issuing N write/send system calls to the finite number of "registered" fds (as explained above).
But I've realized that libevent should not be responsible for this kind of functionality; the OS network stack should provide it (and I don't know of any Linux feature that covers these requirements; it would be a great idea to have such functionality in Linux/Unix land).
>I'll assume you're using bufferevents, since you're talking about TCP
>abstractions in Libevent 2.0.
>The short answer is "Not in the way you're asking for." There will be
>at least one system call from Libevent to the kernel per fd that
>you're writing to. I don't know of a kernel interface that we could
>wrap in order to get fewer than one call per fd myself. (If anybody
>knows of some fancy "write this data to the following array of fds"
>syscall out there, please let me know.)
Well, since I'm a newbie at libevent and event-loop I/O programming in general, I'm currently reading the libevent reference book (what a book! Congrats!). Since I didn't see any feature like the one I was interested in, I thought that maybe bufferevents might offer something lower-level that could help me at least minimize data copies between my app's buffers and the kernel network stack.
>That said, here are some ways you can minimize data copying:
>If your content is in a file, you can use the evbuffer_add_file()
>function to use your system's mmap/sendfile/splice capabilities as
>warranted. In this case, no data should need to be copied from
>user memory to kernel memory, and it's up to the kernel to decide how
>much copying it wants to do in its network stack.
Yes, indeed, the splice syscall would be great if my content lived mainly in files. But the feeds I want to "broadcast" to subscribers are dynamic rather than static, which is why I have to work with in-memory buffers all the time.
>This interface isn't optimized for sending many copies or extents from
>a single file, but it shouldn't be too bad for that; it would be nice
>to have one that was.
Yeah! But looking around, I've found a number of possibilities: PGM (http://developer.novell.com/wiki/index.php/OpenPGM), SCTP (http://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol),
etc. How does libevent handle events for these kinds of (reliable multicast) protocols? Is it compatible with their behavior?
>If your content is bytes in RAM that you don't want to copy, you'd use
>the evbuffer_add_reference() interface to make sure that Libevent
>doesn't make any internal copies of your data before passing it to the
>kernel. You need to provide a cleanup function when using this
>interface, so that Libevent can tell your application once it is no
>longer using the memory.
>> If not, is there any abstract API in libevent core that I could
>> implement my broadcast behavior?
>You'd probably want to have a look at how bufferevents and evbuffers
>look now, and see what you want to do with them.
Thanks for the tips! I'll read the bufferevent and evbuffer APIs in more detail.
>> 2) In positive case, would I have to use a different event_base loop
>> for each group of connection ids (each group is a feed of multimedia
>> data) or libevent would allow me to create different groups inside a
>> same event_base loop?
>I see no reason you'd need multiple event bases for this.
Yes, after reading your book in more detail, maybe I could just take the union of all the fds whose events I'm interested in and listen to them in a single event base.
The other issue is how to load-balance the read/write work in a scalable manner...
>Like I said above, I don't know of any system call provided by any
>kernel that supports writing to large numbers of fds at once in a way
>that would be useful to you---but I'm no expert on kernel esoterica,
>and there might well be one that I haven't heard of. If some OS
>supports this, I agree that it would be swell to have Libevent able
>to take advantage.
Thank you very much for the time and effort you put into answering my questions.
Regards,
Raine