Re: [Libevent-users] Cross-event base interactions
Well, the earlier message got sent from the wrong account and bounced. However, that gives me an opportunity to update this further. Having spent a few more hours exploring it, I have found that the culprit may well have been some "performance enhancing" suggestions given to me by a colleague. Rebooting my systems (thus resetting those changes) results in slower overall performance, but I now see no impact at all from starting threads. Re-installing those changes restores the prior behavior.
Not sure I understand the connection, but at least it is reproducible...and nothing to do with libevent.
> I did some further debug on this, and I've convinced myself this isn't a libevent issue. By starting various threads, I've found that we get an added latency impact of about 80 nanoseconds/thread, even for threads that are confirmed blocked in select or poll. Enabling libevent thread support doesn't seem to have any impact at all - it's the existence of a thread that causes the issue.
>
> Best I can figure is that there is some impact via the Linux scheduler due to having the additional threads, but I'm not entirely sure how to confirm and/or measure that. Any thoughts would be appreciated, though I know it strays from libevent itself.
>
> Ralph
>
>
> On Nov 21, 2012, at 7:49 AM, Ralph Castain <ralph@xxxxxxxxxxx> wrote:
>
>>
>> On Nov 21, 2012, at 7:26 AM, Nick Mathewson <nickm@xxxxxxxxxxxxx> wrote:
>>
>>> On Tue, Nov 20, 2012 at 5:55 PM, Ralph Castain <rhc@xxxxxxxxxxxx> wrote:
>>>> Hi folks
>>>>
>>>> We have a case where we are running two parallel threads, each looping on its own event base (with libevent thread support enabled). Even though the two bases run in separate threads, we see an impact on response time - i.e., if we run only one thread, events in its base are serviced faster than when the other thread is in operation, even if no events in that other thread are active.
>>>>
>>>> Just to be clear, the two cases are:
>>>>
>>>> Case 1
>>>> A single event base is created, and a single event loop is running. Two file descriptors are being monitored by separate events - when one file descriptor has data, that event is "activated" to handle the data. Data is only arriving at one file descriptor, so the other one is "quiet".
>>>>
>>>> Case 2
>>>> Two event bases are created, each being looped by an independent thread. Each base is monitoring a different file descriptor. Only one file descriptor (for the same base as above) is receiving data - the other base/thread is blocked in select. We see a measurable increase in the time it takes for the "active" event to be serviced when compared to Case 1 - the difference is roughly 20%.
>>>>
>>>> Is this cross-interaction expected? Any suggestions on how we might better separate the two channels?
>>>
>>> Mysterious!
>>>
>>> It's sure not expected that there would be that much of a drop-off.
>>
>> To be fair, I should have clarified the scale we are talking about. That 20% corresponds to 80 nanoseconds, so it isn't a huge amount of time in absolute terms. Still, for MPI purposes, that is of concern.
>>
>>> I'd try to look harder to diagnose what's going on. Stuff to look at
>>> would include:
>>>
>>> * What OS is this? You mentioned "select", which makes me think
>>> Windows, but you didn't actually say.
>>
>> It's CentOS 6.1
>>
>>> * Can you trace system calls in the thread that is supposed to be
>>> idle? Have you verified that it's really sleeping on select, or is
>>> that an inference? (It's a reasonable inference, mind you, but when
>>> debugging, we shouldn't trust any inferences without checking.)
>>
>> It's a good point - one of my compatriots raised it last night as well. I fear it was an inference and I'm beginning to believe it is wrong. I'll check to ensure we aren't polling.
>>
>>> * Can you profile the code, and find out which piece exactly is
>>> taking longer here?
>>
>> I'll do so once I verify we aren't actually polling.
>>>
>>> --
>>> Nick
>>> ***********************************************************************
>>> To unsubscribe, send an e-mail to majordomo@xxxxxxxxxxxxx with
>>> unsubscribe libevent-users in the body.
>>
>