
Re: [Libevent-users] Cross-linked socket and openssl bufferevent throughput issue



On Fri, Mar 01, 2019 at 06:20:28PM +0300, Azat Khuzhin wrote:
> Hi!
> 
> > This appears to work; however, in testing the throughput of the overall system is very low
> 
> And "very low" is?
> What is the maximum throughput of the bufferevent in your case?

The exact throughput is somewhat difficult to measure, but it's around 300
to 500 kbps through the system.  There are other factors here, but the
overall architecture is easily capable of reaching many Mbps in testing.

> > When examining the running process with strace I noticed that each event
> > loop iteration is limited to a 4096-byte read in spite of running with a
> > much larger setting in the bufferevent (I used the bufferevent_set_max*
> > functions to set maximums for read and write).  The 4096 limit appears to be
> > hard-coded in buffer.c, is that correct?
> 
> Indeed, I'm aware of it for around half a year.
> 
> The reason that it is not fixed yet is that:
> - personally I wanted to see how the memory fragmentation will work
> after increasing this buffer and provide some numbers in the patch

Good point.
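
(For reference, the per-bufferevent caps I mentioned were set with the
bufferevent_set_max_single_read()/bufferevent_set_max_single_write()
setters from event2/bufferevent.h, if I'm naming them right; roughly
like the sketch below.  The 256 KB figure is just an illustrative
value, not the exact one we use.)

    #include <event2/bufferevent.h>

    /* Sketch only: raise the per-callback read/write caps on a bufferevent.
     * The value here is illustrative.  As discussed above, the read path
     * still ends up clamped to the 4096 bytes hard-coded in buffer.c. */
    static void raise_bev_caps(struct bufferevent *bev)
    {
            bufferevent_set_max_single_read(bev, 256 * 1024);
            bufferevent_set_max_single_write(bev, 256 * 1024);
    }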

> - I was waiting for a patch from the person who initially came across this
> 
> Also keep in mind that the problem in your case could be in openssl, due to:
> - CPU usage

CPU seems fine (a 32-core server is being used for testing and it's nowhere
near 100% usage on any core).

> - and the fact that the openssl code sometimes reads smaller chunks than it
> could (I have some examples, but they are only in my mailbox)
> 
> Plus I just did some testing, and here is what I got with a simple echo server:
> - plain bufferevent: 1GB/s
> - openssl bufferevent: ~400MB/s
> 
> (on intel i7 8550u)

Those are interesting numbers, and far higher than the throughput we're
seeing, but that may be because we're using two cross-linked bufferevents,
so both the openssl limits and the socket-based bufferevent limits are in
play.  I wonder if there are some odd dynamics going on there (the network
is capable of, and has been tested at, Gbps rates).

I used the code in sample/le-proxy.c as an example of how to do this
(although the actual code is slightly different, it implements the same
approach).
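
(In case it helps to see it concretely: the cross-linking is essentially
the pattern from the read callback in sample/le-proxy.c, where each
bufferevent's input is moved onto its partner's output.  A stripped-down
sketch of that pattern, with the watermark and flush handling from the
real sample omitted:)

    #include <event2/buffer.h>
    #include <event2/bufferevent.h>

    /* Read callback used on both bufferevents; ctx is the partner
     * bufferevent.  Data is moved (without copying) from this side's
     * input buffer onto the partner's output buffer. */
    static void cross_readcb(struct bufferevent *bev, void *ctx)
    {
            struct bufferevent *partner = ctx;
            struct evbuffer *src = bufferevent_get_input(bev);

            if (!partner) {
                    /* Peer already gone: just discard the data. */
                    evbuffer_drain(src, evbuffer_get_length(src));
                    return;
            }
            evbuffer_add_buffer(bufferevent_get_output(partner), src);
    }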

> 
> Also note that this 4k limit *may not affect* an openssl bufferevent, since:
> - an openssl bufferevent created with an underlying bufferevent is
> still affected (because it uses the underlying bufferevent's buffer via
> BIO wrappers)
> - you need to set a high watermark on the bufferevent to overcome the 4k
> limit there (doh)
> - for more details see bytes_to_read() in bufferevent_openssl.c

OK, yeah, I think I see what you mean.  bufferevent_openssl_socket_new is
being used, so I think that means there's no underlying bufferevent.
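
(Concretely, the setup is roughly the sketch below: the openssl
bufferevent wraps the socket fd directly, with no filter bufferevent
underneath.  If I've understood your watermark point correctly, adding
the bufferevent_setwatermark() call would let bytes_to_read() ask for
more than 4k per read; the 1 MB high watermark is just an example value.)

    #include <openssl/ssl.h>
    #include <event2/event.h>
    #include <event2/bufferevent.h>
    #include <event2/bufferevent_ssl.h>

    /* Sketch: openssl bufferevent created directly on the socket fd,
     * i.e. no underlying bufferevent, plus the suggested read high
     * watermark (the 1 MB value is illustrative). */
    static struct bufferevent *
    make_ssl_bev(struct event_base *base, evutil_socket_t fd, SSL *ssl)
    {
            struct bufferevent *bev = bufferevent_openssl_socket_new(
                base, fd, ssl, BUFFEREVENT_SSL_ACCEPTING,
                BEV_OPT_CLOSE_ON_FREE);
            if (!bev)
                    return NULL;

            bufferevent_setwatermark(bev, EV_READ, 0, 1024 * 1024);
            return bev;
    }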

> > If so, is there any way to increase
> > the limit, as it appears that with this limit the system is unable to
> > keep up with the amount of traffic (the strace output shows increasing
> > amounts of data available for reading).
> 
> Yep, it is hard-coded in buffer.c.
> 
> > If not are there any other settings
> > I can alter in libevent to tune it for this use-case?
> 
> You can add an API for evbuffer to change the default size and create
> a pull request with your changes here [2]. Are you interested in this?
> Or should I?

I'm interested; however, I suspect you're best placed to make the patch.  I'm
certainly happy to help where possible, though.
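
Just to sketch the kind of API I was imagining (the name and signature
below are entirely made up, nothing like this exists in libevent today,
and whatever you come up with will almost certainly look different):

    #include <event2/buffer.h>

    /* Hypothetical addition to event2/buffer.h: let callers change the
     * per-read cap that is currently hard-coded as 4096 in buffer.c.
     * Name and signature are invented for illustration only. */
    int evbuffer_set_max_read(struct evbuffer *buf, size_t max);

    /* The proxy could then do something like:
     *     evbuffer_set_max_read(bufferevent_get_input(bev), 256 * 1024);
     */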

> 
>   [2]: https://github.com/libevent/libevent

Although, as I said, you're probably best placed to patch this one, are
there any guides to contributing to the project (e.g. testing and coding
requirements) that I can look at?