Re: [Libevent-users] Strange timeout scheduling in 1.4
Interesting.
We have seen problems in the past with gettimeofday() not being very
deterministic on Linux, with occasional spikes in how long a call takes;
I don't have the numbers to hand, but 50-100 microseconds is the figure
I remember offhand.
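For what it's worth, a rough way to look for those spikes is to track the
worst-case gap between consecutive gettimeofday() calls rather than the
total elapsed time. A minimal sketch (the 10M iteration count is arbitrary):

/* spike.c -- rough sketch: worst-case gap between consecutive
 * gettimeofday() calls, to catch occasional latency spikes */
#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval prev, now;
    long delta, max_us = 0;
    unsigned i;

    gettimeofday(&prev, 0);

    for (i = 0; i < 10000000U; i++) {
        gettimeofday(&now, 0);

        /* gap between this call and the previous one, in microseconds */
        delta = (now.tv_sec - prev.tv_sec) * 1000000L
              + (now.tv_usec - prev.tv_usec);
        if (delta > max_us)
            max_us = delta;

        prev = now;
    }

    printf("worst-case gap: %ld us\n", max_us);

    return 0;
}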
If this is to be used as a basis for what libevent should do, I think it
is worth bearing in mind that, as well as support varying across
architectures, people may be using older hardware (without, for example,
an invariant TSC) or older kernels (RHEL 5.3, for example, ships 2.6.18).
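On the invariant TSC point: if libevent ever wanted to choose a clock at
runtime, one crude, Linux-only check is to look for the constant_tsc and
nonstop_tsc flags in /proc/cpuinfo. A sketch -- the flag names are
kernel-version dependent, so treat this as an assumption rather than a
portable test:

/* tscflags.c -- rough Linux-only check for an invariant TSC by
 * scanning the flags line of /proc/cpuinfo */
#include <stdio.h>
#include <string.h>

int main(void) {
    char line[4096];
    FILE *fp = fopen("/proc/cpuinfo", "r");

    if (!fp)
        return 1;

    while (fgets(line, sizeof line, fp)) {
        if (!strncmp(line, "flags", 5)) {
            printf("constant_tsc: %s\n",
                   strstr(line, "constant_tsc") ? "yes" : "no");
            printf("nonstop_tsc:  %s\n",
                   strstr(line, "nonstop_tsc") ? "yes" : "no");
            break;
        }
    }

    fclose(fp);

    return 0;
}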
I'm not surprised that OpenBSD is much slower for this, though.
On Wed, Jul 27, 2011 at 09:49:08PM -0700, William Ahern wrote:
> On Wed, Jul 27, 2011 at 11:00:45PM -0400, Nick Mathewson wrote:
> > On Wed, Jul 27, 2011 at 10:35 PM, William Ahern
> > <william@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > If you happen to know, is it the same story with clock_gettime()
> > performance? I ask because Libevent uses that function in preference
> > to gettimeofday() when it's available.
>
> I quickly ran some tests. Results:
>
> OpenBSD AMD64 4.8: No difference between gettimeofday and clock_gettime.
> Both just end up calling bintime() in kern_tc.c. Crazy slow.
>
> Linux x86_64 2.6.35: No difference. In the 3.0 tree gettimeofday() and
> CLOCK_REALTIME both just call do_realtime() in
> arch/x86/vdso/vclock_gettime.c. CLOCK_MONOTONIC calls do_monotonic() which
> is similar code.
>
> Linux x86_64 2.6.30: No difference.
>
> Linux i686 2.6.30: gettimeofday() was slightly faster than clock_gettime(),
> but I had to crank the iterations up to 20M to really see it. No difference
> between CLOCK_REALTIME and CLOCK_MONOTONIC. All of this code seems to reside
> in kernel/time/ and kernel/posix-timers.c in the 3.0 tree. clock_gettime()
> calls a bunch of function pointers but it _seems_ that it probably ends up
> calling getnstimeofday() like gettimeofday(), so the cost difference might
> just be because of the indirection of the POSIX timers implementation.
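>
> One way to confirm whether the cheap path really is the vDSO (rather than
> just a fast syscall) is to force the real syscall with syscall(2) and
> compare: if the libc path goes through the vDSO, the forced syscall should
> come out noticeably slower, and if the numbers match, libc was trapping
> into the kernel anyway. A Linux-specific sketch (older glibc needs -lrt):
>
> /* vdso.c -- compare the libc clock_gettime() path against the forced
>  * syscall; Linux-specific */
> #include <time.h>
> #include <unistd.h>
> #include <sys/syscall.h>
>
> int main(int argc, char *argv[]) {
>     struct timespec ts;
>     unsigned i = 1U<<24;
>
>     if (argc > 1) {
>         /* bypass the vDSO: trap into the kernel on every call */
>         while (i--)
>             syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &ts);
>     } else {
>         /* normal libc path: resolved through the vDSO when available */
>         while (i--)
>             clock_gettime(CLOCK_MONOTONIC, &ts);
>     }
>
>     return 0;
> }
>
> Usage: time ./vdso (libc path) vs. time ./vdso raw (forced syscall).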
>
> OS X 10.8.0: gettimeofday() is as fast as on x86_64 Linux, with all the
> time spent in userspace, so they must be doing something similar: reading
> a mapped, cached value and offsetting it with the CPU TSC.
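>
> If you want that userspace path on OS X explicitly, mach_absolute_time()
> is the usual interface; a sketch, using mach_timebase_info() to convert
> ticks to nanoseconds:
>
> /* machtime.c -- OS X-only sketch timing mach_absolute_time(), which
>  * runs entirely in userspace */
> #include <stdio.h>
> #include <stdint.h>
> #include <mach/mach_time.h>
>
> int main(void) {
>     mach_timebase_info_data_t tb;
>     uint64_t t0, t1;
>     unsigned i = 1U<<24;
>
>     /* numer/denom give the tick-to-nanosecond conversion ratio */
>     mach_timebase_info(&tb);
>
>     t0 = mach_absolute_time();
>     while (i--)
>         (void)mach_absolute_time();
>     t1 = mach_absolute_time();
>
>     printf("%.1f ns/call\n",
>            (double)(t1 - t0) * tb.numer / tb.denom / (1U<<24));
>
>     return 0;
> }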
>
>
> Here's my code. Linux requires -lrt to find clock_gettime(). Usage:
>
> time ./gtod -i 10M [gtod|rt|mt]
>
> /* gtod.c */
> #include <stdlib.h>
> #include <string.h>
> #include <time.h>
> #include <sys/time.h>
> #include <unistd.h>
> #include <err.h>
>
> int main(int argc, char *argv[]) {
>     extern char *optarg;
>     extern int optind;
>     int opt;
>     struct timeval tv;
>     struct timespec ts;
>     unsigned i = 1U<<20; /* default iteration count: 1M */
>     const char *mode;
>
>     while (-1 != (opt = getopt(argc, argv, "i:"))) {
>         switch (opt) {
>         case 'i':
>             /* parse the count by hand so "10M"/"512K" suffixes
>              * work (uses GCC's case-range extension) */
>             i = 0;
>
>             for (; *optarg; optarg++) {
>                 switch (*optarg) {
>                 case '0' ... '9':
>                     i *= 10;
>                     i += *optarg - '0';
>                     break;
>                 case 'M': case 'm':
>                     i *= 1U<<20;
>                     break;
>                 case 'K': case 'k':
>                     i *= 1U<<10;
>                     break;
>                 }
>             }
>
>             break;
>         }
>     }
>
>     /* GCC's a ?: b shorthand; default mode is "gtod" */
>     mode = (argv[optind])? : "gtod";
>
>     if (!strcmp(mode, "gtod")) {
>         while (i--) {
>             gettimeofday(&tv, 0);
>             __asm__(""); /* compiler barrier so the loop isn't elided */
>         }
> #if defined(CLOCK_REALTIME)
>     } else if (!strcmp(mode, "rt")) {
>         while (i--) {
>             clock_gettime(CLOCK_REALTIME, &ts);
>             __asm__("");
>         }
> #endif
> #if defined(CLOCK_MONOTONIC)
>     } else if (!strcmp(mode, "mt")) {
>         while (i--) {
>             clock_gettime(CLOCK_MONOTONIC, &ts);
>             __asm__("");
>         }
> #endif
>     } else {
>         errx(EXIT_FAILURE, "%s: unknown mode", mode);
>     }
>
>     return 0;
> } /* main() */
>