Re: gettimeofday() and clock



Mads Bondo Dydensborg wrote:
> On Mon, 2 Sep 2002, Steve Baker wrote:

>>but in practice it can't wake
>>your program up any faster than the kernel timeslice - which is 1/50th 
>>second.
> 
> 
> is wrong. Not the first part, only the latter, and this may only be
> because I misread it (I answer these mails too late in the evening,
> English is not my native language, I tend to misunderstand the core of a
> thread sometimes). I believe the kernel _can_ wake you up in HZ of a
> second, which is why the original poster saw sleeps of 10 seconds + 10 ms.
> 
> You are investigating something slightly different though; what is the
> minimal delay to be _put to sleep_ and _woken up_.  This may very well be
> 20 ms on Intel - at least if you do not want to eat up the cycles.

I'm getting very confused.  I presumed that calling 'usleep' (or 'sleep')
causes the following train of events:

   1) The current process gives up the remainder of its timeslice,
      which the kernel immediately gives to the next most deserving
      process...or halts the CPU to save power if there is nothing
      to run.

   2) If that process doesn't give up the CPU (or if we halted), then at
      the next (100Hz maybe) timer interrupt, the kernel forcibly takes
      control and examines the list of processes that want to run.  Since
      my process's 'usleep' timer expired *many* milliseconds ago, it's
      again eligible to be run...so...

   3) My process should wake up and continue to run.

Since my little test program consumes almost zero CPU time, and
immediately goes back for a 1ms sleep, that *ought* to mean that
on an idle system, it gets awoken every 10ms.

However, (at least on my system), it only wakes up every 20ms...well,
19.9ms or so.

That would be consistent with a 50Hz kernel rate - but if the kernel
really wakes up at 100Hz, then we have to ask *WHY* it didn't restart
my little process as close as possible to its requested sleep period,
but instead waited until the FOLLOWING timeslice in order to do that.

> I am quite sure you can not _reliably_ sleep for less than
> something-in-the-ballpark-of 200ms, actually, depending on your hardware.

I've tested this *very* carefully at work.

If there are other processes running - then of course all bets are off - and
even if the machine is totally free of user-initiated processes, there are still
*rare* occasions where other processes might bump you out.

However, in a generally quiet machine, you can run for a minute or more with
solidly consistent 20ms usleep times.  That's "good enough" for games.

With more care (such as in the embedded systems I work on), you can turn off
*all* annoying background processes.  We had a system running for an entire
weekend without missing a single 20ms tick.  However, we *never* see 10ms
sleeps unless the process had been running for >10ms before it slept.

>>Maybe it's time for us games/graphics types to hang out on the kernel
>>mailing list and lobby for a higher rate.
>>
>>Whatever the rate is, it's been the same since 33MHz 386's - and now
>>we are close to having 3.3GHz CPU's (100 times faster), asking for a mere
>>10x speedup in the kernel's update rate seems a rather modest request.
> 
> 
> Not only that, with increased caches and memories, the cost of switching 
> contexts should have gone down (although I am unsure about the virtual 
> page tables).

Yes.

>>The impact on the ease of writing games and other graphics-related
>>packages would be significant.
> 
> 
> But Linux would still be a soft realtime system though...

Yes - but for games, 'soft' realtime is 'good enough'.

----------------------------- Steve Baker -------------------------------
Mail : <sjbaker1@airmail.net>   WorkMail: <sjbaker@link.com>
URLs : http://www.sjbaker.org
        http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net
        http://prettypoly.sf.net http://freeglut.sf.net
        http://toobular.sf.net   http://lodestone.sf.net