
Re: gettimeofday() and clock



Gregor Mückl wrote:
> Steve Baker wrote:

> My game engine indirectly proves that time slices are 10ms on kernel
> 2.4.0. I should perhaps mention that it is multithreaded. The rendering
> thread measures the time it needs to render the scene. Currently
> there is only one small test scene with about 10 polys (and that on a
> GF3 - don't know what I bought it for ;) which renders in about 1ms. But
> from time to time it measures a time of 10 or 11 ms. I'm not giving up
> parts of my time slices, though.

Hmmm - OK - I guess that settles the 'kernel update rate' question - but
it still doesn't explain why my little test program doesn't manage to
usleep for less than ~20ms.
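
For the record, the test is nothing fancier than this (a minimal
sketch - the exact numbers will obviously depend on kernel version
and load):

/* Ask usleep() for 1ms and measure what we actually get. */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main ( void )
{
  int i ;

  for ( i = 0 ; i < 10 ; i++ )
  {
    struct timeval t0, t1 ;
    long us ;

    gettimeofday ( &t0, NULL ) ;
    usleep ( 1000 ) ;                        /* request 1ms */
    gettimeofday ( &t1, NULL ) ;

    us = ( t1.tv_sec  - t0.tv_sec  ) * 1000000L +
         ( t1.tv_usec - t0.tv_usec ) ;
    printf ( "slept for %ld usec\n", us ) ;  /* I see ~20000 here */
  }

  return 0 ;
}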

> The reason why time slices seem to be around 20ms is probably that
> processor time is deliberately given up. If I remember the scheduling
> algorithm correctly, this might lead to a lower priority for this
> process, because it already had the last time slice and gave up part of
> it (i.e. it doesn't seem to have much to do at the moment, which would
> be a correct assumption for a program waiting for data to arrive).

I'm not 'up' on what the Linux kernel does - but that would be exactly
the opposite of the original UNIX algorithm.

UNIX would say "If you give up your timeslice early - then you are a
good, friendly kind of application who will probably give up the CPU
again next time - so I'll improve your process priority a little."
Processes that hogged the CPU for an entire time-slice would then be
penalised slightly.

This guaranteed that heavy compute processes would be pushed further
into background - and processes that were blocking on I/O (like word
processors or interactive shells) would remain responsive.

So, my little test application (which is ultra-friendly and gives up
its timeslice as soon as it gets it) should be running at a good
priority level.
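
In rough pseudo-C, the flavour of the traditional scheme I mean is
something like this (just a sketch of the idea - NOT the real Linux
scheduler code):

/* Traditional UNIX-style dynamic priority adjustment - a sketch of
   the idea only, not the actual Linux implementation.
   Smaller number == better priority.                               */
struct proc { int dyn_priority ; } ;

void end_of_timeslice ( struct proc *p, int used_whole_slice )
{
  if ( used_whole_slice )
    p->dyn_priority++ ;      /* CPU hog - penalise it slightly      */
  else if ( p->dyn_priority > 0 )
    p->dyn_priority-- ;      /* gave up the CPU early - reward it   */
}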

>> I also believe the 50Hz figure because the *original* UNIX on PDP-11's 
>> used
>> 50Hz - and I presumed that Linus picked the same number for 
>> compatibility.
> 
> This statement about compatibility doesn't make any sense in my eyes.

No?  OK - well, whatever...it made sense to me.

> This program really gives up two time slices here whenever usleep() is 
> called. See above for possible reasons.

Why?  I don't understand your reasoning.

In what possible way could I change it to only give up one slice?
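
(The only mechanism I can think of for losing two slices is that the
kernel rounds the requested delay up to a whole timer tick and then
adds one more tick, so that the sleep is guaranteed to last *at
least* as long as requested.  With HZ=100 that turns even a
usleep(1000) into two ticks, i.e. ~20ms - which would match what I
measure.  Is that what you mean?)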

>> I'd *really* like to see a shorter timeslice on Intel.  With
>> 2GHz CPU's, even 100Hz is like an eternity.
>>
> 
> But it should be sufficient with target frame rates of 20-30fps. Any
> framerate higher than 30fps does not lead to perceivably better
> rendering of *motion*.

That's nonsense.  I'm a professional graphics programmer and I can
absolutely assure you that the difference between 30Hz and 60Hz is
like day and night.  In the flight simulation business, we couldn't
sell a simulator that ran graphics at 30Hz.  We haven't sold a 30Hz
system for 10 years.

At 30Hz, each image is drawn twice onto the faceplate of a 60Hz CRT.
Your eye/brain interpolates the position of the images it sees and
the only way to rationalize an object that moves rapidly for 1/60th
second and then stops for 1/60th second is to presume that there are
really *TWO* objects moving along right behind one another.  This
double-imaging artifact is really unpleasant in any kind of fast-moving
application.
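
To put numbers on it: if your eye is smoothly tracking an object that
crosses a 1280-pixel screen in two seconds (640 pixels/sec), the two
identical presentations of each 30Hz frame land roughly 640/60 = ~11
pixels apart on your retina.  That reads as two objects, one chasing
the other - not as one object.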

> The human eye just isn't fast enough for that. 

Not so.  You can perceive motion up to around 70 to 120Hz - depending
on the individual, the room lighting conditions and whether the image
is centered on your eye or viewed with peripheral vision.  That's why
higher quality graphics card/CRT combos are run at 72 or 76 Hertz.

> Anything else is a red herring. During the development of the IMAX
> cinema format, research was conducted into this area, with the result
> that higher framerates than the one IMAX uses for the films are a
> plain waste of celluloid.

Uh - uh - bad analogy.

That's because movie film has built-in motion blur that performs a
degree of 'temporal antialiasing'.  Graphics systems have (in effect)
an infinitely fast 'shutter speed' - film cameras (and TV cameras for
that matter) do not.

> But why can we see a difference in monitor refresh rates of 60Hz and
> 100Hz? The answer is that this is flickering, i.e. changes in light
> intensity, not colour changes (these carry most of the motion
> information). You cannot perceive the monitor flickering if you let a
> small white rectangle move on an otherwise black screen. If the
> rectangle stood still, the flickering would become visible.

The opposite is true.  If the image is stationary, there is *only* flicker
to consider - which can be 'fixed' with longer persistence phosphors, etc.

For a moving image, the double-imaging effect at 30Hz is very pronounced.

> Cinema projectors use a little trick to prevent flickering: They usually
> run the celluloid at a rate of 24 frames/sec. But the shutter doesn't
> open and close just once per frame, as you would probably expect - it
> actually opens twice or three times per frame, simulating a screen
> refresh rate of 48Hz or 72Hz depending on the projector.

Yes - but fast action sequences on traditional (non-IMAX) frame rates
**SUCK**.  And in any case (as I already explained) the motion-blur
inherent in the nature of a film camera magically gets rid of the
problem.

Doing generalized motion blur in interactive computer graphics is an
unsolved problem.
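
The nearest thing to a brute-force fix is to render several sub-frames
per displayed frame and average them - e.g. with the OpenGL
accumulation buffer.  Here's a sketch of the idea (draw_scene_at() is
a stand-in for whatever your renderer does); the catch is that it
multiplies the rendering cost by the number of sub-frames, which is
exactly the cost you can't afford at 60Hz:

#include <GL/gl.h>

extern void draw_scene_at ( double t ) ;  /* render the scene as at time t */

void draw_motion_blurred_frame ( double frame_time, double frame_period )
{
  const int n = 4 ;                  /* sub-frame samples per frame  */
  int i ;

  glClear ( GL_ACCUM_BUFFER_BIT ) ;

  for ( i = 0 ; i < n ; i++ )
  {
    draw_scene_at ( frame_time + frame_period * i / n ) ;
    glAccum ( GL_ACCUM, 1.0f / (float) n ) ;  /* accumulate scaled sub-frame */
  }

  glAccum ( GL_RETURN, 1.0f ) ;      /* write the average back out   */
}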

>> A 1000Hz kernel timeslice - and (in consequence) a usleep that was
>> accurate to ~1ms instead of ~20ms - would solve a *TON* of problems
>> for graphics programs that want to run at 60Hz and yet still use
>> 'usleep' to avoid blocking the CPU.
>>
> 
> Why do you want to run at 60Hz at all cost?

I think I've explained that.

I can absolutely assure you that 60Hz is the MINIMUM frame rate I'd
consider for most applications.  76Hz would be preferable.
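
To be concrete, the kind of main loop I have in mind is something like
this (only a sketch - draw_frame() stands in for the real render and
buffer swap):

#include <unistd.h>
#include <sys/time.h>

extern void draw_frame ( void ) ;    /* render the scene + swap buffers */

static double now ( void )           /* seconds, via gettimeofday()     */
{
  struct timeval t ;
  gettimeofday ( &t, NULL ) ;
  return t.tv_sec + t.tv_usec / 1000000.0 ;
}

void run_at_60Hz ( void )
{
  const double period = 1.0 / 60.0 ; /* ~16.7ms per frame               */

  for ( ;; )
  {
    double start, spare ;

    start = now () ;
    draw_frame () ;
    spare = period - ( now () - start ) ;

    /* This usleep needs ~1ms accuracy to be useful: with ~20ms
       granularity it blows the whole frame budget.               */
    if ( spare > 0.0 )
      usleep ( (unsigned long)( spare * 1.0e6 ) ) ;
  }
}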

> Oh, and did I mention that increasing the scheduling frequency from 
> 100Hz to 1kHz causes the scheduler to use ten times as much processor 
> power?

Sure - but we have 100 times more power than we did when Linux first
ran on PCs - so using 10 times more shouldn't be a huge problem.  Moreover,
that fraction is decreasing with Moore's law.

The alternative is a bunch of applications busy-waiting - either by hand
or via nanosleep (which, under a real-time scheduling policy, busy-waits
in the kernel for short delays) - instead of using usleep, and burning
CPU time for no gain...I think that's worse because, unless the kernel
improves, the percentage of wasted CPU time will stay the same no matter
how fast the CPU runs.
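
In other words, something like this instead of a sleep - accurate to a
few microseconds, but it eats an entire CPU doing nothing useful while
it waits:

#include <sys/time.h>

/* Spin on gettimeofday() until 'deadline' (in seconds since the
   epoch) has passed.  Precise - and 100% CPU while we wait.       */
void spin_until ( double deadline )
{
  struct timeval t ;

  do
  {
    gettimeofday ( &t, NULL ) ;
  } while ( t.tv_sec + t.tv_usec / 1000000.0 < deadline ) ;
}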

----------------------------- Steve Baker -------------------------------
Mail : <sjbaker1@airmail.net>   WorkMail: <sjbaker@link.com>
URLs : http://www.sjbaker.org
        http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net
        http://prettypoly.sf.net http://freeglut.sf.net
        http://toobular.sf.net   http://lodestone.sf.net