
Re: perceiving fast motion (was: Re: gettimeofday() and clock)



Steve Baker wrote:
> Gregor Mückl wrote:
> 
>> This program really gives up two time slices here whenever usleep() is 
>> called. See above for possible reasons.
> 
> 
> Why?  I don't understand your reasoning.
> 

The Linux scheduler uses a strange algorithm: every time the scheduler 
runs, runnable processes that do not get this time slice have their 
priority increased, while the process that does get the time slice has 
its priority decreased. The process with the highest priority (before 
priorities are updated, of course ;) gets the next time slice. Nice'd 
processes have a fixed offset added to their priority, which AFAIK is 
actually the value passed to the nice() system call.
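
Roughly, the heuristic I mean looks something like this (a made-up 
sketch with invented names, NOT the actual kernel code):

/* Sketch of the priority-aging heuristic described above. */
#include <stddef.h>

struct task {
    int priority;     /* dynamic priority: higher runs sooner          */
    int nice_offset;  /* fixed offset, AFAIK the value given to nice() */
    int runnable;     /* non-zero if the task could run right now      */
};

struct task *pick_next(struct task *tasks, size_t n)
{
    struct task *best = NULL;
    size_t i;

    /* Pick the runnable task with the highest effective priority
     * *before* any priorities are updated. */
    for (i = 0; i < n; i++) {
        if (!tasks[i].runnable)
            continue;
        if (best == NULL ||
            tasks[i].priority + tasks[i].nice_offset >
            best->priority + best->nice_offset)
            best = &tasks[i];
    }

    /* Age the priorities: the winner goes down, every other
     * runnable task goes up. */
    for (i = 0; i < n; i++) {
        if (!tasks[i].runnable)
            continue;
        if (&tasks[i] == best)
            tasks[i].priority--;
        else
            tasks[i].priority++;
    }

    return best;
}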

And the point here is that the scheduler does not seem to care whether 
the time slice is given up early (note that the scheduler code itself 
is awful to look at and I haven't examined it too closely). So the 
process priority goes down whether the time slice is given up early 
or not.

> In what possible way could I change it to only give up one slice?
> 

The only way I see is to actually do busy waiting - although that is 
bad, bad, bad! The reason is that no matter whether you call sleep() 
or usleep(), the nanosleep() syscall is invoked, and it has this 
(documented) bug.
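
Something like the following sketch, which spins on gettimeofday() 
instead of sleeping, so the process never gives up its slice (and 
burns CPU for the whole wait, which is exactly why it is bad):

#include <sys/time.h>

/* Busy-wait for roughly 'usec' microseconds without ever
 * giving up the time slice. */
static void busy_wait_usec(long usec)
{
    struct timeval start, now;
    long elapsed;

    gettimeofday(&start, NULL);
    do {
        gettimeofday(&now, NULL);
        elapsed = (now.tv_sec  - start.tv_sec) * 1000000L +
                  (now.tv_usec - start.tv_usec);
    } while (elapsed < usec);
}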

>>> I'd *really* like to see a shorter timeslice on Intel.  With
>>> 2GHz CPU's, even 100Hz is like an eternity.
>>>
>>
>> But it should be sufficient with target frame rates of 20-30fps. Any 
>> framerate higher than 30fps does not lead to perceivably better 
>> rendering of *motion*.
> 
> 
> That's nonsense.  I'm a professional graphics programmer and I can
> absolutely assure you that the difference between 30Hz and 60Hz is
> like day and night.  In the flight simulation business, we couldn't
> sell a simulator that ran graphics at 30Hz.  We haven't sold a 30Hz
> system for 10 years.
> 
> At 30Hz, each image is drawn twice onto the faceplate of the CRT.
> Your eye/brain interpolates the position of the images it sees and
> the only way to rationalize an object that moves rapidly for 1/60th
> second and then stops for 1/60th second is to presume that there are
> really *TWO* objects moving along right behind one another.  This
> double-imaging artifact is really unpleasant in any kind of fast-moving
> application.
> 

Well, one point is wrong here: the human eye cannot tell that motion 
is rendered jerkily because the same frame is drawn during two 
refreshes. It is too slow for that.

What happens is something different: at the specified frame rate you 
generate a sequence of frames, each of which is drawn for only one 
screen refresh. The human eye cannot distinguish between these frames, 
but perceives a series of them laid over each other. Imagine that you 
alpha-blended your frames one over the other; that is the picture the 
human eye sees. So there is a difference at a higher frame rate: the 
eye sees the same object more than once, but each occurrence with a 
lower intensity. In other words, you are creating a sort of motion 
blur for the human eye - if the object moves slowly enough. If it 
exceeds a certain speed (depending on object size), the double-imaging 
artifact will appear because the human brain can no longer interpolate 
correctly.

>> The human eye just isn't fast enough for that. 
> 
> 
> Not so.  You can perceive motion up to around 70 to 120Hz - depending
> on the individual, the room lighting conditions and whether the image
> is centered on your eye or viewed with peripheral vision.  That's why
> higher quality graphics card/CRT combos are run at 72 or 76 Hertz.
> 

Impossible. Impulses from the receptors in the human eye usually last 
for about 1/25th of a second, so the eye cannot resolve shorter time 
periods than that. A sequence of frames at a rate higher than 30fps 
therefore gets blurred by the eye.

>> Anything else is a red herring. During the development of the IMAX 
>> cinema format, research was conducted into this area with the result 
>> that framerates higher than the one IMAX uses for its films are a 
>> plain waste of celluloid.
> 
> 
> Uh - uh - bad analogy.
> 
> That's because movie film has built-in motion blur that performs a
> degree of 'temporal antialiasing'.  Graphics systems have (in effect)
> an infinitely fast 'shutter speed' - film cameras (and TV cameras for
> that matter) do not.
> 

This is right. I haven't taken that into account.

> For a moving image, the double-imaging effect at 30Hz is very pronounced.
> 
>> Cinema projectors use a little trick to prevent flickering: They 
>> usually process celluloid at a rate of 24 frames/sec. But the shutter 
>> opens and closes not only once per frame as you would probably expect, 
>> but actually opens twice or thrice to simulate a screen refresh rate 
>> of 48Hz or 72Hz depending on the projector.
> 
> 
> Yes - but fast action sequences on traditional (non-IMAX) frame rates
> **SUCK**.  And in any case (as I already explained) the motion-blur
> inherent in the nature of a film camera magically gets rid of the
> problem.
> 
> Doing generalized motion blur in interactive computer graphics is an
> unsolved problem.
> 

Really? Here is an article I found while doing a little research for 
this posting. It describes a method which could in fact be very 
promising:

http://www.acm.org/crossroads/xrds3-4/ellen.html

Another, simpler idea I had before I stumbled over this article is to 
approximate the movement of the object by a series of linear segments, 
each one as long as the time for which the frame is displayed. While 
rendering the corresponding frame, you then blur the object along this 
line segment by drawing it several times (using alpha blending, of 
course) at different positions along the line, as sketched below. You 
could get really sophisticated and make the number of drawing steps 
depend on the object's speed and distance from the viewer. One 
possible drawback I see with this method is that it might irritate the 
viewer when the object being rendered changes direction noticeably 
from one frame to the next.
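
A rough, untested sketch of what I mean, in old-style OpenGL; 
draw_object() is just a placeholder for whatever actually renders the 
object at the origin:

#include <GL/gl.h>

extern void draw_object(void);   /* placeholder for the real renderer */

/* Draw the object 'steps' times along the segment from 'from' to
 * 'to', each pass with alpha 1/steps, so the passes blend into an
 * approximation of motion blur over this frame's movement. */
void draw_motion_blurred(const float from[3], const float to[3],
                         int steps)
{
    int i;

    if (steps < 2)
        steps = 2;

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (i = 0; i < steps; i++) {
        float t = (float)i / (float)(steps - 1);
        float x = from[0] + t * (to[0] - from[0]);
        float y = from[1] + t * (to[1] - from[1]);
        float z = from[2] + t * (to[2] - from[2]);

        glPushMatrix();
        glTranslatef(x, y, z);
        glColor4f(1.0f, 1.0f, 1.0f, 1.0f / (float)steps);
        draw_object();
        glPopMatrix();
    }

    glDisable(GL_BLEND);
}

The number of steps would be the place to plug in the speed/distance 
heuristic mentioned above.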

>>> A 1000Hz kernel timeslice - and (in consequence) a usleep that was
>>> accurate to ~1ms instead of ~20ms - would solve a *TON* of problems
>>> for graphics programs that want to run at 60Hz and yet still use
>>> 'usleep' to avoid blocking the CPU.
>>>
>>
>> Why do you want to run at 60Hz at all cost?
> 
> 
> I think I've explained that.
> 
> I can absolutely assure you that 60Hz is the MINIMUM frame rate I'd
> consider for most applications.  76Hz would be preferable.
> 

High frame rates are hacks. Only better rendering algorithms are the 
final solution IMO. How these algorithms would work is pretty clear, 
but they need a vast amount of computational power. And I'm actually 
in the mood to hack up a little demo to show what I mean. Maybe 
tomorrow evening.

Gregor