Re: Of time scaling and FPS
"Miguel A. Osorio" wrote:
> Hey people,
> I have some questions about time scaling, and a little FPS problem I'm
> having on my project, to discuss. About time scaling, just to make sure:
> would the correct way to calculate one frame's time be:
> // Main loop - no time scaling
> The calculation I'm talking about would be something like:
> // Main loop
> tf = gettime()
> delta = tf - ti
> ti = gettime()
> update_states_and_variables() // Use delta here on time dependent
> Is this one right way to do it?
That's one way... so all of your equations of motion say things like:
position += speed * delta ;
...the other way is to decide on a 'nominal' frame rate that's significantly
higher than you ever intend to render (say 120Hz) and each frame to run the
'update states and variables' code in a loop:
  static float remaining_time = 0 ;

  remaining_time += delta ;

  while ( remaining_time > 0 )
  {
    remaining_time -= 1.0 / 120.0 ;
    update_states_and_variables () ;
  }
...now your update routine can have much simpler non-time-dependent code
that says things like:
position += speed ;
This doesn't generate motion that's as smooth as the other way unless
you can run the update routine *MUCH* faster than the renderer though - so
you end up doing more work.
However, for some genres of game, this is the way to go because as the
programmer, you have better control. For example, if you have a character
that jumps - then with the first approach, the *exact* maximum height
he'll reach will depend to some slight degree on the update rate of your
graphics...not by much - but enough that if you've *carefully* designed the
scene so that he has to jump *just* right to land on some platform - then
on some computers the jump will be easy and others it'll be impossible.
Without a boatload of testing and annoying tweaking of initial accelerations
and stuff, it'll be a pain.
With the second way, it's easy. The update routine always runs the exact
same number of times on all machines - it's just the graphics that'll be
varying in speed.
So long as your update routine takes a *small* amount of time relative to
graphics, sound and everything else, this will work just fine. However,
if you find a dog-slow computer, you may find that by eating a constant
number of CPU cycles on the update routine no matter how fast the computer,
you'll make a slow machine update even more slowly than it would have...
to the point where you may never actually do any graphics!
However, that's a lot easier to test and control than the first case.
My TuxKart game works the first way - and on slow machines, I get bad
problems with collision detection and the way one object ricochets
off another - Karts keep driving through walls and stuff like that.
It's a real pain to fix it. I wish I'd implemented it the second way.
There is a third way - pick an update rate that all reasonable computers
can manage - and slow all faster machines to that rate. Anything that
can't make your minimal performance is declared "too slow to run this
program". Now you don't need any time code at all - everything can
just be hard coded. My original TuxAQFH game works that way - and it's
great! You don't get a 200Hz frame rate on a GeForce4 Ti 4600 - in fact
you get the exact same speed as you get on a Voodoo-2 - but it's plenty
fast enough to be playable - and it's a lot easier to write - so why would
you care? In fact, you could even adjust the amount of detail you render
to keep the frame rate around your nominal rate so that you'd get nicer
graphics on a fancy modern card.
I'm not particularly advocating one way over the other - I'm merely
explaining the alternatives.
> Anyway, on to the FPS problem. Using the method described above, I use
> delta to scale my time-dependent variables, in order to get some smooth
> animation. Thing is, I don't. The whole animation looks all jagged up,
> yet without time scaling it runs fine. Do note, however (I don't know
> why), that the FPS mark keeps jumping about *all the time* - it never
> settles on some average. Does anyone know a reason for this? Or am
> I doing the whole time scaling calculation the wrong way?
Probably - yes. It's easy to overlook some variable that needs to be
scaled by the time-step.
That doesn't explain why your rendering time is fluctuating so much though.
Start sticking gettimeofday calls all over your code and see what's
varying. I find it useful to store tens of seconds of timings into
a large buffer and to dump them out periodically in a format that
gnuplot can understand - then you can draw pretty graphs to show
how long each part is taking...that's really useful in the long term.
----------------------------- Steve Baker -------------------------------
Mail : <email@example.com> WorkMail: <firstname.lastname@example.org>
URLs : http://www.sjbaker.org
http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net