Game Loop and Simulation
I'm interested in the game update loop as applied to networked RTS
games. I would like to start some serious discussion on this issue. To
begin, I have reference material from a few sources.
First, Andrew Rollings and Dave Morris write about game loops in their
book Game Architecture and Design:
----- BEGIN QUOTE -----
These game developers attempted to use the frame rate as a timer. They
would choose a frame rate (possibly dictated by the hardware) -- say,
thirty frames per second (fps) -- and construct all their game logic
around that. There would be one AI tick every frame, so, doing the math,
you can see that each game object would be updated to move the distance
it could normally move in each thirtieth of a second. Note that this
technique is still used with platforms that are limited in power and
universally identical, such as the Game Boy. This technique has the
least overhead. No special effort is required to dynamically adjust the
game speed.
This was all fine and good, but there were two main problems. The first
was that owners of faster machines weren't too pleased that their
brand-new game didn't take advantage of their machine's speed, and that
the game looked exactly the same on their friend's machine that had a
processor half as fast.
The second (and rather more serious) problem was that, below a certain
threshold level of processing power, the game would suddenly halve in
speed. If the AI tick took just longer than one frame to calculate, then
every other frame would be skipped, resulting in a drop in speed. The
same sort of thing could happen if a frame took too long to draw. All in
all, a bit of a disaster.
Fortunately, some thinking was done on the subject, and a couple of
suitable solutions were developed. I'll call them semi-decoupling and
full decoupling.
Semi-decoupling is the simpler of the two solutions. The trick is to
decouple the frame rate update from the AI ticks. This technique is
suitable for games that don't particularly stretch the hardware of the
average midrange machine, but also want to run acceptably well on the
lower-end machines.
The AI runs at a fixed rate as before, but the frame rate just runs as
fast as it possibly can. Assume the AI is running at 30 ticks per second
as before. A separate loop draws each frame, and then waits for the next
AI tick to be complete before drawing the next one. This means that, if
the frame takes too long to draw, it would not be such a disaster as
before. The AI loop would continue independently, because it wouldn't be
stalled waiting for the frame to complete, and the next frame to be
drawn would be correct. The player may notice a drop in frame rate, and
some of the animation may be slightly jerkier, but the important thing
is that the internal AI would still be running correctly. The main
disadvantage of this technique is that your maximum frame rate is
limited to the fixed tick rate of the AI loop. There is no point in
updating the screen if nothing has changed in the AI, so the drawing
loop will be idle while this is going on. This technique is fine if you
are sure that your AI is going to be run at a constant rate and you
don't mind being limited to that rate. But what if you want your frame
rate to be the maximum possible, irrespective of the speed of the AI
loop? This is where full decoupling comes in. Most games today use this
technique.
Full decoupling attempts to run the AI loop as fast as possible. A
reference timer is used to adjust the changes in game object states per
AI tick, so that the game does not appear to speed up and slow down.
This means that, on faster machines, there will be more AI ticks per
second, so, for each tick, the delta value (which is the length of the
tick) will be smaller. On slower machines, ticks will
take longer, and so the delta value will be larger. Imagine that we are
animating a clockface that has one hand. This hand rotates once every
second. Internally, the AI tick updates this clock according to the
delta value. The clock is drawn at the end of each AI tick. Obviously,
the smoothness of the animation depends on the frequency of the AI ticks.
The animation will still show the hand performing a full rotation every
second, but the smoothness of the animation will vary.
This is a pretty smart technique that allows a program to max out frame
rates, but it is not *true* full decoupling. The frame rate is still
limited by the rate of AI ticks. It is impossible to have more than
that. To achieve full decoupling would be difficult, but possible. To do
this, the AI loop and the frame-update loop run completely separately,
most likely in separate threads. Then the frame loop is given the
ability to take a "snapshot" of the state of the AI loop whenever it
likes (assuming that it is not so often that it chokes the AI loop), as
long as at least one visible object has been processed. Care has to be
taken to make sure that an object is not halfway through being processed
when the snapshot is taken (it may be in an internally inconsistent
state), and the easiest way to achieve this is to set the snapshot
granularity to the object level.
----- END QUOTE -----
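To make the book's clock example concrete, here is a quick sketch of the
delta-scaled update (my own code, not from the book; the function name and
rates are made up):

```python
# Sketch of the "full decoupling" update described above: the per-tick
# state change is scaled by the tick's delta (the length of the tick), so
# the clock hand completes one rotation per second no matter how many AI
# ticks the machine manages.

def simulate_clock(ticks_per_second, duration_s=1.0):
    """Advance a one-handed clock using delta-scaled AI ticks."""
    delta = 1.0 / ticks_per_second       # seconds per tick; smaller on fast machines
    angle = 0.0                          # hand position, in full rotations
    ticks = int(duration_s * ticks_per_second)
    for _ in range(ticks):
        angle += 1.0 * delta             # hand rotates once per second
        # (the clock would be drawn here, at most once per AI tick)
    return angle

# A fast machine (60 ticks/s) and a slow one (12 ticks/s) both end with one
# full rotation; only the number of intermediate positions differs.
fast = simulate_clock(60)
slow = simulate_clock(12)
```

Both runs produce the same final state; the smoothness of the animation is
all that varies, which is exactly the trade-off the quote describes.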
Second, Dave Pottinger of Ensemble Studios discusses different lengths
for the main game update loop in his article Coordinated Unit Movement.
So, assume I wish to run a distributed (peer-to-peer) discrete event
simulation. That is, I want the exact same thing to happen on each node.
Forget animation, and consider just the simulation itself.
The simplest method, the one I've been planning on until now, is to use
a fixed frame rate. For example, suppose I choose 20fps. Then every node
runs the simulation at 20 ticks per second, and attempts to draw a frame
of animation for each tick. If the computer is faster, then it waits. If
it is slower, it drops frames of animation. If it is too slow, then the
entire distributed simulation runs at the speed of the slowest node, or
that node is dropped.
In this case, the fixed frame rate is critical to the synchronizing of
the distributed simulation. Every node must perform the same
computations, in the same order.
Suppose a unit is at location (100,100). It wishes to move to (200,200).
Its movement speed is 4 per tick. Pythagoras tells us the distance is
approximately 141.42. Rounding up, it will take us 36 ticks to make the
journey. That is, 1.8 seconds. Each node can use floating point
calculations to handle the movement, and round the unit's location to
the nearest map point each tick. That puts the simulation always in
discrete integer locations, for any collision/pathing calculations
during that journey.
The key point is: those 36 integer locations along the way must be
identical on every node. It's okay for the floating calculations to be
slightly different (say, different precision), as long as they do not
cause the intermediate integer locations to diverge, even once.
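That journey math can be sketched like this (my own code; the helper name
is made up):

```python
import math

def journey_positions(start, goal, speed_per_tick):
    """Integer map locations a unit occupies on each tick of its journey.

    Floating point handles the movement, but each tick's position is
    rounded to the nearest map point, as described above. Every node must
    derive these same integer positions.
    """
    sx, sy = start
    gx, gy = goal
    dist = math.hypot(gx - sx, gy - sy)       # ~141.42 for the example
    ticks = math.ceil(dist / speed_per_tick)  # rounds up to 36 ticks
    positions = []
    for t in range(1, ticks + 1):
        frac = min(t * speed_per_tick / dist, 1.0)
        positions.append((round(sx + frac * (gx - sx)),
                          round(sy + frac * (gy - sy))))
    return positions

path = journey_positions((100, 100), (200, 200), 4.0)
# 36 positions; at 20 ticks per second that is the 1.8 second journey
```

The danger spot is exactly the one noted above: a tiny float difference
between nodes only matters if it pushes one of those round() calls to a
different integer.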
So, a fixed frame rate is simple to handle, computationally, in the
distributed simulation. How can I use a less coupled game loop and still
maintain the integrity of the simulation?
Let's suppose first that one node runs at 20 ticks per second, but the
other is more powerful and can run at 30 ticks per second. Can this be
made to work?
Both must make that same journey in 1.8s. The first node can do that in
36 ticks, but the second node is trickier. In that time, it will
simulate 54 ticks. Because of that, the unit will move 2.67 units each tick.
If a collision happens during that journey, it may occur at different
time and space coordinates on the different nodes! Trivially, because of
the finer granularity of the second node, it may collide earlier.
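A toy example of that divergence (my code; the obstacle position and
contact radius are invented): each node checks for contact only on its own
ticks, so the finer-grained node can register it at a different moment.

```python
import math

def first_contact_time(ticks_per_second, obstacle, radius,
                       start=(100.0, 100.0), goal=(200.0, 200.0),
                       speed_per_second=80.0):
    """Elapsed time at which a node, checking only on its own ticks, first
    sees the unit within `radius` of `obstacle` (None if it never does)."""
    dist = math.hypot(goal[0] - start[0], goal[1] - start[1])
    step = speed_per_second / ticks_per_second  # 4.0 at 20 tps, 2.67 at 30 tps
    tick = 0
    travelled = 0.0
    while travelled < dist:
        tick += 1
        travelled = min(tick * step, dist)
        frac = travelled / dist
        x = start[0] + frac * (goal[0] - start[0])
        y = start[1] + frac * (goal[1] - start[1])
        if math.hypot(x - obstacle[0], y - obstacle[1]) <= radius:
            return tick / ticks_per_second
    return None

# An obstacle near the midpoint of the journey:
t_slow = first_contact_time(20, obstacle=(150.0, 150.0), radius=1.5)
t_fast = first_contact_time(30, obstacle=(150.0, 150.0), radius=1.5)
```

In this toy setup the 20-tick node first sees contact at 0.9s (its tick
18), while the 30-tick node sees it at about 0.867s (its tick 26): same
physical situation, different collision time.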
Clearly, I cannot allow the simulation to run differently on each node.
One solution would be to have the faster node run two simulations, in
effect. It would run a 20 ticks per second simulation of the journey for
simulation purposes, and a 30 ticks per second simulation of the journey
for display purposes. But this seems wasteful. It allows the second node
to potentially display something that is not in fact happening (such as
melting into an object it should collide with, for a frame or two). And
it means deciding how slow to run the simulation.
That last point is key, I think. For a distributed simulation, I don't
see any way of getting around the fact that each node must run the same
simulation, at the same rate. Let's try this.
Let's say that each node can simulate as fast as it wants, but that
pathfinding, collision detection, etc. will only occur at a fixed rate
of 10 ticks per second. Further, let's require each node to have a tick
at that exact time. So, we are saying that each node can simulate at its
preferred granularity, but all the logic will occur at fixed ticks: 10
per second.
In this case, the slow node simulates an extra tick between every
"logic" tick. The fast node simulates two extra ticks between every
"logic" tick.
If the moving unit is going to have a collision 0.5s into its journey,
it will do so on each node. This will occur at tick 10 on the first node, and
tick 15 on the second. That's "logic" tick 5 for both.
Now, it may be that on the first node, the objects overlap on tick 9.
And on the second node, they may overlap on tick 13. But since those
aren't "logic" ticks, the collision doesn't occur. The simulations may
diverge in state, but not in logic. 10 times per second, they are in sync.
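That scheme can be sketched as follows (my code; the names are invented),
with the logic rate pinned at 10 ticks per second:

```python
# Each node simulates at its own rate, but "logic" (collision, pathing)
# runs only on ticks that land on the shared 10 Hz boundary.

LOGIC_RATE = 10  # shared logic ticks per second

def logic_ticks(node_rate, duration_s=1.0):
    """Which of a node's own ticks are shared logic ticks.

    Returns (local_tick, logic_tick) pairs. Requires the node rate to be a
    multiple of the logic rate, per the constraint above.
    """
    assert node_rate % LOGIC_RATE == 0, "node rate must contain the logic rate"
    step = node_rate // LOGIC_RATE       # node ticks per logic tick
    result = []
    for tick in range(1, int(duration_s * node_rate) + 1):
        if tick % step == 0:
            result.append((tick, tick // step))
    return result

slow = logic_ticks(20)   # logic on local ticks 2, 4, 6, ...
fast = logic_ticks(30)   # logic on local ticks 3, 6, 9, ...
```

Both nodes produce the same sequence of logic ticks 1 through 10, and the
0.5s collision check is logic tick 5: local tick 10 on the 20-tick node,
local tick 15 on the 30-tick node, exactly as in the example above.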
The above assumes each node runs a fixed simulation rate, but it could
easily be extended to variable rate, so long as the above constraints
are met. I worry about the precision of floating point numbers. How can
I be sure that simulation state is correct on each node, at each "logic"
tick?
I suppose I could always plot the unit's positions at those critical
ticks ahead of time, and even if it diverged on one node, "snap" the
unit's position to that location when appropriate.
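A sketch of that plot-and-snap idea (my code; names invented), using
integer-only arithmetic so every node derives identical positions
regardless of its floating point behavior:

```python
def plot_logic_positions(start, goal, total_logic_ticks):
    """Authoritative integer positions at each logic tick of a journey.

    Only integer arithmetic is used, so every node computes exactly the
    same values; any node whose local float position has drifted can snap
    back to these at each logic tick.
    """
    sx, sy = start
    gx, gy = goal
    return [(sx + (gx - sx) * t // total_logic_ticks,
             sy + (gy - sy) * t // total_logic_ticks)
            for t in range(1, total_logic_ticks + 1)]

# The 1.8 second journey spans 18 logic ticks at 10 logic ticks per second.
plan = plot_logic_positions((100, 100), (200, 200), 18)
```

A node whose float math has drifted at logic tick t just overwrites its
local position with plan[t-1], so divergence can never accumulate past one
logic interval.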
Anyways, I hope this made sense. I'd really like to hear serious
comments on all this discussion, as I'm hoping to decide how much of a
hassle all of this will be to implement, or if there are critical flaws
in my thinking.
Marc A. Lepage
Minion open source game, RTS game programming, etc.