
Re^2: Webpage: Submissions, ideas, etc.



Hi Adrian,


AR> > (1) Some time ago Adrian posted a draft of a general PenguinPlay
AR> > introduction. We could take this as our "main intro".
AR> I don't actually remember such a document, but I assume it exists.
AR> Perhaps someone can produce it.

I attached it at the end of this mail. It's really from you.


AR> >         * evtl. some visual development tools (GUI designer, network
AR> >           constructor, ...)
AR> (That last would be a long way off.)

That's why I wrote "evtl." (= possibly) ;)


AR> > (6) Is it possible to provide a facility for downloading (parts of) the
AR> > homepage as compressed archive (tgz or so) for offline reading? Perhaps
AR> > generated on the fly by some CGI script?
AR> Can we just use CVS?

*We* can use it, but even these guys using the internet exploder should be  
able to read our pages offline ;)


Cu
        Christian



<HTML>

<HEAD>
   <META http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
   <META Name="description" Content="PenguinPlay - Doesign">
   <META Name="keywords" Content="game, software, development, kit, linux,
    graphics, network, sound, input, api, spec, code, design">
   <META NAME="classification" CONTENT="General HTML">
   <META NAME="abstract" CONTENT="Overall design of PenguinPlay">
   <META NAME="units" CONTENT="All units are in pixels unless otherwise
specified">
   <META NAME="author" CONTENT="Adrian Ratnapala">
   <TITLE>PenguinPlay - Design Overview</TITLE>
</HEAD>


<BODY BgColor="#ffffff" Text="#000000" LINK="#FF0000" ALINK="#0000FF">





<H1>Overall Design of the PenguinPlay Game Development Kit</H1>

This document outlines the design of the <A Href="index.html">
PenguinPlay SDK</A>.  Its goal is that net hoppers will read
it and comment on what they like and, more importantly,
dislike about it.  So this is really a call for suggestions.
<p>
WARNING:  I have tried to write this document based on what we have
broad agreement about, but sometimes it is difficult to gauge what
exactly we agree on.  As a result this document is largely
my personal opinion, rather than that of the group, especially the stuff
about the Event Manager.
<p>
Also note that info is a bit sparse when it comes to things related
to teams other than graphics.  This is because I am a member of
the graphics team.



<H2>Overview</H2>

PP (or PenguinPlay, or the GSDK, or whatever) is divided up a couple of
ways.  One division is between the development teams: Sound, Graphics,
Network, Input and whatever else we think of.  Another division is
between layers.  PP is conceptually made of two layers with the
ooh so exciting names of Layer P and Layer O.
<p>
Layer P (P for Procedural) is a C API which provides all the basic
services of the GSDK.  So far we think this means:

<UL>
        <LI>Network protocol interface.
        <LI>Sound device interface.
        <LI>Low/medium level 2d and 3d graphics interfaces.
        <LI>I/O, transparent access to WAD files or something similar.
        <LI>Basic medium level input device interface.
        <LI>System event manager.
        <LI>Thread interface.
</UL>

Parts of layer P may have a C++ interface as well, I don't know; for the
most part, however, layer P is a C API.  Layer P is meant to be a
piecemeal set of tools which give access to the available resources.
Things which are conceptually part of layer P may not be part of the
GSDK at all; for example, the graphics libraries are unlikely to be
written by us, and layer P 3d graphics will be OpenGL.  More friendly,
coherent APIs can then be built on top of layer P.  The only such API we
plan to implement is Layer O.

<p>

Layer O (O for Object) is an object oriented C++ API.  This is
definitely <em>not</em> simply a C++ wrapper for layer P.  Layer
O supplies real functionality not available in layer P.  The
idea is that layer O will be a nice API which provides classes
for things like sprites, players, monsters etc.
So far we (or at least I) think layer O will contain:

<UL>
        <LI>High Level 2d.  Animation, moving sprites, correct draw order etc.
        <LI>High Level 3d.  3d solid objects, polygon culling mechanisms etc.
        <LI>Gui Framework.  Windows, events etc.
        <LI>Control Structures.  Collision detection, trajectories, AI and stuff.
        <LI>I/O.  An object persistence framework which uses layer P I/O.
</UL>
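<p>
None of this is settled, but just to give a flavour, here is a rough,
purely hypothetical sketch of what a layer O sprite class might look
like.  Every name in it is made up for illustration:
<PRE>
// Hypothetical sketch only -- layer O has not been designed yet.
class ppSprite {
public:
    ppSprite() : x_(0), y_(0), frame_(0) {}

    void MoveTo(int x, int y) { x_ = x; y_ = y; }   // position in the world
    void SetFrame(unsigned f) { frame_ = f; }       // pick an animation frame

    // In the real library this would queue a drawing request with the
    // high level 2d system (see "2d Graphics" below).
    void Draw() const {}

private:
    int x_, y_;
    unsigned frame_;
};
</PRE>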


<H3>Portability</H3>

Nothing in either the Layer P or the Layer O APIs should be platform
specific.  Thus a programmer should be able to trivially port a
PenguinPlay program to any system that PenguinPlay is supported on
(if it was written properly in the first place).
<p>
In terms of porting the library itself, things are more difficult.  Since
layer O is built on top of layer P, wherever layer P gets ported,
layer O should follow without much difficulty.  However, since layer
P is low level, some parts of it might take a while to port.  In the
meantime layer O might be ported first.
<p>
For example, suppose we do a port to Win95 and it turns out that Layer P
sound is really hard to port.  Never fear: whatever Layer O
support there is for sound should be quite abstract, and it
should not be a problem for layer O to use the standard Win95
sound interface directly.
<p>
This means that layer O programs may be more portable than
straight layer P programs.  However, some core parts of layer
P, such as the Event Manager, will probably always have to be
ported before layer O.



<H2>Components of Layer P</H2>

Here are some more detailed descriptions of the different
components of layer P.


<H3>Network</H3>

Don't know much about this.  Need input from the network people.
For now check out the network team page.  LINK COMING.


<H3>Sound</H3>

Again I claim ignorance.  Sound people should write this section.  I
suspect however that layer P sound will be something like what is
contained in the sample header <A Href="sound.h">sound.h</A>.
Be warned: I think the contents of that file are quite old and
things may have changed.  Check out the sound team page.

<H3>Input</H3>

This is just my overview; it needs to be rewritten by one of the
input people.
<p>
In layer P there will be a couple of distinct libraries, one for
each kind of device: mouse, keyboard, joystick etc.  (BTW why are
they separate libraries?  Is it so you can swap drivers for
different protocols, or what?)  On top of these lies the main
library, which I assume is supposed to provide the device independent
functionality of the input system.
<p>
Apparently stuff like device abstraction will go into the layer O
API.




<H3>Graphics</H3>

At last, something I know about.  Layer P graphics will mostly be
implemented outside the actual PenguinPlay API.  The actual
plans are a bit up in the air at the moment, but that hasn't
stopped us from coding, so we are happy.
<p>
The system should be based on libggi-dynamic; see the
<A Href="http://synergy.caltech.edu/~ggi">GGI Project</A>.  This library works
on many, many different platforms, including some non-Unix ones.
It provides ways of getting a framebuffer, as well as some basic
drawing primitives.
<p>
This could be augmented by Mgl.  Mgl is a graphics library from SciTech
Software (PUT LINK IN HERE).  It is currently not available under Linux,
but a port is planned.  SciTech seems to want to make a GGI based
Mgl.  This suits us just fine.  I am afraid I don't actually know
much about Mgl, but everyone who does seems to like it.
<p>
There are some licensing problems.  Mgl is freely
available, including the source; however, we are not quite happy with
the license.  SciTech might change the license, in which case
we would be most happy to incorporate Mgl into PenguinPlay and
probably even do the port.
<p>
As for 3d graphics, we think OpenGL is the way to go.  There is already
work to make <A Href="http://www.ssec.wisc.edu/~brianp/Mesa.html"> Mesa </A>
work under GGI.  We plan to piggyback on this.
<p>
Having said all this, I have to say that this is all for the future.
At present Layer P graphics is a small set of graphics primitives
I wrote myself which draw into a linear framebuffer.  On top of this
we are developing Layer O graphics.  When the time comes we should
be able to switch over to something else with a minimum of fuss.
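<p>
To make ``drawing into a linear framebuffer'' a little more concrete,
here is a minimal sketch of the idea (an illustration only, not the
actual PenguinPlay primitives): an 8 bit per pixel framebuffer is just a
flat array of bytes, and the pixel at (x, y) lives at offset
y * pitch + x.
<PRE>
#include &lt;stddef.h&gt;

/* Illustration only, not the real PenguinPlay code. */
struct Framebuffer {
    unsigned char *pixels;   /* start of the buffer            */
    size_t         pitch;    /* bytes from one row to the next */
    int            width, height;
};

static void PutPixel8(struct Framebuffer *fb, int x, int y,
                      unsigned char colour)
{
    if (x &lt; 0 || y &lt; 0 || x >= fb->width || y >= fb->height)
        return;                                   /* trivial clipping */
    fb->pixels[(size_t)y * fb->pitch + x] = colour;
}
</PRE>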


<H3>I/O</H3>

<H4>The Filesystem</H4>
The core idea of layer P I/O is a ``filesystem''.  The filesystem gives
transparent access to things like WAD files.  By WAD file I mean
some kind of archive which gives us high speed file access; this may or
may not be the same thing as a DOOM WAD file.  The user can ask to open a
file; the file may be stored in a WAD archive, or it may just be an
ordinary Unix file, and the user doesn't care.  PenguinPlay looks up the
file: if it turns out to be a Unix file, then it just opens the
file.  If it turns out to be part of a WAD archive, then it reads the
WAD file.  I suppose WAD files would look like directories.

<p>
I believe Layer P I/O should be implemented by using custom
streams.  This way the user opens the file with something
like
<PRE>
        FILE* ppOpenFile(char* filename);
</PRE>
<p>
If the file is an ordinary Unix file, then this ends up being a front
end to fopen.  If not, then a custom WAD file reading stream is returned.
The user can then read and write to it using fprintf and friends.
Here my ignorance comes back again: we might need some other mechanism
for opening files as well if we are to gain any real advantage from having
WAD files.  Is this true?
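<p>
For example, reading a high score file could then look like the sketch
below.  Note that ppOpenFile() is only a proposal and does not exist yet;
the error handling is minimal and purely illustrative.
<PRE>
#include &lt;stdio.h&gt;

/* Proposed (not yet existing) layer P call. */
extern FILE *ppOpenFile(const char *filename);

void ReadHighScore(void)
{
    /* "scores.dat" may be a plain Unix file or live inside a WAD
     * archive; the caller cannot tell and does not care.         */
    FILE *f = ppOpenFile("scores.dat");
    if (f == NULL)
        return;

    int score = 0;
    if (fscanf(f, "%d", &score) == 1)
        printf("high score so far: %d\n", score);

    fclose(f);    /* or perhaps a matching ppCloseFile() */
}
</PRE>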


<p>
I think this is the way to go for several reasons.
<OL>
 <LI> It is familiar to C programmers, there is no need to learn a new
      interface.
 <LI> We get formatted I/O for free via printf, scanf and friends.
 <LI> These streams can easily be passed on to 3rd party libraries
      which don't know anything about WAD files.
</OL>


<H4> File Formats</H4>

There is also the matter of various file formats: jpeg, ppm, mpeg, wav,
au etc.  PenguinPlay should provide support for reading these files,
perhaps via 3rd party libraries like libjpeg.  Strictly speaking this is
really a matter for the individual parts of the GSDK; jpeg & ppm belong
to graphics, wav & au to sound etc.  However I thought I would mention
them here.  Whatever support each team has for a particular file
format, it should be aware of the above ``filesystem'' and use it
to open files.
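<p>
Because the filesystem hands out ordinary FILE* streams, passing them on
to a 3rd party decoder is trivial.  As a sketch (again, ppOpenFile() is
only a proposal), this is roughly how a jpeg header could be read through
the filesystem using the IJG libjpeg library:
<PRE>
#include &lt;stdio.h&gt;
#include &lt;jpeglib.h&gt;                    /* IJG libjpeg */

extern FILE *ppOpenFile(const char *filename);   /* proposed layer P call */

/* Whether the image sits in a WAD archive or on disk makes no
 * difference here; libjpeg just sees a stream.                  */
int ReadJpegSize(const char *name, int *width, int *height)
{
    FILE *f = ppOpenFile(name);
    if (f == NULL)
        return -1;

    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);

    jpeg_stdio_src(&cinfo, f);           /* libjpeg reads from our FILE*  */
    jpeg_read_header(&cinfo, TRUE);

    *width  = cinfo.image_width;
    *height = cinfo.image_height;

    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return 0;
}
</PRE>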



<H4> Miscellaneous</H4>
I suspect that some kind of interaction with the network API will find
its way into the I/O.  We'll keep you posted.



<H3> Event Manager </H3>

The event manager was a bit controversial when I first proposed it.  I
think there is now agreement that it should exist, but I am not sure.
So this part of the API may disappear or change drastically.
<P>

Because of the real time nature of many games, they will be doing
lots of asynchronous I/O, interrupt handling and the like.  The
event manager endeavors to provide a robust way of doing all this.
<P>
The idea is that an event can be raised in one of several ways.
The user may have defined one or more handlers for that event;
PenguinPlay will try to call all of those handlers.
<P>
The kinds of events thought of so far are:
<UL>
        <LI>System signals, i.e. a front end to sigaction.
        <LI>Timeouts & metronome ticks.
        <LI>I/O becoming available.
        <LI>User (that is, programmer) defined events.
        <LI>Thread entry (see below).
</UL>
<P>

System signals are a front end to the signal() and sigaction() system
calls.  There is however some extended functionality.  A standard Unix
signal handler has to be signal safe.  With the event manager, however,
you can mark an event handler as signal unsafe; in this case, when the
signal is raised, the event manager will not call the event handler
immediately.  Rather, it will defer it until the earliest safe
opportunity.
<P>
PenguinPlay should provide a way of setting timeouts and metronomes.
These would be a front end to the system timers.  When a timeout
expires, then an event is raised.
<P>
The user can set an event handler to wake up when a particular I/O event
occurs on a particular fd.  An I/O event happens when a call to select()
terminates, or when a SIGIO or similar signal is received.  When this
happens PenguinPlay will determine what kind of event has taken place
(read available, write available or exception), as well as which fd it
occurred on.  This is then wrapped up in an event and sent along to an
event handler.  This is much cleaner and easier than fiddling around
with SIGIO and select() yourself.
<P>
User events are raised when the user calls RaiseEvent() or some such
function.  The user should be able to choose whether RaiseEvent()
transfers control straight to the event handlers, or just queues
the event for handling at the next opportunity and then transfers control
back to the caller.  PenguinPlay itself should use user events to
inform you when something happens, e.g. the Input system could
inform you when a keystroke takes place.
<P>
The EM (Event Manager) will provide something like the following
functions.
<p>
<UL>
        <LI><EM>FlushEvents()</EM>  Handle all those events which
                                    have been raised but deferred.
        <LI><EM>WaitForEvents()</EM> Perhaps a front end to select().
                                    It will wait till something happens
                                    and then exit.
        <LI><EM>DoEventLoop()</EM>  Loop between FlushEvents and WaitForEvents
                                    to implement a sort of event loop.
</UL>
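<p>
To show how the pieces are meant to fit together, here is a hypothetical
scrap of a game using the EM.  Every name in it is invented for
illustration; none of this API exists yet.
<PRE>
/* All names invented for illustration -- the EM API is not designed yet. */

typedef struct ppEvent ppEvent;                /* opaque event record     */
typedef void (*ppEventHandler)(ppEvent *ev);   /* handler signature       */

/* Hypothetical registration calls. */
extern void ppSetMetronome(int milliseconds, ppEventHandler h);
extern void ppWatchFd(int fd, int what, ppEventHandler h);
extern void DoEventLoop(void);

static void OnTick(ppEvent *ev)    { /* advance the game world one step */ }
static void OnNetData(ppEvent *ev) { /* read whatever just arrived      */ }

void GameMain(int server_fd)
{
    ppSetMetronome(20, OnTick);          /* metronome event every 20 ms   */
    ppWatchFd(server_fd, 0, OnNetData);  /* wake up when I/O is available */

    /* WaitForEvents() + FlushEvents() until the game decides to quit. */
    DoEventLoop();
}
</PRE>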


<H3>Threads</H3>

Thread handling will probably be part of the EM.
Conceptually this is what is going on.
<p>
A thread exists to execute some task.  In this context a
task is to handle an event.  So if an event is raised, PenguinPlay
might elect to create a new thread to handle it in, rather than handling
it in the current one.  This has the advantage that the program can take
advantage of threads implicitly, without even having to know whether
threads are available on a given system or not.
<p>
If the user wants to create a new thread, they actually raise
a special kind of event; the entry point into the thread
is that event's handler.
<p>
This may seem like a strange way of doing things, but it allows
the next trick, which I find really cool.  When a thread has
finished executing its task, instead of dying immediately, it
hangs around for maybe a second waiting to see if a task becomes
available for it, i.e. waiting to see if an event is raised.
If one is raised then the thread executes the task; if not,
the thread just dies.
<p>
What this means, when combined with implicit thread creation, is
that when there are many concurrent tasks to be done, there can be
lots of threads at work.  When things settle down a bit and there is not
as much work, the threads start dying off until the population
reaches an appropriate level.  The point of delaying thread
destruction is that we minimize the overhead of thread creation
and destruction.
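<p>
To make the ``linger before dying'' idea concrete, here is a rough sketch
of such a worker thread written with POSIX threads.  This is not
PenguinPlay code; the task queue, its lock and its condition variable are
assumed to belong to the event manager.
<PRE>
#include &lt;pthread.h&gt;
#include &lt;errno.h&gt;
#include &lt;time.h&gt;

/* Assumed to be provided by the event manager. */
extern pthread_mutex_t queue_lock;
extern pthread_cond_t  queue_cond;
extern void *pop_task(void);          /* NULL if no task is queued  */
extern void  run_task(void *task);    /* i.e. call an event handler */

void *worker_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&queue_lock);
    for (;;) {
        void *task;
        while ((task = pop_task()) != NULL) {
            pthread_mutex_unlock(&queue_lock);
            run_task(task);
            pthread_mutex_lock(&queue_lock);
        }

        /* Nothing to do: hang around for up to a second in case a new
         * event is raised, otherwise give up and let the thread die.  */
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 1;
        if (pthread_cond_timedwait(&queue_cond, &queue_lock, &deadline)
                == ETIMEDOUT)
            break;                    /* still idle after a second */
    }
    pthread_mutex_unlock(&queue_lock);
    return NULL;
}
</PRE>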

<P>
Of course all this might cause complications.  If threads are created or
switched implicitly to handle events, the user has no guarantee about which
thread he is running in.  This might be OK, but it could cause synchronisation
nightmares, so the user should have some control over which thread a
handler is called in.
<P>
For example, they might want to demand that a handler is always called from
the thread which raised the event.  Or they might want it to always be
called from some other particular thread, regardless of where the event was
raised from (this guarantees that only one copy of a given event handler is
being called at any one time, which is useful if the thing is not reentrant).
The EM should support all this.


<H2>Layer O</H2>

Layer O is an object oriented games library written on top of layer P.
It is not simply a C++ wrapper for layer P, instead it should provide
a clean consistent object oriented framework for game development.
Here are descriptions of some of the components planned for layer O.


<H3>Gui</H3>

This should provide a nice, friendly GUI system similar to toolkits
like Motif, Gtk or Qt, but probably less feature rich.  It will
also be better tuned to games; for example, there would probably need
to be fast response ``realtime'' event handling.
<P>
There should also be support for the sort of GUI tricks game programmers
do, like having transparent dialog boxes.  We are considering one
framework in which windows and suchlike are pixmaps and would probably
be drawn using the PenguinPlay 2d library.  This is good, but I
have some reservations about it.  We will keep you posted.


<H3>2d Graphics</H3>

Steve Baker has already published a <A Href="gsdk_graphics_api.html">
High Level 2d API</A>.  The layer O 2d API will be based on layer 2
of this API.  There will however be some changes.  Notably the
API will not be based on OpenGL.
<P>
The core feature of this API is that it supports the idea
of ``layers''.  Layers can be stacked up on top of each other
and independently scrolled.  When you draw into a layer, the
actual drawing requests are saved until later.  When it
is time to refresh the screen, the layers are
traversed from back to front, actually drawing into the
framebuffer, so we can guarantee correct draw order.

For more information see <A Href="graphics_design.html">
Graphics API Design Overview</A>.
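<p>
To illustrate the idea (this is a toy, nothing like the real API), a
layer can simply record its drawing requests and replay them when the
screen is refreshed; higher layers are replayed last and so end up on top:
<PRE>
#include &lt;cstddef&gt;
#include &lt;vector&gt;

// Toy illustration only.  PutPixel stands in for a layer P primitive.
extern void PutPixel(int x, int y, unsigned colour);

struct DrawRequest { int x, y; unsigned colour; };   // one queued request

class Layer {
public:
    Layer() : ox_(0), oy_(0) {}

    void Plot(int x, int y, unsigned colour)         // record, don't draw yet
    {
        DrawRequest r = { x, y, colour };
        requests_.push_back(r);
    }

    void Scroll(int dx, int dy) { ox_ += dx; oy_ += dy; }

    void Flush()                                     // really draw, then forget
    {
        for (std::size_t i = 0; i &lt; requests_.size(); ++i)
            PutPixel(requests_[i].x + ox_, requests_[i].y + oy_,
                     requests_[i].colour);
        requests_.clear();
    }

private:
    std::vector&lt;DrawRequest&gt; requests_;
    int ox_, oy_;
};

void Refresh(Layer *stack[], int n)                  // stack[0] is the backmost
{
    for (int i = 0; i &lt; n; ++i)
        stack[i]->Flush();                           // back to front
}
</PRE>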


<H3>3d Graphics</H3>
Nothing has really been planned yet.  Presumably the central
idea is that 3d objects would be wrapped up in C++ objects.


<H3>Control Structures</H3>
This is for things like trajectories, collision detection,
AI, inverse kinematics and the like.  Nothing has been
planned really.
<P>
I have a vague vision of an object like an ``agent'', which has
components such as a collision detector, AI and other stuff I can't
remember.  The idea is that the components could be put together to form
an agent in a mix&match sort of way.  It's all done with multiple
virtual inheritance and other C++ black magic.
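<p>
Purely as speculation about how that might look (nothing here is
designed, every name is made up):
<PRE>
// Speculative sketch only.  Capabilities are small abstract classes;
// a concrete agent mixes in whichever ones it needs.

class CollisionDetector {
public:
    virtual ~CollisionDetector() {}
    virtual bool CollidesWith(const CollisionDetector& other) const = 0;
};

class Brain {                        // some kind of AI component
public:
    virtual ~Brain() {}
    virtual void Think() = 0;
};

// A monster that can collide and can think.
class Monster : public virtual CollisionDetector, public virtual Brain {
public:
    bool CollidesWith(const CollisionDetector& other) const { return false; }
    void Think() { /* decide what to do this frame */ }
};
</PRE>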


<H3>Object Persistence</H3>

Objects should have some coherent way of saving and loading themselves
to and from a stream.  You know, all the usual stuff with Store and Load
virtual functions.  PenguinPlay should provide things like PutInt and
GetInt to avoid trouble with endianness.
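<p>
As a minimal sketch of what is meant (PutInt, GetInt and the Persistent
base class below don't exist yet; there is no error checking):
<PRE>
#include &lt;cstdio&gt;

// Always write the four bytes in the same order, least significant first,
// so a file written on a big endian machine loads fine on a little
// endian one (and vice versa).
void PutInt(std::FILE *f, unsigned long v)
{
    for (int i = 0; i &lt; 4; ++i)
        std::fputc((int)((v >> (8 * i)) & 0xff), f);
}

unsigned long GetInt(std::FILE *f)
{
    unsigned long v = 0;
    for (int i = 0; i &lt; 4; ++i)
        v |= (unsigned long)std::fgetc(f) &lt;&lt; (8 * i);
    return v;
}

// Objects then implement a pair of virtual functions built on such helpers.
class Persistent {
public:
    virtual ~Persistent() {}
    virtual void Store(std::FILE *f) const = 0;
    virtual void Load(std::FILE *f) = 0;
};
</PRE>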


</BODY>

</HTML>





Christian Reiniger (Germany)
e-mail: warewolf@chris2.mayn.de