
Re: Main character animations.



Mark Collins wrote:

> > However, if you want your game to run full-screen on a 266MHz CPU, you'll
> > probably need to avoid OpenGL.
> 
> (pet peeve)
> 
> You're right, but you shouldn't be. I mean, look at a game like Quake (the
> original non-GL version). That had no acceleration, and would have no
> problems running at a high resolution on a 266MHz CPU.
> 
> Granted, the poly count was pretty low, but the "project" which spawned this
> rant probably wouldn't need much better.
> 
> Isn't Mesa3D optimized in the slightest? If not, why not?

Well, the problem is that OpenGL is an API that has a lot of 'richness'
at the pixel level.  There are a truly insane number of per-pixel operations
that are possible.  The performance of software renderers is almost entirely
determined by the time taken to render pixels - so that's a problem.

OpenGL was *designed* for hardware rendering - software renderers have
a really hard time of it because they are continually having to deal with
a bunch of potential rendering options that very few applications actually
need.
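
To see why, picture what a fully general inner loop has to look like.
This is only a toy sketch (nothing like Mesa's actual code - the state
flags and the pixel operations are invented stand-ins, and the real
OpenGL state vector is vastly bigger):

    #include <stddef.h>

    struct gl_state { int fog, blend, dither ; } ;

    static void draw_span ( const struct gl_state *s,
                            unsigned char *dst,
                            const unsigned char *src, size_t n )
    {
      for ( size_t i = 0 ; i < n ; i++ )
      {
        unsigned c = src[i] ;

        /* Every *potential* feature costs a test on EVERY pixel,
           whether or not this application ever enables it: */

        if ( s->fog    ) c = ( 3 * c ) / 4 + 16 ;      /* toy fog    */
        if ( s->blend  ) c = ( c + dst[i] ) / 2 ;      /* toy blend  */
        if ( s->dither ) c = c ^ ( ( i & 1 ) << 1 ) ;  /* toy dither */

        dst[i] = (unsigned char) c ;
      }
    }

Multiply those tests by every pixel on the screen, sixty times a
second, and the branch overhead swamps the useful work.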

Mesa's software renderer is a faithful implementation of the OpenGL
specification - in all of its complexity.  Many of its users are NOT
writing games and don't care about realtime (e.g. scientific visualisation),
and for them utter adherence to the OpenGL spec is crucial.  Also,
SGI (who own the OpenGL name and license it) only tolerate Mesa as an
unlicensed implementation because it meets the specification and passes
the OpenGL conformance test suite.  A simplified Mesa couldn't do that
and would probably provoke rude letters from SGI's lawyers.

The Quake software renderer can take a LOT of short cuts and other
liberties that OpenGL does NOT permit.
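
For example, here's a sketch of the widely-documented Quake span trick
(not id's actual code - 'put_texel' is a hypothetical stand-in for the
texture fetch and framebuffer write): do the costly perspective divide
only once every 16 pixels and interpolate linearly in between.  Over
spans that short the error is invisible in Quake's own scenes - which
is exactly the kind of thing a general-purpose renderer isn't in a
position to assume:

    static void put_texel ( float u, float v )   /* hypothetical */
    {
      (void) u ; (void) v ;  /* texture fetch + pixel write elided */
    }

    static void tex_span ( float u_z, float v_z, float one_z,
                           float du_z, float dv_z, float done_z, int n )
    {
      float z  = 1.0f / one_z ;
      float u0 = u_z * z, v0 = v_z * z ;

      while ( n > 0 )
      {
        int i, step = ( n < 16 ) ? n : 16 ;

        /* One divide per 16 pixels instead of one per pixel: */
        u_z += du_z * step ; v_z += dv_z * step ; one_z += done_z * step ;
        z = 1.0f / one_z ;

        {
          float u1 = u_z * z, v1 = v_z * z ;
          float du = ( u1 - u0 ) / step, dv = ( v1 - v0 ) / step ;

          for ( i = 0 ; i < step ; i++ )
          {
            put_texel ( u0, v0 ) ;
            u0 += du ; v0 += dv ;
          }
          u0 = u1 ; v0 = v1 ;
        }
        n -= step ;
      }
    }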

The fastest implementation of OpenGL-in-software that I know of is
the one that SGI wrote for Windoze (and then promptly stopped
supporting).  What they did was to have their OpenGL software driver
generate machine-code on-the-fly for whatever combination of OpenGL
pixel modes you had enabled at the time.  This code was optimised
on the fly too.

That meant that if you did not have (say) dithering enabled then
it would not have to say:

    if ( dithering )
      do_this () ;
    else
      do_that () ;

...it would just generate machine code to 'do_that'.

That was clever and quite efficient (although still not as good
as Quake's renderer, which 'knew' a good deal more about the application
than a general-purpose OpenGL renderer could ever know).
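
The nearest you can get in plain portable C is to resolve the test
once, at state-change time, by picking one of several pre-written
specialised loops - here's a sketch with a single invented flag:

    #include <stddef.h>

    typedef void (*span_fn) ( unsigned char *dst,
                              const unsigned char *src, size_t n ) ;

    /* Two pre-specialised inner loops - neither tests any state: */

    static void span_plain ( unsigned char *dst,
                             const unsigned char *src, size_t n )
    {
      for ( size_t i = 0 ; i < n ; i++ )
        dst[i] = src[i] ;
    }

    static void span_dither ( unsigned char *dst,
                              const unsigned char *src, size_t n )
    {
      for ( size_t i = 0 ; i < n ; i++ )
        dst[i] = src[i] ^ ( ( i & 1 ) << 1 ) ;   /* toy dither */
    }

    /* Runs at glEnable/glDisable time, NOT per pixel: */
    static span_fn validate ( int dithering )
    {
      return dithering ? span_dither : span_plain ;
    }

SGI's driver went one better: instead of picking from a fixed menu of
pre-written loops, it emitted exactly the right instructions at runtime
for whatever combination of states was enabled.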

The Mesa team has considered going that route - but the trouble is
that Mesa is supposed to be *PORTABLE* - it runs on Alphas, MIPS,
SPARCs, 68000s, x86s, etc.

Writing a machine-code generator for each of those platforms would
be a lot of grief - and very hard to maintain unless one person
on the team owned all of those kinds of computers.

You might also suggest that one could resolve this 'too much state
testing' problem by writing code for each combination of states and
having a big table of pointers to rendering functions.  However,
there is a massive combinatorial problem here - there are hundreds
of possible states leading to MILLIONS of possible rendering functions.
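
To put numbers on it (the flag names here are invented for
illustration): with just k independent boolean states, a complete table
needs 2^k entries - and booleans are the easy part, because the
multi-valued states multiply in on top of that:

    typedef void (*span_fn) ( void ) ;  /* stand-in for a span renderer */

    enum
    {
      ST_DEPTH   = 1 << 0,
      ST_BLEND   = 1 << 1,
      ST_FOG     = 1 << 2,
      ST_DITHER  = 1 << 3,
      ST_ALPHA   = 1 << 4,
      ST_STENCIL = 1 << 5
      /* ...and OpenGL has many more, plus multi-valued states
         (blend factors, depth funcs, texture environments, ...)
         that multiply the total further. */
    } ;

    /* Twenty booleans alone already demand 2^20 = 1,048,576 distinct
       pre-written span functions - before counting the enums: */

    static span_fn span_table[ 1u << 20 ] ;

    static span_fn lookup ( unsigned state_bits )
    {
      return span_table[ state_bits & ( ( 1u << 20 ) - 1 ) ] ;
    }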

That's why the SGI implementation wrote custom machine code.

The real solution to software rendering is to invent a new, simplified
rendering API (OpenGL-- or something) that would call the OpenGL API
when hardware 3D was present - and use a much simpler software
renderer when there was no hardware present.  I think there have been
OpenSourced efforts at such things (TinyGL?  Something like that) - but
those APIs are necessarily less capable than the full OpenGL spec - and
with hardware 3D becoming so pervasive, anyone who is remotely interested
in 3D is going to put down the $100 it costs to get a GeForce-2 or
whatever and see their software run 100x faster than it would with
any kind of a software implementation.
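
To make the shape of such a library concrete, it might look roughly
like this (all names here are invented - TinyGL itself instead
implements a small subset of the OpenGL API directly):

    #include <GL/gl.h>

    typedef struct { float x, y, z, u, v ; } mgl_vertex ;

    /* Hardware path: just forward to the real OpenGL. */
    static void mgl_triangle_hw ( const mgl_vertex v[3] )
    {
      glBegin ( GL_TRIANGLES ) ;
      for ( int i = 0 ; i < 3 ; i++ )
      {
        glTexCoord2f ( v[i].u, v[i].v ) ;
        glVertex3f   ( v[i].x, v[i].y, v[i].z ) ;
      }
      glEnd () ;
    }

    /* Software path: ONE hardwired pixel pipeline - one texture
       format, no dithering, no stencil - so the inner loops stay
       branch-free.  (Rasteriser body elided from this sketch.) */
    static void mgl_triangle_sw ( const mgl_vertex v[3] )
    {
      (void) v ;
    }

    /* The application calls through one pointer, chosen at startup: */
    void (*mgl_triangle) ( const mgl_vertex v[3] ) ;

    void mgl_init ( int have_hardware )
    {
      mgl_triangle = have_hardware ? mgl_triangle_hw : mgl_triangle_sw ;
    }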

The only remaining difficulty is with laptops - with these beasts you
can't UPGRADE to a 3D graphics card - and most of them come either with
crappy accelerators that are not worth supporting with Mesa - or with
no accelerators at all.

That situation is gradually getting remedied - nVidia now produce quite
impressive 3D chipsets that are well suited to laptops (the GeForce2 Go,
for example), and I heard that ATI are doing the same with a low-power
version of the Radeon family.  Both of those should run OK under Linux.

So, this is a problem that will go away over the next year or two - and
as a consequence, it's quite hard to get developers very excited about
implementing a new API that nobody will use and which will be obsolete
in a couple of years.  For the same reason, it's hard to get Mesa
developers interested in doing anything very significant to speed it
up either.  They are making small tweaks to improve performance all the
time - but this would take a RADICAL overhaul.

----------------------------- Steve Baker -------------------------------
Mail : <sjbaker1@airmail.net>   WorkMail: <sjbaker@link.com>
URLs : http://www.sjbaker.org
       http://plib.sf.net http://tuxaqfh.sf.net http://tuxkart.sf.net
       http://prettypoly.sf.net http://freeglut.sf.net
       http://toobular.sf.net   http://lodestone.sf.net