
Re: Tools



Mithras wrote:

> This question, the true priority of assembler for linux game hacking, is
> very important to me.  I don't yet have the experience to support an
> opinion of my own, but I have got a book by Michael Abrash on graphics
> programming.

Well, things change.

Back in the days when that book was written, people were doing 'pixel
pushing' in pure software on 486s and the like under DOS.  Back then you
could map the graphics card's frame buffer into your program's address
space and treat it just like a block of memory.  The ONLY way to get
speed for the sorts of nasty low-level operations you had to do was to
resort to machine code.
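
For the curious, that style of programming looked roughly like this -
a minimal sketch in Borland-style real-mode C, assuming VGA mode 13h
(320x200, 256 colours).  The 0xA000 segment really is the standard VGA
frame buffer; the function names are just for illustration:

    #include <dos.h>

    /* The whole screen is just 64000 bytes of ordinary memory as far
       as the program is concerned - no driver, no API, no OS in the
       way.  Every pixel you draw is a plain memory write.           */
    static unsigned char far *screen;

    void init_screen (void)
    {
        screen = (unsigned char far *) MK_FP (0xA000, 0);
    }

    void putpixel (int x, int y, unsigned char colour)
    {
        screen[y * 320 + x] = colour;
    }

Once that's your whole interface to the hardware, every cycle shaved
off the inner loop shows up directly on the screen - hence the
assembler obsession of the era.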

Now, we have hardware blitters, hardware polygon renderers, fancy
windowing systems and MUCH faster CPUs.  Taken together, this means
that (a) you can't do low-level pixel rendering directly to the
hardware yourself - and (b) you don't need to anymore.

Compare a game like 'Doom' to 'QuakeII'.

 *  Doom had to run on 486s, on standard VGA cards - and under DOS.
    Extreme dirty tricks in the inner loop of the column rendering
    code would make HUGE differences to the performance of the game.
    Assembly code gave you the degree of control you needed to get
    massive speedups.

 *  QuakeII uses OpenGL - with a hardware 3D accelerator doing all
    the per-pixel processing work.  About 80 or 90% of the CPU time
    is spent inside the OpenGL library - code which the game
    programmer has nothing whatever to do with writing.  Even the
    most ferocious optimisation of Quake's own code will make at
    most a 10% improvement to the game speed.  The way to get speed
    is to use OpenGL intelligently - to know which OpenGL operations
    are too costly to use, in order to cut down that 90% (there's a
    sketch of the idea below).  Machine code is almost irrelevant
    because it just doesn't help any more.
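
To make that concrete, here's a minimal sketch of what 'using OpenGL
intelligently' means (plain C, OpenGL 1.1 - the library calls are
real, but the 3000-vertex mesh is invented for illustration):

    #include <GL/gl.h>

    #define NVERTS 3000
    static GLfloat verts[NVERTS][3];     /* filled in elsewhere */

    /* The costly way: thousands of library calls per frame, with
       all that CPU time disappearing inside OpenGL itself.      */
    void draw_slow (void)
    {
      int i;
      glBegin (GL_TRIANGLES);
      for (i = 0; i < NVERTS; i++)
        glVertex3fv (verts[i]);
      glEnd ();
    }

    /* The smart way: describe the data once, then one call draws
       the lot.  No assembler in sight - just knowing the API's
       costs.                                                    */
    void draw_fast (void)
    {
      glEnableClientState (GL_VERTEX_ARRAY);
      glVertexPointer (3, GL_FLOAT, 0, verts);
      glDrawArrays (GL_TRIANGLES, 0, NVERTS);
      glDisableClientState (GL_VERTEX_ARRAY);
    }

The second version can be many times faster than the first - and no
amount of assembler applied to your own code would have found it.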

> From the 'Graphics Programming Black Book', p. 29:
> : Never underestimate the importance of the flexible mind.  Good assembly
> : code is better than good compiled code. 

That's certainly true.  However, you can go out and download a good
compiler and be producing good compiled code in an hour or two.  It takes
YEARS of dedicated effort to learn to be a "good" machine code programmer,
and a poor machine code programmer will be hard pressed to beat a good
compiler.  Even after you've learned, it takes maybe ten times longer to
write something in assembler than it does in (say) C or C++.  The likelihood
of accidentally introducing a bug is much higher in assembler - and since
debugging takes longer too, you'll spend a LOT of time getting your
'good' assembler to be better than a good compiler.
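
Here's a tiny example of the kind of trick a good compiler pulls that
a poor assembly programmer won't (plain C - the function is invented,
but the gcc behaviour in the comment is real):

    /* The 'obvious' hand-written assembler for this is the DIV
       instruction - one of the slowest integer operations on a
       Pentium.  gcc at -O2 instead multiplies by a precomputed
       'magic' constant and shifts, which is several times faster.
       You get that for free just by writing the obvious C.       */
    unsigned scale (unsigned x)
    {
        return x / 10u;    /* divide by a compile-time constant */
    }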

The other problem with being an expert assembly programmer is that new
CPUs appear about every year - and although they are *mostly* compatible,
all your hard-earned knowledge about instruction sequence A/B/C being
faster than D/E/F goes out of date.

I think that in a large, commercial programming house, you can afford to
have a couple of assembler gurus on the team who look for the slowest
parts of the code that their colleagues put together and clobber them
with assembler.  In a small team - or a solo effort - you are much better
off spending your time looking for better algorithms and sticking to
working in C or C++.

Those commercial assembler gurus don't have to worry about things like
game design, high-level algorithmic stuff, AI, etc, etc - so they can
dedicate their lives to learning all the nasty little tricks in the
latest Celeron or K6 variant.

Lastly, there is a 'feedback' effect here.  In the days of the early
microprocessors, CPU designers expected you to be hand-writing assembly
code - and designed the instruction set to be human-friendly.  However,
as compilers have improved and people have written less assembler by
hand, the CPU designers have begun to look at the kinds of instructions
that a compiler is most likely to generate.

The whole 'RISC' concept is founded on that premise (although 'RISC'
has never made it into PCs - it's what 'real' computers use).  Even
Pentiums are going that way.  Things like interleaved operations -
where you use the cycles that (say) a floating point operation
consumes to sneak in an extra couple of integer operations - are
quite hard for a human to optimise well, but for a compiler they're
a relatively easy optimisation.
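
A minimal C sketch of the idea - the loop is invented for
illustration, and the scheduling described in the comments is the
sort of thing a Pentium-aware compiler does automatically:

    /* The FP divide below takes dozens of cycles, but the FPU chews
       on it while the integer units sit idle.  A scheduling compiler
       moves the independent integer work (the counter update, the
       index arithmetic) into that shadow, so it costs almost nothing
       extra.  Hand-interleaving this in assembler is fiddly - and
       the right interleaving changes with every new CPU.            */
    void rescale (float *samples, int *counts, int n, float scale)
    {
        int i;
        for (i = 0; i < n; i++)
        {
            samples[i] /= scale;   /* long-latency floating point op */
            counts[i & 15]++;      /* independent integer work that
                                      can overlap with the divide    */
        }
    }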

Hence, the future is for CPUs that are harder and harder for
humans to program and more and more oriented to compiler code
generation.

So, unless you are thinking of a career in optimising other people's
code, or compiler writing, or device driver coding... concentrate on
becoming the best C/C++ coder you can be... get good at optimising
ALGORITHMS.
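
One last concrete example of optimising the algorithm rather than the
instructions (plain C; the collision test is invented for the
example).  No amount of hand-tuned assembler applied to the sqrt() in
the first version helps as much as simply not calling it:

    #include <math.h>

    /* Naive: one square root per test. */
    int collides_slow (float dx, float dy, float radius)
    {
        return sqrt (dx * dx + dy * dy) < radius;
    }

    /* Better: compare squared distances instead.  Same answer (for
       non-negative radius), no sqrt() at all - a change of approach
       that beats any amount of tuning of the old one.              */
    int collides_fast (float dx, float dy, float radius)
    {
        return dx * dx + dy * dy < radius * radius;
    }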

-- 
Steve Baker                  http://web2.airmail.net/sjbaker1
sjbaker1@airmail.net (home)  http://www.woodsoup.org/~sbaker
sjbaker@hti.com      (work)