
Re: OpenGL: a viable game-programming API?



Joel Stanley wrote:

> This is pretty much the heart of the matter as far as I'm concerned. If
> the tools exist to do what you want to do, how far must one go to
> truly understand what _is_ going on underneath the hood? I mean, the whole
> point of OOP and data abstraction/method hiding/etc is that I can take
> this neat group of tools, learn how to use them, and not even *care* what
> they're doing with the hardware.

Abstraction is nice. But this whole idea of "program to the abstraction
layer and forget about everything else" is IMHO dangerous. In the world
of Win32, we're seeing fewer and fewer programmers and more and more
Visual Basic junkies whose only purpose is slapping together chunks of
Microsoft-authored code. Microsoft itself is really hurting for
programmers as they only accept people who really know their Win32 and
those are getting rarer and rarer.

I have a nice book, "The Black Art of 3D Game Programming" that takes
you through matrices and transformations and Z-buffering and all that
stuff. It was written for DOS programmers using fixed-point math, but it
translates quite nicely to Linux/svgalib with floating point (gotta love
those high-speed FPUs in the Pentia :)). I don't think you'll find such
a book around any more. It's all "Teach yourself Direct3D in 21 days" or
somesuch. This knowledge has been locked away and confined to academic
courses by Microsoft's you-don't-need-to-know-that-we'll-take-care-of-it
approach.
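
Just to make that concrete: the fixed-point arithmetic that book is
built around boils down to something like this (a 16.16 sketch of the
general idea, not code from the book):

    /* 16.16 fixed-point: high 16 bits integer, low 16 bits fraction. */
    typedef long fixed;

    #define INT_TO_FIX(i)   ((fixed)(i) << 16)
    #define FIX_TO_INT(f)   ((int)((f) >> 16))
    #define FLOAT_TO_FIX(x) ((fixed)((x) * 65536.0))

    /* Multiplies need a wider intermediate so the fraction bits
       don't overflow. */
    static fixed fix_mul(fixed a, fixed b)
    {
        return (fixed)(((long long)a * b) >> 16);
    }

On a Pentium you just use float and let the FPU do the work, which is
exactly the translation I mean.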

Specialization is for insects. :) And if you want to be a game
programmer, it's good that you acquire at least a little knowledge in
this sort of thing. You never know when you'll find something that just
doesn't work the way you want and you'll want to improve it. For example:
Initially the GAMES sprite library, Sprite32/X, was written using
pixmaps and XCopyArea. Well, that was all well and good except it tended
to be slow in certain cases under XFree86. And I really wanted to do cool
things like scaling and alpha-blending in the future. Sooooo.....
without damaging the original code (it's just disabled for now) I wrote
my own blitter and used XShm to put the final composite image on the
screen. It does clipping and everything. Then I rewrote the central
portion in x86 ASM. Two or three years ago, when I started this, I never
imagined doing that; I thought I'd just use the blitting functions
inside X (and before that, DirectX under 'Doze) as a crutch and rely
upon them to do the "dirty work". As it turns out, hacking up a blitter
function really isn't too bad. And I learned a lot in the process. I'm
thinking of retrofitting the library onto the fbcon driver :)
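
For the curious, the heart of a clipped blit is nothing scary. Here's a
simplified sketch of the idea, not the actual Sprite32/X code (32-bpp,
opaque copy, no alpha yet):

    #include <string.h>

    /* Blit src into a backbuffer at (x, y), clipping to the buffer.
       The finished composite then goes to the screen via XShmPutImage(). */
    void blit(unsigned long *dst, int dst_w, int dst_h,
              const unsigned long *src, int src_w, int src_h,
              int x, int y)
    {
        int sx = 0, sy = 0, w = src_w, h = src_h, row;

        if (x < 0) { sx = -x; w += x; x = 0; }
        if (y < 0) { sy = -y; h += y; y = 0; }
        if (x + w > dst_w) w = dst_w - x;
        if (y + h > dst_h) h = dst_h - y;
        if (w <= 0 || h <= 0) return;          /* nothing visible */

        /* The per-row copy is the part worth rewriting in x86 ASM. */
        for (row = 0; row < h; row++)
            memcpy(dst + (y + row) * (long)dst_w + x,
                   src + (sy + row) * (long)src_w + sx,
                   w * sizeof(unsigned long));
    }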

OpenGL is a real work-saver when it comes to using 3D. And it allows you
to work on top of a wide variety of hardware and software. (Imagine your
game running on an SGI 3D Visual Workstation! *drool*) But if you know
how, for example, Mesa's software renderer works, then you can improve
upon it. Believe you me, it really does need improving. If you can't
improve on something, you're stuck with its limitations. And we game
hackers think limitations are evil. :)

> Games are _almost_ getting to the point where some form of hardware
> acceleration is expected, even if that hardware acceleration is only in
> the form of a $50 3dfx (Voodoo 1 chipset) card...so why not take advantage
> of that? In a way, the days of extremely low-level optimization (the demo
> scene, early 3D games, etc) are close to being over...but does learning
> those optimization techniques still have merit? If one learns all of the
> low-level fancy schmancy algorithms for all of the drawing stuff, and
> learns how to do them _well_, that knowledge could certainly apply to
> pushing the limitations of the current (accelerated) hardware...thoughts?
> :)

I think so. For example, learning the basic matrix transforms allows you
to optimize them before feeding them to your Voodoo card. :)
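
The classic trick is concatenating your transforms once per object
instead of running every vertex through each matrix separately.
Roughly (just a sketch):

    /* r = a * b for 4x4 matrices.  Build one combined object-to-view
       matrix with this, then transform each vertex only once. */
    void mat4_mul(float r[4][4], const float a[4][4], const float b[4][4])
    {
        int i, j, k;
        for (i = 0; i < 4; i++)
            for (j = 0; j < 4; j++) {
                r[i][j] = 0.0f;
                for (k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
            }
    }

That saves one full matrix-times-vector pass over every vertex, every
frame.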

> Where's the line drawn between understanding an algorithm and implementing
> it? I mean, I understand the line-drawing algos I've seen and the
> circle/ellipse drawing algos I've seen, but I'd have to have a lot of
> overhead (setting display modes, getting used to SVGALib, etc, etc) if I
> were to actually implement -- more stuff I'm not familiar with...the
> question is do I _need_ to be familiar with it? >:)

It's a good idea to be familiar with it. Let's say just for good bull
you decide to write a tiny little 3D engine that draws all your models
in wireframe real fast, so that you have some idea of what they'd look
like before putting them through OpenGL. Bresenham's algorithm will
certainly come in handy here. (Bresenham is the basic one; I've heard
there are faster line-drawing algos.)
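
For reference, the integer-only version fits in a dozen lines. A
sketch, assuming you supply your own putpixel():

    #include <stdlib.h>

    /* Bresenham line draw, all octants, integers only.
       putpixel() is whatever your wireframe engine plots with. */
    void draw_line(int x0, int y0, int x1, int y1)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy, e2;

        for (;;) {
            putpixel(x0, y0);
            if (x0 == x1 && y0 == y1) break;
            e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }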

> Exactly...but when trying to learn this stuff is it important to
> understand the inner workings? Or could I simply screw around in OpenGL
> and pick up on low-level stuff if and when I needed it later? ;) I have
> yet to become an optimization freak except when cutting O(n^2)+ algorithms
> into logarithmic or linear runtime if and when I can.  Quite frankly, on
> modern processors and hardware, are all of the optimization techniques
> available (like the volumes of optimization tips written in Abrash's Black Book)
> even worth anything? If I were to write a Quake clone with accelerator
> hardware, would I even need all of the optimizations? :)

Depends on the level of detail you want in your game.

> On a side note...how is OpenGL as a 2D API? If I wanted to make a
> nostalgic platform romp...:)

I think it'll be OK on top of a 3D accel. :) Mesa's just too slow in
software mode.
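
If you do go that route, 2D on top of OpenGL is basically an
orthographic projection plus textured quads. Something like this
(OpenGL 1.x immediate mode; screen_w/screen_h, sprite_tex, and the
x/y/w/h of the sprite are whatever your setup provides):

    /* Set up pixel coordinates, y pointing down like a 2D framebuffer. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, screen_w, screen_h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* Each sprite is just a textured quad. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, sprite_tex);
    glBegin(GL_QUADS);
      glTexCoord2f(0, 0); glVertex2f(x,     y);
      glTexCoord2f(1, 0); glVertex2f(x + w, y);
      glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
      glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();

The accelerator scales and alpha-blends those quads for free, which is
the very stuff I was talking about wanting in the blitter earlier.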

-- 
----------------------------------------------------------------------
Jeff Read <bitwize@geocities.com>
Unix Code Artist, Anime Fan, Really Cool Guy