[Author Prev][Author Next][Thread Prev][Thread Next][Author Index][Thread Index]
Pure software rendering or hardware acceleration?
- To: email@example.com
- Subject: Pure software rendering or hardware acceleration?
- From: Gregor Mückl <GregorMueckl@gmx.de>
- Date: Wed, 18 Aug 2004 20:04:25 +0200
- Delivered-to: firstname.lastname@example.org
- Delivered-to: mailing list email@example.com
- Delivery-date: Wed, 18 Aug 2004 14:06:45 -0400
- Mailing-list: contact firstname.lastname@example.org; run by ezmlm
- Reply-to: email@example.com
- User-agent: KMail/1.7
I'm currently in a pretty deep mess. I'll need to explain some things before I
can get to the real problem.
I wanted to make a Myst 3-style pre-rendered adventure game, using rendered
panoramas. When I began developing the method, things started out fine: I can
render a static scene and project the rendered images as textures onto a cube
to give the illusion of a panorama. I decided to do the in-game rendering
with OpenGL. It worked out nicely except for two problems:
First, the textures each need a resolution of at least 1024x1024 pixels if
this is to look even partly convincing. But that is a lot of data.
Second, I found out that updating those textures using glTexSubImage2D() is
insanely slow (40ms minimum), making it impossible to play animations fast
enough on those textures. With this discovery the foundations of the game
engine are breaking away. I can never play back an animation at a reasonable
speed on one of those textures. If I need to play back two animations
synchronously, it will lag hopelessly.
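To put rough numbers on the two problems above, here is a back-of-the-envelope
sketch. It assumes 32-bit RGBA texels, which the post does not state; the 40ms
figure is the measured minimum quoted above.

```python
# Rough numbers for the panorama cube described above.
# Assumption (not stated in the post): 32-bit RGBA8 texels.

FACE_RES = 1024          # texture resolution per cube face, in pixels
BYTES_PER_TEXEL = 4      # RGBA8 (assumed)
FACES = 6                # one texture per cube face

bytes_per_face = FACE_RES * FACE_RES * BYTES_PER_TEXEL
total_bytes = bytes_per_face * FACES

update_ms = 40.0         # quoted minimum cost of one glTexSubImage2D() update
max_updates_per_sec = 1000.0 / update_ms

print(f"Per face: {bytes_per_face / 2**20:.1f} MiB")          # 4.0 MiB
print(f"Whole cube: {total_bytes / 2**20:.1f} MiB")           # 24.0 MiB
print(f"Max full-face updates per second: {max_updates_per_sec:.0f}")  # 25
```

So even updating a single face tops out at about 25 frames per second, with no
time budget left for anything else, which matches the complaint that two
simultaneous animations are hopeless.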
Now, is there a trick to optimize OpenGL texture updates? Or should I turn
around and rewrite the whole rendering code to run entirely in software?