[pygame] Python - Pygame - PyOpenGL performance
- To: pygame-users@xxxxxxxx
- Subject: [pygame] Python - Pygame - PyOpenGL performance
- From: Zack Schilling <zack.schilling@xxxxxxxxx>
- Date: Thu, 26 Feb 2009 14:04:42 -0500
- Reply-to: pygame-users@xxxxxxxx
- Sender: owner-pygame-users@xxxxxxxx
I know the PyOpenGL mailing list might be a better place to ask this
question, but I've had a lot of luck talking to the experienced people
here so I figured I'd try it first.
I'm trying to migrate a game I created from Pygame/SDL software
rendering to OpenGL. Before attempting the massive and complex
conversion of the whole game, I decided to make a little test program
while I learned OpenGL.
In this test, I set up OpenGL to work in 2D and began loading images
into texture objects and drawing textured quads as sprites. I created
a little glSprite class to handle the drawing and translation. At
first its draw routine looked like this:
glPushMatrix()
glTranslate(self.positionx, self.positiony, 0)
glBindTexture(GL_TEXTURE_2D, self.texture)
glBegin(GL_QUADS)
glTexCoord2f(0, 1); glVertex2f(0, 0)
glTexCoord2f(1, 1); glVertex2f(w, 0)
glTexCoord2f(1, 0); glVertex2f(w, h)
glTexCoord2f(0, 0); glVertex2f(0, h)
glEnd()
glPopMatrix()
Note: self.texture is the texture ID of a loaded OpenGL texture
object, and w and h are the sprite's width and height. My sprite class
keeps a dictionary cache and only loads the sprite's image into a
texture if it needs to.
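(For reference, the dictionary cache works along these lines -- a
minimal sketch with hypothetical names; load_texture stands in for
whatever Pygame/PyOpenGL loading code does the actual upload:)

```python
# Hypothetical sketch of a per-filename texture cache.
# load_texture(filename) -> texture ID is assumed to be supplied by
# the caller, so the caching logic itself stays GL-independent.
_texture_cache = {}

def get_texture(filename, load_texture):
    """Return a cached texture ID, loading the image only once."""
    if filename not in _texture_cache:
        _texture_cache[filename] = load_texture(filename)
    return _texture_cache[filename]
```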
I'd get maybe 200 identical sprites (same texture) onscreen and my CPU
would hit 100% load from Python execution. I looked into what could be
causing this and found out that it's probably function call overhead.
That's 14 external library function calls per sprite draw.
The next thing I tried was to create a display list at each sprite's
initialization. Then my code looked like this:
glPushMatrix()
glTranslate(self.positionx,self.positiony,0)
glCallList(self.displist)
glPopMatrix()
Well, that's nice, down to 4 calls per draw. I was able to push ~500
sprites per frame using this method before the CPU tapped out. I need
more speed than this. My game logic uses 30-40% of the CPU alone and
I'd like to push at least 1000 sprites. What can I do? I've looked
into passing sprites as a matrix with vertex arrays, but forming a
proper vertex array with numpy can sometimes be more trouble than it's
worth. Plus, I can't swap out textures easily mid-draw, so it makes
things much more complex than the simple way I'm doing things now.
Is there any design pattern I could follow that will get me more speed
without sending me off the deep end with complexity?
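(For what it's worth, the numpy side of the vertex-array idea doesn't
have to be painful. A sketch, with function name and array layout of
my own choosing: pack every sprite that shares one texture into flat
vertex/texcoord arrays, then bind the texture once and issue a single
glDrawArrays(GL_QUADS, 0, 4 * n) via
glVertexPointer/glTexCoordPointer. Grouping sprites by texture this
way collapses thousands of per-sprite calls into a handful per
texture.)

```python
import numpy as np

def build_quad_arrays(sprites, w, h):
    """Pack one textured quad per (x, y) sprite position into flat
    float32 arrays suitable for glVertexPointer/glTexCoordPointer.
    All sprites in the list are assumed to share one texture and one
    w x h size."""
    n = len(sprites)
    # corner offsets in the same winding as the glBegin/glEnd code above
    corners = np.array([(0, 0), (w, 0), (w, h), (0, h)], dtype=np.float32)
    verts = np.empty((n, 4, 2), dtype=np.float32)
    for i, (x, y) in enumerate(sprites):
        verts[i] = corners + (x, y)      # translate quad to sprite position
    # identical texcoords for every quad, tiled n times
    texcoords = np.tile(
        np.array([(0, 1), (1, 1), (1, 0), (0, 0)], dtype=np.float32),
        (n, 1))
    return verts.reshape(-1, 2), texcoords
```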
Thanks,
Zack