Brian Fisher wrote:
> I like this idea. Simon Wittber's suggestion of using the accumulation
> buffer also sounds promising.
>
> FYI, glReadPixels tends to be very slow (fastest is moving memory
> around on the video card, second is moving memory around in system
> memory, slowest is moving memory to and from system memory and the
> card), so if the scenes you are building to be transitioned are
> dynamic, changing every frame, then you would be much better off
> keeping the renders on the card by pulling the rendered scenes into a
> texture with glCopyTexImage2D, rather than using a pygame surface as
> an intermediate layer. (With a pygame surface, you'd need to stuff its
> data into a texture before rendering anyway, so you save two steps in
> that case.)
>
> Check this tutorial for a blur effect done with render to texture
> (done in C++, but the GL calls should be easy to follow):
> http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=36

I'm leaning towards rendering each scene to the back buffer and then copying the result to a texture. I can then render the textures (via textured quads) to the screen any way I want: the screen image could rotate into the next screen, the camera could zoom into the first screen and zoom out of the next, or I could simply do a nice fade-in/out as I originally described.

I'm guessing that the accumulation buffer method would be faster, but using textures should give me more flexibility, right? I don't think speed will be a critical issue, since this technique will only be used for screen transitions.

Thanks for the tips, guys (and keep them coming, if anyone else has any other ideas). This is my first post to the PyGame mailing list, and it's been a lot more productive than GameDev.net so far (http://www.gamedev.net/community/forums/topic.asp?topic_id=385559).