
Re: [pygame] Re: Accessing opengl surface as a numpy array



On Fri, Nov 12, 2010 at 1:25 PM, nbl <nblouveton@xxxxxxxxx> wrote:
> In fact, I'm working on software that applies some processing to a
> "visual" stream.
>
> I don't need all of each frame's pixels, as long as frames are
> spatially sampled first.

Not sure what you mean exactly here.  If you're doing mostly software operations, try to keep your data in software (i.e., only use OpenGL for rendering the output, or don't use it at all).  If you're doing heavy graphics work, do it on the GPU.  If you're really stuck, do the work on the GPU with a shader (although that might be overkill for your purposes; I still don't understand them entirely).
> I probably shouldn't read the whole buffer, but only update relevant
> cells in a pre-allocated array.
>
> Is it correct to use glReadPixels to read one pixel at a time, like
> this:
>
> glReadPixels(x, y, 1, 1, ..., ...)
>
> or is there a cleaner method?

Reading pixels one at a time will be slow, even if the memory were already on the client side.  You want to do something like:

#data will be an 800x600x3 array--exactly like a surfarray, except
#changes will not affect the displayed content
data = glReadPixels(0, 0, 800, 600, GL_RGB, GL_UNSIGNED_BYTE)

It's possible for a couple of these to happen each frame while maintaining 60Hz.  However, in graphics programming, it's generally a bad idea to try excessive amounts of data transfer to and from the graphics card.

> Thanks again!

Ian