
Re: [pygame] Proposed Module: Camera



Oh, I misunderstood what you were going for.  Suggestion 3 is just a
function like Camera.get_raw() that would return the raw camera buffer
as a string.  It would only add about a dozen lines of code, and it
would indeed be useful for integration with other libraries.
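
Usage would look something like this (a rough sketch; the names aren't
final, and some_encoder is just a stand-in for whatever library the
data gets handed to):

    import pygame.camera
    pygame.camera.init()
    cam = pygame.camera.Camera("/dev/video0", (640, 480))
    cam.start()
    raw = cam.get_raw()       # raw frame as a string, in whatever
                              # pixelformat the camera negotiated
    some_encoder.feed(raw)    # hand it off, e.g. to a codec that
                              # accepts YUV data directly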

Going to HSV is still a necessity for doing any kind of color-based
object tracking.  It seems the solution is just to store it as a 24-bit
Surface.  Sure, it'll look messed up if you try to display it, but
doing it any other way would involve messing with serious parts of
pygame and maybe even SDL.  It's not really a possibility.  I guess
it's not a big deal for the user to have to know what the colorspace
is, because it's pretty clear that when you're asking for a surface in
HSV, you're getting a surface in HSV.  Probably not too much of a
burden.
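
For the tracking itself, the user would just read the channels back as
H, S, and V.  Rough sketch (slow, and the names are hypothetical, but
it shows the idea):

    hsv = cam.get_image()     # hypothetical: a Surface whose R,G,B
                              # channels actually hold H,S,V
    width, height = hsv.get_size()
    hits = []
    for x in range(width):
        for y in range(height):
            hue, sat, val, _alpha = hsv.get_at((x, y))
            if 100 < hue < 140 and sat > 80:   # some target hue band,
                hits.append((x, y))            # tuned per object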

To me, the question now is whether it's better to have the conversion
be part of the initial capture step or a separate function afterwards.
It's, as usual, performance vs. size and ease of maintenance.
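
To make that concrete, the two shapes would look roughly like this
(both are hypothetical sketches, just for comparison):

    # colorspace chosen at capture time:
    cam = Camera("/dev/video0", (640, 480), "HSV")
    surf = cam.get_image()                  # already HSV

    # colorspace conversion as a separate call afterwards:
    cam = Camera("/dev/video0", (640, 480))
    surf = convert_colorspace(cam.get_image(), "HSV")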

Nirav

On Wed, Jun 18, 2008 at 2:37 AM, Michael <zathras@xxxxxxxxxxxxx> wrote:
> On Monday 16 June 2008 01:42:37 Nirav Patel wrote:
>> Both YUV and HSV would be very useful for vision, but I don't think
>> there's a clean solution to it.
>
> You could choose to simply pass back in a Buffer[1] what you managed to
> get from the camera without conversion as well. As long as you made it clear
> what the type of the data was inside that buffer (i.e. whether you got YUV,
> RGB, etc.), then it would be incredibly useful.
>
>   [1] As in one of these: http://docs.python.org/api/bufferObjects.html -
>        which IIRC maps back to this:
>        >>> buffer
>        <type 'buffer'>
>
>> It just doesn't feel right to store
>> it as an RGB surface and leave the user to track what the actual
>> colorspace is.
>
> Indeed that would be an awful situation IMO... Just to clarify - I wasn't
> suggesting that! :)
>
> Having non-RGB surfaces available could be useful of course - but non-RGB
> surfaces stored in surfaces that think they're RGB strikes me as a bad bad
> thing.
>
>> The other issue is that there would still be
>> conversion involved.  Both YUYV and YUV420 would still need some
>> computation to turn it into 24bit packed YUV.
>
> It depends actually. The codec I'm expecting to throw it back out to is Dirac,
> which can handle a variety of input formats (including YUYV and YUV420
> since they're common formats).
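>
> For reference, the YUYV -> packed YUV repack being talked about is
> roughly this (a sketch only, assuming the frame arrives as a bytes
> object):
>
>     def yuyv_to_packed_yuv(data):
>         # YUYV stores Y0 U Y1 V per pair of pixels; packed 24-bit YUV
>         # wants Y U V per pixel, so the shared chroma is duplicated.
>         out = []
>         for i in range(0, len(data), 4):
>             y0, u, y1, v = data[i:i+4]
>             out.extend((y0, u, v, y1, u, v))
>         return bytes(out)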
>
> Essentially, I'm hoping I can wrap your camera code and use it as a
> replacement for this code:
>  * http://edit.kamaelia.org/Components/pydoc/Kamaelia.Codec.RawYUVFramer.html
>
> For working with live video for certain cameras. Doing a conversion to RGB and
> back to YUV is less than optimal, especially if the camera can support YUV420
> out and the codec can also accept that.
>
>> Would there then also
>> be support to output YUV from an RGB camera, or would an error be
>> thrown? I could add support to go from * to RGB, YUV, HSV, or
>> Greyscale,
>
> Personally, I would suggest this:
>   1. Having conversions from * to RGB, YUV, HSV and greyscale would be
>      useful.
>   2. Having * to RGB & YUV is more critical. (YUV420 being sufficient for
>      many tasks after all)
>   3. Failing that, being able to get access to the raw data that you
>      capture, tagged with its type, would enable anyone using your code
>      to use it as a source for doing 1 & 2 themselves (rough sketch
>      below).
>
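> Rough sketch of the sort of thing I mean for 3 (all the names here are
> purely hypothetical):
>
>     fmt, size, data = cam.get_raw()   # e.g. ("YUYV", (640, 480), <bytes>)
>     if fmt in formats_the_encoder_accepts:
>         encoder.feed(data)            # use the raw frame as-is
>     else:
>         encoder.feed(convert_to_yuv420(data, fmt, size))  # else convert
>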
>> but would that be making the module too big for inclusion
>> in pygame?  The .so is above 50kb as is.
>
> I've no idea. :-) My personal opinion (I am not the world :) is that:
>    * doing "3" would be a huge benefit for very little effort.
>    * doing "2" & "3" would be a much bigger benefit for a bit more work, and
>       just mirrors a conversion you already make available.
>    * However I'm less convinced of the benefit of the jump from "2" & "3"
>       to "1", "2" and "3".
>
>> There are other libraries (pygstreamer for example) that are much
>> better for encoding to video, but you're right that I do need to come
>> up with something for image processing.
>
> Well, I maintain the python bindings for the Dirac video codec, which is where
> I'm coming from with this request.
>
> Also, it would be really neat to be able to build a simple video link tool by
> doing this:
> Server:
>   Pipeline( VideoCaptureSource(format=raw),
>                 EnsureYUVFrames(),
>                 DiracEncoder(),
>                 SingleServer(port=1600)).run()
>
> Client:
>   Pipeline( TCPClient("server.com", 1600),
>                 DiracDecoder(),
>                 MessageRateLimit(15,15),
>                 VideoOverlay()).run()
>
> Currently all the above Kamaelia components exist, sans the YUV
> output/enforcer...
>
> (The VideoOverlay() component at the end is pygame based as well, so simply
> being able to get hold of the raw YUV data (if available), and only converting
> if it isn't available, would be really really useful.)
>
>> The image quality improvements are probably the result of your webcam
>> supporting one of the pixelformats added recently.  Cameras do strange
>> things with different pixelformats.
>
> Cool.
>
> Anyway, whatever you decide, this is really neat work and lots of fun to play
> with :-)
>
>
> Michael.
>