
Re: [pygame] colorspace and other vision functions, where do they belong?



On Thu, Jul 10, 2008 at 11:59 AM, René Dudfield <renesd@xxxxxxxxx> wrote:
> On Thu, Jul 10, 2008 at 12:42 PM, Nirav Patel <olpc@xxxxxxxxxxxxxx> wrote:
>
> ah, ok.  It's a fairly complicated function signature that can do a
> number of different things.
>
> yeah, if you could add it back, that would be good.  There are tests
> there for that function... I hope they cover everything, but not sure.
>  I'll have to have a look later.
>
> Are you talking about this part?
> "Or it can be used to just count the number of pixels within the
> threshold if you set change_return to False. "
>
> That's also useful for things like your average_color thing.  So you
> can see how much of the surface is close to red for example -- without
> writing to the destination surface.
>
> Yeah, some better documentation of it would be nice too :)
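
Something like this is what I have in mind for the counting use (assuming
the documented argument order; the file name is just a placeholder):

import pygame

surf = pygame.image.load("frame.png")   # placeholder input image
dest = surf.copy()                       # required, but untouched when change_return is False

# Count pixels within 40 per channel of pure red, without writing to dest.
num_red = pygame.transform.threshold(dest, surf, (255, 0, 0),
                                     (40, 40, 40, 0), (0, 0, 0, 0), False)
print num_red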

According to the documentation, the function is as follows:
pygame.transform.threshold(DestSurface, Surface, color, threshold = (0,0,0,0),
    diff_color = (0,0,0,0), change_return = True): return num_threshold_pixels

However, the arguments it actually parses are:
if (!PyArg_ParseTuple (arg, "O!O!O|OOiO!", &PySurface_Type, &surfobj,
                       &PySurface_Type, &surfobj2,
                       &rgba_obj_color, &rgba_obj_threshold,
                       &rgba_obj_diff_color,
                       &change_return,
                       &PySurface_Type, &surfobj3))

This would suggest that the function should be:
pygame.transform.threshold(DestSurface, Surface, color, threshold = (0,0,0,0),
    diff_color = (0,0,0,0), change_return = True, Surface = None):
    return num_threshold_pixels

As the function was written, when given the optional third surface it
would check against the colors in that surface rather than the "color"
argument.  As far as I can tell, this behavior isn't tested or
documented anywhere; it just exists in the code.  It does seem like a
useful thing to have, though, so I'll write it back into my version and
document it.
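
If I'm reading the parse format right, the third-surface form would be
used something like this (argument order per the ParseTuple string above;
the file names are placeholders, and "color" is presumably ignored when
the extra Surface is given):

import pygame

current = pygame.image.load("frame_a.png")    # placeholder frames to compare
previous = pygame.image.load("frame_b.png")
dest = current.copy()

# Count how many pixels of "current" are within 10 per channel of the
# corresponding pixel in "previous", instead of comparing to a fixed color.
num_similar = pygame.transform.threshold(dest, current, (0, 0, 0),
                                         (10, 10, 10, 0), (0, 0, 0, 0),
                                         False, previous)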

> I think there are a couple of ways to do it...
>
> 1. mock objects/null drivers.  So instead of using real hardware, you
> can write code which pretends to be the hardware.  I don't think
> you have to do it at the v4l level, but maybe that can give you some
> ideas.
>
> This can be made easier by overriding methods.  eg, subclass your
> object, and replace the get_raw function with a mock one.
>
>
> 2. functional tests... assume the hardware is present, and figure out
> some automated tests you can do.
>
> Also you can unittest just parts that are easy to test.  eg, the
> colorspace conversion stuff.
>
> 3. manual inspection... Nicholas is going to work on a test framework
> for this... but up until now we just use example programs and look at
> them.  These are useful when there's no other possible way to test
> things... but obviously not as good as automated tests.
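
For the mock/override idea in point 1, I'm picturing something roughly
like this (assuming the module ends up exposing a Camera class with a
get_raw method; the names and the canned YUYV frame are placeholders):

import pygame.camera

class MockCamera(pygame.camera.Camera):
    def __init__(self, size=(320, 240)):
        # Deliberately skip the real __init__ so no /dev/video* device is opened.
        self.size = size

    def get_raw(self):
        # Return a flat grey YUYV frame: 2 bytes per pixel.
        w, h = self.size
        return '\x80' * (w * h * 2)

cam = MockCamera()
assert len(cam.get_raw()) == 320 * 240 * 2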

There is a mock driver, vivi, that I've been testing with.  It can be
used for most of the testing, but not all of it, since it only supports
YUYV.  It therefore can't be used to test RGB24, RGB444, YUV420, or
SBGGR8, nor some parts of the colorspace conversion from those pixel
formats to RGB, YUV, and HSV.

There is also the trouble that verifying it works correctly depends on
human inspection for things like: do the colors look right, is there
tearing, is part of the image cut off, and so on.  I can do automated
testing on a lot of it, though, like making sure the sizes of the
Surfaces returned are what I expect.
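
For example, a size check could look roughly like this (assuming the
camera object grows start/stop/get_image methods; the device path and
requested size are placeholders):

import unittest
import pygame
import pygame.camera

class CameraSizeTest(unittest.TestCase):
    def test_surface_size(self):
        pygame.init()
        pygame.camera.init()
        # Placeholder device path and size.
        cam = pygame.camera.Camera("/dev/video0", (320, 240))
        cam.start()
        surf = cam.get_image()
        self.assertEqual(surf.get_size(), (320, 240))
        cam.stop()

if __name__ == '__main__':
    unittest.main()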