
Re: [pygame] run_tests.py subprocess mode and Build Page extensions



hi,

I think there are various groups of tests, including:
- tests requiring different dependencies (numpy, Numeric, a CD drive,
working sound, etc.)
- tests requiring manual intervention
- long-running tests
- tests that open a window
- speed tests
- etc.


Maybe we could use a naming scheme, attributes on the methods, or
perhaps different directories?

Or perhaps different ways of tagging or querying tests, so I could
ask the test runner to only run tests matching a given query?

Perhaps --exclude= and --include= options?  For example,
--exclude=numeric,numpy would do a substring exclusion against the
file name, module name, class name, and method name.  It could also
exclude on just one criterion, e.g. --exclude=file:mixer_test.py
--exclude=method:test_make_surface  --exclude=tags:longrunning

We can already run tests individually at the module level, so I don't
think we need an --include option as much.  However, it would still be
handy for running just one test from a module, or for running only
tests with certain tags set.
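
To make the --exclude idea concrete, here is a rough sketch (nothing
like this exists in run_tests.py yet, and the helper names are made
up) of substring exclusion over tests collected with unittest:

import unittest

def iter_tests(suite):
    # Flatten a (possibly nested) TestSuite into individual test cases.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for test in iter_tests(item):
                yield test
        else:
            yield item

def exclude_matching(suite, patterns):
    # Drop any test whose id() ("module.Class.method") contains one of
    # the excluded substrings, e.g. patterns = ['numeric', 'numpy'].
    kept = [t for t in iter_tests(suite)
            if not any(p in t.id() for p in patterns)]
    return unittest.TestSuite(kept)

The file:/method:/tags: prefixes would then just narrow which of those
strings the pattern is matched against.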


For tagging tests... maybe we could do something like this?

def test_bla(self):
    print 'lala'
test_bla.tags = "incomplete,manual,longrunning,speed"

I don't know if it's worth moving the existing 'incomplete' system
into tags.  Are they easy enough to regenerate?

Also, maybe we can get by with just a naming system instead of tags?

I think tags could be quite useful... and, as on many websites, I
think we will find more uses for them as time goes on.  As you add
different groups of tests as part of your project, they will be useful
(the manual-intervention tests and the speed tests).  They could also
be used for requirements: e.g. .tags="display32bit,cdrom,moonfull"
could say that a test requires a 32-bit display, a CD-ROM drive, and a
full moon to run.
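
As a sketch of how the runner could query those tags (the .tags
attribute matches the example above; the capability set and the
function names are just my invention):

def get_tags(test):
    # Look up the bound test method via its id() and read the
    # comma-separated .tags string; default to no tags at all.
    method = getattr(test, test.id().split('.')[-1])
    tags = getattr(method, 'tags', '')
    return set(t for t in tags.split(',') if t)

def should_run(test, excluded_tags, capabilities):
    # Skip if the user excluded one of its tags, or if a tag names a
    # requirement (cdrom, display32bit, ...) this machine doesn't have.
    tags = get_tags(test)
    if tags & excluded_tags:
        return False
    requirements = set(['cdrom', 'display32bit'])
    return (tags & requirements) <= capabilities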




- Failing Numeric or numpy tests is fine if those packages are not
available.  However, we need a way for the test runner to ignore those
tests, so we know 'everything is ok' when that configuration is on
purpose.  That way we can also test the case where Numeric or numpy is
not installed.
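
One low-tech way to do that, as a sketch (it assumes quietly passing
is acceptable when the package is missing on purpose; the class and
test names are made up):

import unittest

try:
    import numpy
except ImportError:
    numpy = None

class NumpyDependentTest(unittest.TestCase):
    def test_array_shape(self):
        if numpy is None:
            return  # numpy not installed on purpose: count as ok
        self.assertEqual(numpy.zeros((2, 2)).shape, (2, 2))

Tagging such tests with .tags = "numpy" would let the runner report
them as skipped instead of silently passing.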

.........
Which leads to a different point: running the tests with different
configurations of pygame.  Different configurations include:
- optional modules not compiled in (e.g. maybe the surfarray or
imageext modules aren't built)
- using different drivers (video, sound, etc.)
- different include/library paths, and other random setup differences

So the build bot might be set up to compile pygame in various ways,
and run the tests in various ways depending on the configuration.

Maybe for the different drivers, run_tests could be modified to run
the tests with different sets of drivers?  So on Windows it could try
running the tests with the windib driver, then with the directx
driver.
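
A sketch of what that could look like (the driver list and the
run_tests.py invocation are assumptions; SDL reads SDL_VIDEODRIVER
from the environment when it initialises, so each driver needs a
fresh interpreter):

import os
import subprocess
import sys

def run_with_video_drivers(drivers=('windib', 'directx')):
    # Re-run the whole suite once per video driver, each in its own
    # subprocess, so SDL picks up the new SDL_VIDEODRIVER value.
    for driver in drivers:
        env = os.environ.copy()
        env['SDL_VIDEODRIVER'] = driver
        subprocess.call([sys.executable, 'run_tests.py'], env=env)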


.......
Supplying configuration and system information would be quite useful
for debugging, and for allowing people to submit test results.

So the buildbot could take in test results without having to run the
tests itself.  This is useful for testing timing, and also for testing
on a much wider scale.

So the default run_tests.py might ask people if they want to submit
their test results and configuration information.  Of course it should
be optional, for privacy reasons.

If the information were returned as urlencoded key=value pairs, a
database could easily be built, and we could query it for things like:
- how fast something ran on Win98, OS X 10.4.x, etc.
- whether a revision has been tested on a certain platform.
- plotting/graphing the speed of things over time.
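
For example (Python 2.x of the era; the field names and the results
dict are just placeholders), the client side could be as small as:

import platform
from urllib import urlencode

import pygame

def encode_results(results):
    # 'results' might be e.g. {'surface_test.time': '1.93', 'revision': '1234'}
    info = {
        'pygame_version': pygame.version.ver,
        'sdl_version': '%d.%d.%d' % pygame.get_sdl_version(),
        'python_version': platform.python_version(),
        'platform': platform.platform(),
    }
    info.update(results)
    return urlencode(info)

The server side then only needs to split the pairs back out into
database columns.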



... ok, enough bytes for one email.
cu,