Re: [tor-bugs] #9874 [BridgeDB]: Research/design a way to automate testing of BridgeDB's HTTPS and email distributors
#9874: Research/design a way to automate testing of BridgeDB's HTTPS and email
distributors
-----------------------------+---------------------------------------------
     Reporter:  isis         |      Owner:  isis
         Type:  enhancement  |     Status:  needs_review
     Priority:  normal       |  Milestone:
    Component:  BridgeDB     |    Version:
   Resolution:               |   Keywords:  bridgedb-unittests, automation,
                             |              ci, bridgedb-gsoc-application
Actual Points:               |  Parent ID:
       Points:               |
-----------------------------+---------------------------------------------
Comment (by trygve):
Replying to [comment:19 isis]:
> Replying to [comment:18 trygve]:
> > Thank you for the feedback. I've attached a patch to fix the failing
> > test. I'm new to git (used to cvs, svn, and hg), so apologies if I've
> > done this wrong.
>
> Nope, it's cool. `git checkout -b YOURBRANCH develop` and then linking
> to a publicly available remote is generally the best way to go, mostly
> because straight patch files obviously lack commit metadata. I ''think''
> there is a way to use `git-hg` and `hg-git` to work on a git repo in
> Mercurial, but I'm not super familiar with Mercurial.
The writing is on the wall for `hg`, so I'm happy to learn `git`. I'm
probably going to make a lot of rookie mistakes, so please bear with me.
I've used `git format-patch` to create a patch with the commit metadata; I
hope that will be okay. All changes were made from your `fix/9874-email`
branch.
> > * I had to perform a few more steps to get everything to work
> >   (`leekspin -n 100` and `scripts/make-ssl-cert`). This was all
> >   documented in the excellent README, but would it be useful to create
> >   a little script that sets up the test environment automatically?
> >   You've added something that does this for Travis CI, but I've been
> >   working from the shell, entering commands manually.
>
> We could move the `before_install` section in `.travis.yml` to some
> script, but then we'd need to tell people to run a script which runs
> more scripts before testing. The nice thing about having the commands
> separate in Travis is that you can see which one failed, versus a whole
> setup script failing. I'm not really sure which one is better. Unless
> you have more ideas?
I don't think I know enough about Travis to answer that intelligently ;)
>
> It occurs to me that your tests might fail on random machines, like if
> port 2525 were already in use, or if they have a weird firewall
> configuration. Perhaps we should only expect them to run in special
> environments?
If port 2525 is already in use, the tests should fail with an 'Address
already in use' (or similar) exception, which should be easy to debug. Not
so much if iptables is doing `-j DROP`. I'm not sure how to mitigate that,
other than by adding some comments to the README and to the tests
themselves.
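We could at least turn a port collision into an explicit skip with
something like this (just a sketch: `assertPortAvailable` is a
hypothetical helper, and where it gets called from is an open question):

{{{
#!python
# Sketch: skip, rather than crash, when the SMTP test port is taken.
# assertPortAvailable() is a hypothetical helper, not existing code.
# (Racy, since the port could be grabbed after the check, but it gives
# a much clearer error than a traceback mid-test.)
import socket
from unittest import SkipTest

SMTP_TEST_PORT = 2525

def assertPortAvailable(port=SMTP_TEST_PORT):
    """Raise SkipTest if ``port`` is already bound on localhost."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))
    except socket.error as error:
        raise SkipTest("Port %d unavailable (%s); skipping." % (port, error))
    finally:
        sock.close()
}}}

A `-j DROP` rule would still just look like a hang, though, so for that
case README comments are probably the best we can do.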
>
> We could do `export TESTING_BRIDGEDB=1` in the test environment setup
> script and in Travis, and then check for that in your tests, rather than
> checking for `os.environ.get("TRAVIS")` and only running on Travis. That
> way, your tests would only run if the test environment setup script had
> been run, or on CI machines, but would skip if someone tried to run all
> the tests on a random machine.
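That seems cleaner than keying off `TRAVIS` alone. I imagine the check
would look something like this (a sketch; the class name is a placeholder
for however `test_smtp.py` is actually structured):

{{{
#!python
# Sketch: gate the integration tests on the TESTING_BRIDGEDB variable
# exported by the setup script and .travis.yml. The class name is a
# placeholder, not the real test layout.
import os
from unittest import SkipTest, TestCase

class SMTPDistributorTests(TestCase):
    def setUp(self):
        if not os.environ.get("TESTING_BRIDGEDB"):
            raise SkipTest("TESTING_BRIDGEDB not set; skipping.")
}}}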
>
> Or we could just add the `sed` commands to the `README` and wish
> everyone the best of luck getting this crazy program to run. :/
>
I guess what we're trying to avoid is unexplained, hard-to-debug failures
when people run the unit tests. If you agree, I can submit a patch which
does the following:
* add a few comments to the end of the README
* update `test_smtp.py` and `test_https.py` to raise `SkipTest` if the
`TESTING_BRIDGEDB` (or `TRAVIS`?) environment variable is not set
* find a way to make `test_https.py` behave if `mechanize` is not
installed (it's in `.test.requirements.txt`, but not `requirements.txt`).
Raising `SkipTest` isn't possible at the moment because the import occurs
at the top of the file, before any tests have been run. I can probably
defer the `import` until the test `setUp()` method and raise `SkipTest` if
importing fails (see the sketch after this list)
* update `test_smtp.py` and `test_https.py` to raise `SkipTest` if
bridgedb is not running (as `test_bridgedb.py` does)
* update `.test.requirements.txt` to include `ipaddr` (the one hosted on
googlecode.com)
* add a `setup_test_environment.sh` (or whatever) bash script which sets
up the test environment, i.e. uses `sed` to modify `bridgedb.conf`,
installs `.test.requirements.txt`, runs `leekspin`, generates SSL certs,
and sets the required environment variable. Actually, I'm not sure about
that last bit: the script can set the variable in its own environment, but
it can't affect the parent's environment, so as soon as the script returns
the variable will be lost (unless people `source` the script instead of
executing it). How would you expect these tests to actually be executed?
When `bridgedb test` is run, or by some other method, e.g. directly from
the shell? Adding a new command-line parameter to bridgedb? Adding a
second `run_integration_tests.sh` script? I guess I'm missing the bigger
picture as to how other people may use this (if at all). I don't want to
suggest making invasive or unnecessary changes unless you think there's a
good reason (or no alternative).
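
For the `mechanize` and is-bridgedb-running bullets, I'm imagining
something along these lines (a sketch only: the class name and the
distributor port are placeholders, and I'd have to check how the existing
tests are structured):

{{{
#!python
# Sketch for test_https.py: defer the mechanize import to setUp() so a
# missing module (or a bridgedb that isn't running) becomes a skip rather
# than an ImportError at collection time. Names and the port are
# placeholders, not the actual test layout.
import socket
from unittest import SkipTest, TestCase

HTTPS_DIST_PORT = 6789  # placeholder; whatever bridgedb.conf says

class HTTPSDistributorTests(TestCase):
    def setUp(self):
        try:
            import mechanize
        except ImportError:
            raise SkipTest("mechanize not installed; skipping HTTPS tests.")
        self.browser = mechanize.Browser()

        # Like test_bridgedb.py: skip if nothing is listening.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(1.0)
        try:
            sock.connect(("127.0.0.1", HTTPS_DIST_PORT))
        except socket.error:
            raise SkipTest("BridgeDB isn't running; skipping.")
        finally:
            sock.close()
}}}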
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/9874#comment:20>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online