"Greg" == Greg Ward writes:
Greg> On 18 September 2000, Berthold Höllmann said:
>> I'm thinking about providing a framework for pre-install
>> testing for distutils enabled Python packages. I think about a
>> new "test" command, which inserts the lib build path into
>> PYTHONPATH and runs specified tests. Is anyone else working on
>> something like this, and what are your opinions and
>> requirements on this?

Greg> I've been thinking about it from Day 1, personally. Here are
Greg> some of my opinions/requirements:

Greg> * serious tests don't belong in the module being tested,
Greg> because test code tends to be at least as long as the code
Greg> being tested

I think it is easier for the developer to have code and tests
together. One can leave the tests out of any distribution by using a
MANIFEST file. But for the developer it is helpful to have a set of
tests that one can run after changes have been made to the sources, to
ensure that "little" changes do not have unexpected side effects.

Greg> ...
Greg> * the requirements on what a test script may output should
Greg> be fairly strict. My favourite is loosely based on Perl's
Greg> testing convention, which is a great success largely because
Greg> the standard-way-to-build-Perl-modules includes a "make
Greg> test" step, and it's always immediately clear if everything
Greg> passed. My proposed convention is this: each test script
Greg> outputs a line saying how many tests are expected, then a
Greg> series of lines that start with either "ok" or "not ok". If
Greg> there are any "not ok" lines, the test script fails; if the
Greg> number of "ok" lines != the number of expected tests, the
Greg> test script fails. Each "ok" or "not ok" may be followed by
Greg> ":" and an arbitrary string, which makes it a lot easier to
Greg> track down problems. (The Perl convention is "ok 1", "ok 2",
Greg> ... "ok N", which makes it really awkward to track down
Greg> problems.) Any lines not starting with "ok" or "not ok" are
Greg> ignored.
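The convention quoted above is simple enough that a checker for it fits in a few lines. Here is a rough sketch; note that Greg's proposal does not pin down the format of the count line, so the assumption that it is a plain integer on the first line (and the function name `check_test_output`) is mine:

```python
# Hypothetical checker for the "ok" / "not ok" convention described
# above.  Assumption: the first output line is a plain integer giving
# the number of expected tests; that exact format is my guess, not
# part of the proposal.

def check_test_output(output):
    """Return (passed, message) for one test script's captured output."""
    lines = output.splitlines()
    if not lines:
        return False, "no output"
    try:
        expected = int(lines[0].strip())
    except ValueError:
        return False, "first line is not a test count"
    ok = not_ok = 0
    for line in lines[1:]:
        # "not ok" must be checked first, since it also starts with "ok"
        # reversed; order of the two prefix tests matters.
        if line.startswith("not ok"):
            not_ok = not_ok + 1
        elif line.startswith("ok"):
            ok = ok + 1
        # any other line is ignored, as the convention requires
    if not_ok:
        return False, "%d test(s) failed" % not_ok
    if ok != expected:
        return False, "expected %d tests, saw %d" % (expected, ok)
    return True, "all %d tests passed" % expected
```

A driver implementing "setup.py test" could run each script, capture its stdout, and call this on the result; the build fails if any script fails.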
That is not always an option. I am writing modules that use numerical
operations. These require the test results to be looked over very
carefully to decide whether the differences are significant or not.
Also, the underlying Fortran code makes different decisions on
different platforms. That makes automatic testing nearly impossible,
but I want at least a consistent way to generate the needed output,
and I want to test everything before I install it.

Greg> ...
Greg> * we might also want to accommodate a Python-style test
Greg> suite, where you compare "known good" output of a script
Greg> with the current output. I'm not a fan of this methodology,
Greg> but sometimes it's easier, and it may have some currency in
Greg> the Python community outside of Lib/test.

It should be the developer's choice what style of test suite he wants
to write. All this can be done with my posted, extremely lightweight
test.py extension for the _command_ subdirectory. The test scripts can
be written to implement/use any test framework you like. Maybe a
python setup.py test
command is only a convenience command, but it gives you a starting
point when maintaining code you wrote ages ago, or that somebody else
wrote.

Greetings

Berthold
-- 
bhoel@starship.python.net / http://starship.python.net/crew/bhoel/
It is unlawful to use this email address for unsolicited ads
(USC Title 47 Sec. 227). I will assess a US$500 charge for reviewing
and deleting each unsolicited ad.
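P.S. For illustration, the core of such a "test" command, independent of any particular test framework, could look something like the sketch below. The directory names `build/lib` and `test`, and running scripts via `exec`, are my assumptions; the real distutils command machinery (a `distutils.cmd.Command` subclass wired into setup.py) is deliberately left out:

```python
# Hypothetical core of a "setup.py test" command: put the build
# library directory at the front of sys.path, then run every test
# script found in a test directory against the freshly built (not yet
# installed) package.  Names build/lib and test/ are assumptions.

import glob
import os
import sys

def run_tests(build_lib="build/lib", test_dir="test"):
    """Execute test_dir/*.py with build_lib first on sys.path."""
    sys.path.insert(0, os.path.abspath(build_lib))
    try:
        for script in sorted(glob.glob(os.path.join(test_dir, "*.py"))):
            print("running %s" % script)
            f = open(script)
            try:
                code = compile(f.read(), script, "exec")
            finally:
                f.close()
            # run the script as if it were invoked as a program
            exec(code, {"__name__": "__main__", "__file__": script})
    finally:
        sys.path.pop(0)
```

Whatever convention the individual scripts follow (strict "ok"/"not ok" lines, known-good output comparison, or manual inspection of numerical results) is then entirely the developer's choice.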