[Distutils] test command for setup.py

Greg Ward gward@python.net
Thu Sep 21 21:58:01 2000

On 21 September 2000, Berthold Höllmann said:
> I think it is easier for the developer to have code and tests
> together.

Not if the tests are really exhaustive!  When I write good tests (or at
least what I think are good tests), the test script is roughly the size
of the module being tested.  I don't want my "if __name__ == '__main__'"
block to be as big as the rest of the module.  I think the "test"
command should support "in-situ" testing, but it's *not* enough for
serious tests.
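(To be concrete about what I mean by a separate test script: something
like the sketch below, where "spam" and its functions are names I'm
making up purely for illustration.

    # test_spam.py -- a hypothetical free-standing test script, kept out
    # of the module's own "if __name__ == '__main__'" block
    import spam

    # exhaustive checks live here, so the module itself stays uncluttered
    assert spam.add(2, 2) == 4
    assert spam.add(-1, 1) == 0

A script like that can grow as large as it needs to without bloating the
module.)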

> One can leave out the tests from any distribution by using a
> Manifest file.

Huh?  That loses half the point of writing tests, which is to assure
people building and installing your modules that they probably work as
advertised on their platform.
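(For the record, I assume what Berthold means is an sdist manifest
template along these lines -- syntax from memory, so treat it as a
sketch:

    # MANIFEST.in -- drop a test/ subdirectory from the source distribution
    prune test

...which works, but then the people unpacking your tarball have no tests
to run.)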

> But for the developer it is helpful to have a set of
> tests that one can run after changes have been made to the sources to
> ensure that "little" changes have no unexpected side effects.

Yes!  That's the other half of the reason for writing tests.

(You can define "half" here as 40-60, 20-80, whatever.)

> That's not always an option. I'm writing modules using numerical
> operations. These require test results to be looked over very
> carefully to decide whether the differences are significant or not.

Oooh, yes, in a previous life I struggled (briefly) with automated
testing of numerical algorithms.  Not fun.  But asking end-users to
examine the test output and make sure that the difference between
1.0234324 and 1.0234821 is less than the machine epsilon is surely even
less fun.
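(The automated alternative is to bake the tolerance into the test
itself.  A rough sketch -- the tolerance values here are numbers I
pulled out of the air for illustration:

    def assert_close(got, expected, tol=1.0e-4):
        # fail loudly if two floats differ by more than a relative tolerance
        if abs(got - expected) > tol * max(abs(got), abs(expected)):
            raise AssertionError("%s != %s (tol=%g)" % (got, expected, tol))

    assert_close(1.0234324, 1.0234821, tol=1.0e-3)   # passes: difference is ~5e-5
    assert_close(1.0234324, 1.0234821, tol=1.0e-6)   # raises AssertionError

Then nobody has to eyeball the digits.)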

> Also the underlying Fortran code makes different decisions on
> different platforms. That makes automatic testing nearly impossible,
> but I want at least a consistent way to generate the needed output,
> and I want to test everything before I install it.

Here's my extreme position:

  If a test isn't fully automated -- i.e., if it takes more than epsilon
  human intelligence to determine if the test passed or failed -- then
  it's not a proper test, it's just a demo.

Granted there are real-world situations that cannot meet this tough
criterion -- testing a curses module, say, or some (many? most?)
numerical algorithms.  However, it is a worthy goal that, with some
work, can often be achieved.  Witness the number of Perl modules on CPAN
for which you can run "make test" and immediately be assured that
everything Just Works on the current platform and Perl interpreter.

Thus, fully automated testing is to be encouraged!  I like the "ok, ok,
not ok, ok" method because it makes it immediately obvious what's going
on, it's fairly easy (if tedious) to write such tests with no external
support, and it shouldn't be too hard to write some sort of test harness
framework around it.  The framework would have two purposes:
  * make it easy for developers to write tests that output
    "ok, ok, not ok, ok"
  * support the Distutils "test" command that analyses that output
    and ensures that exactly the correct number of "ok" lines, and
    no "not ok" lines, were seen

> All this can be done with my posted, extremely lightweight test.py
> extension for the _command_ subdirectory. The test scripts can be written
> to implement/use any test framework you like. Maybe a

OK, lightweight is good.  (Something I occasionally need to be reminded
of...)  I'll definitely take a look at it.

Greg Ward                                      gward@python.net