unit testing and Python regression test

There was recently some idle chatter in Guido's living room about using a unit testing framework (like PyUnit) for the Python regression test suite. We're also writing tests for some DC projects, and need to decide what framework to use. Does anyone have opinions on test frameworks? A quick web search turned up PyUnit (pyunit.sourceforge.net) and a script by Tres Seaver that implements xUnit-style unit tests. Are there other tools we should consider? Is anyone else interested in migrating the current test suite to a new framework? I hope the new framework will allow us to improve the test suite in a number of ways:
- run an entire test suite to completion instead of stopping on the first failure
- clearer reporting of what went wrong
- better support for conditional tests, e.g. a test for httplib that only runs if the network is up. This is tied into better error reporting, since the current test suite can only report that httplib succeeded or failed.
Jeremy
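For illustration, a conditional httplib test of the kind Jeremy describes might look roughly like this in a PyUnit-style framework. This is only a sketch: the `network_available` probe is a hypothetical helper, and the module names follow today's stdlib spelling.

```python
import socket
import unittest

def network_available(host="www.python.org", port=80, timeout=2):
    # Crude reachability probe (hypothetical helper, not part of any framework).
    try:
        conn = socket.create_connection((host, port), timeout=timeout)
        conn.close()
        return True
    except OSError:
        return False

class HttplibTest(unittest.TestCase):
    # Skip, rather than fail, when the network is down, so the report can
    # distinguish "could not run" from "ran and failed".
    @unittest.skipUnless(network_available(), "network is unreachable")
    def test_head_request(self):
        import http.client
        conn = http.client.HTTPConnection("www.python.org", timeout=5)
        try:
            conn.request("HEAD", "/")
            self.assertLess(conn.getresponse().status, 500)
        finally:
            conn.close()
```

A skipped test shows up in the run summary instead of silently passing or spuriously failing, which is exactly the extra information the current suite cannot report.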

On Fri, Dec 01, 2000 at 02:27:14PM -0500, Jeremy Hylton wrote:
Someone remembered my post of 23 Nov, I see... The only other test framework I know of is the unittest.py inside Quixote, written because we thought PyUnit was kind of clunky. Greg Ward, who primarily wrote it, used more sneaky interpreter tricks to make the interface more natural, though it still worked with Jython last time we checked (some time ago, though). No GUI, but it can optionally show the code coverage of a test suite, too. See http://x63.deja.com/=usenet/getdoc.xp?AN=683946404 for some notes on using it. Obviously I think the Quixote unittest.py is the best choice for the stdlib. --amk

Is there any documentation for the Quixote unittest tool? The Example page is helpful, but it feels like there are some details that are not explained. Jeremy

"AMK" == Andrew Kuchling <akuchlin@mems-exchange.org> writes:
AMK> On Fri, Dec 01, 2000 at 03:55:28PM -0500, Jeremy Hylton wrote:
AMK> I don't believe we've written docs at all for internal use.
AMK> What details seem to be missing?

Details:
- I assume setup/shutdown are equivalent to setUp/tearDown
- Is it possible to override the constructor for TestScenario?
- Is there something equivalent to PyUnit's self.assert_?
- What does parse_args() do?
- What does run_scenarios() do?
- If I have multiple scenarios, how do I get them to run?
Jeremy

On Fri, Dec 01, 2000 at 04:21:27PM -0500, Jeremy Hylton wrote:
- I assume setup/shutdown are equivalent to setUp/tearDown
Correct.
- Is it possible to override constructor for TestScenario?
Beats me; I see no reason why you couldn't, though.
- Is there something equivalent to PyUnit self.assert_
Probably test_bool(), I guess: self.test_bool('self.run.is_draft()') asserts that self.run.is_draft() will return true. Or does self.assert_() do something more?
These 3 questions are all related, really. At the bottom of our test scripts, we have the following stereotyped code:

if __name__ == "__main__":
    (scenarios, options) = parse_args()
    run_scenarios(scenarios, options)

parse_args() ensures consistent arguments to test scripts; -c measures code coverage, -v is verbose, etc. It also looks in the __main__ module and finds all subclasses of TestScenario, so you can do:

python test_process_run.py                              # Runs all N scenarios
python test_process_run.py ProcessRunTest               # Runs all cases for 1 scenario
python test_process_run.py ProcessRunTest:check_access  # Runs one test case in one scenario class

--amk
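The discovery step amk describes (scanning __main__ for TestScenario subclasses) could be sketched like this. The TestScenario base class here is a stand-in, not Quixote's actual implementation:

```python
class TestScenario:
    """Stand-in for Quixote's base class, just for this sketch."""

def find_scenarios(namespace):
    # Collect every TestScenario subclass in a module's namespace,
    # mirroring what parse_args() does with __main__.
    return [obj for obj in namespace.values()
            if isinstance(obj, type)
            and issubclass(obj, TestScenario)
            and obj is not TestScenario]

class ProcessRunTest(TestScenario):
    def check_access(self):
        pass  # a real test case would go here
```

Calling find_scenarios(globals()) from a test script would pick up ProcessRunTest automatically, which is what lets the command line default to "run everything".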

"FL" == Fredrik Lundh <fredrik@pythonware.com> writes:
FL> andrew kuchling wrote:
FL> the pythonware teams agree -- we've been using an internal
FL> reimplementation of Kent Beck's original Smalltalk work, but
FL> we're switching to unittest.py.

Can you provide any specifics about what you like about unittest.py (perhaps as opposed to PyUnit)? Jeremy

Prodded by Jeremy, I went and actually wrote some documentation for the Quixote unittest.py; please see <URL:http://www.amk.ca/python/unittest.html>. The HTML is from a manually hacked Library Reference, so ignore the broken image links and other formatting goofiness. In case anyone needs it, the LaTeX is in /files/python/. The plain text version comes out to around 290 lines; I can post it to this list if that's desired. --amk

Hi all, Andrew Kuchling:
After reading Andrew's docs, I think Quixote basically offers three additional features compared with Tim Peters' doctest:
1. integration of Skip Montanaro's code coverage analysis.
2. the idea of Scenario objects, useful to share the setup needed to test related functions or methods of a class (same start condition).
3. some useful functions to check whether the result returned by a test fulfills certain properties, without having to be as explicit as a cut-and-paste from an interactive interpreter session would have been.

As I've pointed out before in private mail to Jeremy, I've used Tim Peters' doctest.py to accomplish all testing of Python apps in our company. In doctest each doc string is an independent unit, which starts fresh. Sometimes this leads to duplicated setup stuff, which is needed to test each method of a set of related methods from a class. This is distracting if you intend the test cases to play their double role of also being useful documentation examples for the intended use of the provided API.

Tim_one: do you read this? What do you think about the idea of adding something like the following two functions to doctest:

use_module_scenario() -- imports all objects created and preserved during execution of the module doc string examples.

use_class_scenario() -- imports all objects created and preserved during the execution of doc string examples of a class. Only allowed in doc string examples of methods.

This would make it easy to provide the same setup scenario to a group of related test cases. As far as I understand, doctest handles test shutdown automatically, iff the doc string test examples leave no persistent resources behind.

Regards, Peter
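Peter's complaint about duplicated setup can be made concrete with a small doctest sketch (the functions are made up for the example). Because every docstring starts fresh, each one has to rebuild the same fixture:

```python
def mean(values):
    """Return the arithmetic mean.

    >>> data = [1, 2, 3]          # setup, repeated in every docstring
    >>> mean(data)
    2.0
    """
    return sum(values) / len(values)

def spread(values):
    """Return max minus min.

    >>> data = [1, 2, 3]          # the very same setup again
    >>> spread(data)
    2
    """
    return max(values) - min(values)

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # replays every docstring example in this module
```

The proposed use_module_scenario()/use_class_scenario() helpers would let the second docstring inherit the first one's objects instead of repeating the setup lines.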

Hi, everyone. A potential customer has asked whether there are any plans to re-organize and rationalize the Python standard library. If there are any firm plans, and a schedule (however tentative), I'd be grateful for a pointer. Thanks, Greg

Alas, none that I know of except the ineffable Python 3000 schedule. :-) --Guido van Rossum (home page: http://www.python.org/~guido/)

[Jeremy Hylton]
My own doctest is loved by people other than just me <wink>, but is aimed at ensuring that examples in docstrings work exactly as shown (which is why it starts with "doc" instead of "test").
Is anyone else interested in migrating the current test suite to a new framework?
Yes.
doctest does that.
- clearer reporting of what went wrong
Ditto.
A doctest test is simply an interactive Python session pasted into a docstring (or more than one session, and/or interspersed with prose). If you can write an example in the interactive shell, doctest will verify it still works as advertised. This allows for embedding unit tests into the docs for each function, method and class. Nothing about them "looks like" an artificial test tacked on: the examples in the docs *are* the test cases. I need to try the other frameworks. I dare say doctest is ideal for computational functions, where the intended input->output relationship can be clearly explicated via examples. It's useless for GUIs. Usefulness varies accordingly between those extremes (doctest is natural exactly to the extent that a captured interactive session is helpful for documentation purposes). testing-ain't-easy-ly y'rs - tim
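As a concrete illustration of what Tim describes, the docstring below is literally a captured interactive session, and doctest replays it verbatim (the function itself is invented for the example):

```python
def is_draft(status):
    """Report whether a status string marks a draft run.

    >>> is_draft("draft")
    True
    >>> is_draft("final")
    False
    """
    return status == "draft"

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

If the implementation ever stops matching the documented session, the doc example itself fails, so the docs and the tests cannot drift apart.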

andrew kuchling wrote:
the pythonware teams agree -- we've been using an internal reimplementation of Kent Beck's original Smalltalk work, but we're switching to unittest.py.
Obviously I think the Quixote unittest.py is the best choice for the stdlib.
+1 from here. </F>

"FL" == Fredrik Lundh <fredrik@pythonware.com> writes:
FL> andrew kuchling wrote:
FL> the pythonware teams agree -- we've been using an internal FL> reimplementation of Kent Beck's original Smalltalk work, but FL> we're switching to unittest.py. Can you provide any specifics about what you like about unittest.py (perhaps as opposed to PyUnit)? Jeremy

Prodded by Jeremy, I went and actually wrote some documentation for the Quixote unittest.py; please see <URL:http://www.amk.ca/python/unittest.html>. The HTML is from a manually hacked Library Reference, so ignore the broken image links and other formatting goofyness. In case anyone needs it, the LaTeX is in /files/python/. The plain text version comes out to around 290 lines; I can post it to this list if that's desired. --amk

Hi all, Andrew Kuchling:
After reading Andrews docs, I think Quixote basically offers three additional features if compared with Tim Peters 'doctest': 1. integration of Skip Montanaro's code coverage analysis. 2. the idea of Scenario objects useful to share the setup needed to test related functions or methods of a class (same start condition). 3. Some useful functions to check whether the result returned by some test fullfills certain properties without having to be so explicite, as cut'n'paste from the interactive interpreter session would have been. As I've pointed out before in private mail to Jeremy I've used Tim Peters 'doctest.py' to accomplish all testing of Python apps in our company. In doctest each doc string is an independent unit, which starts fresh. Sometimes this leads to duplicated setup stuff, which is needed to test each method of a set of related methods from a class. This is distracting, if you intend the test cases to take their double role of being at same time useful documentational examples for the intended use of the provided API. Tim_one: Do you read this? What do you think about the idea to add something like the following two functions to 'doctest': use_module_scenario() -- imports all objects created and preserved during execution of the module doc string examples. use_class_scenario() -- imports all objects created and preserved during the execution of doc string examples of a class. Only allowed in doc string examples of methods. This would allow easily to provide the same setup scenario to a group of related test cases. AFAI understand doctest handles test-shutdown automatically, iff the doc string test examples leave no persistent resources behind. Regards, Peter

Hi, everyone. A potential customer has asked whether there are any plans to re-organize and rationalize the Python standard library. If there are any firms plans, and a schedule (however tentative), I'd be grateful for a pointer. Thanks, Greg

Alas, none that I know of except the ineffable Python 3000 schedule. :-) --Guido van Rossum (home page: http://www.python.org/~guido/)

[Jeremy Hylton]
My own doctest is loved by people other than just me <wink>, but is aimed at ensuring that examples in docstrings work exactly as shown (which is why it starts with "doc" instead of "test").
Is anyone else interested in migrating the current test suite to a new framework?
Yes.
doctest does that.
- clearer reporting of what went wrong
Ditto.
A doctest test is simply an interactive Python session pasted into a docstring (or more than one session, and/or interspersed with prose). If you can write an example in the interactive shell, doctest will verify it still works as advertised. This allows for embedding unit tests into the docs for each function, method and class. Nothing about them "looks like" an artificial test tacked on: the examples in the docs *are* the test cases. I need to try the other frameworks. I dare say doctest is ideal for computational functions, where the intended input->output relationship can be clearly explicated via examples. It's useless for GUIs. Usefulness varies accordingly between those extremes (doctest is natural exactly to the extent that a captured interactive session is helpful for documentation purposes). testing-ain't-easy-ly y'rs - tim
participants (8)
- Andrew Kuchling
- Fred L. Drake, Jr.
- Fredrik Lundh
- Greg Wilson
- Guido van Rossum
- Jeremy Hylton
- pf@artcom-gmbh.de
- Tim Peters