Tim Peters' doctest compared to Quixote unit testing (was Re: [Python-Dev] Quixote unit testing docs)
Mon, 4 Dec 2000 18:59:54 +0100 (MET)
> ... I ... actually wrote some documentation for
> the Quixote unittest.py; please see
> comes out to around 290 lines; I can post it to this list if that's
After reading Andrew's docs, I think Quixote basically offers
three additional features compared with Tim Peters' 'doctest':
1. Integration of Skip Montanaro's code coverage analysis.
2. The idea of Scenario objects, useful to share the setup needed to
test related functions or methods of a class (same start condition).
3. Some useful functions to check whether the result returned
by a test fulfills certain properties, without having to be
as explicit as a cut'n'paste from an interactive interpreter
session would have been.
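To make point 3 concrete, here is a minimal sketch of such a property
check (my own illustration; the name 'check' and its signature are
assumptions, not Quixote's actual API): instead of pasting the exact
interpreter output, you assert only the properties you care about.

```python
def check(result, predicate, description):
    # Hypothetical helper: fail with a readable message unless the
    # given property holds for the result under test.
    if not predicate(result):
        raise AssertionError("%r is not %s" % (result, description))

# Exact-output style would have to match the repr character for
# character; a property check only cares that, say, the result is
# sorted and has the right length.
data = sorted([3, 1, 2])
check(data, lambda xs: xs == sorted(xs), "sorted")
check(data, lambda xs: len(xs) == 3, "of length 3")
```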
As I've pointed out before in private mail to Jeremy, I've used Tim
Peters' 'doctest.py' to accomplish all testing of Python apps in our
company. In doctest each doc string is an independent unit, which
starts fresh. Sometimes this leads to duplicated setup code, which is
needed to test each method of a set of related methods of a class.
This is distracting if you intend the test cases to play their double
role of serving, at the same time, as useful documentation examples
for the intended use of the provided API.
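A small example of the duplicated setup this produces (my own
illustration, not taken from the Quixote docs or from doctest itself):
both method doc strings must rebuild the same object before they can
demonstrate anything.

```python
import doctest

class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        """
        >>> a = Account(100)   # setup repeated here ...
        >>> a.deposit(50)
        >>> a.balance
        150
        """
        self.balance = self.balance + amount

    def withdraw(self, amount):
        """
        >>> a = Account(100)   # ... and repeated again here
        >>> a.withdraw(30)
        >>> a.balance
        70
        """
        self.balance = self.balance - amount

# Collect and run the doc string examples; each method's test starts
# fresh, so each one carries its own copy of the setup.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(verbose=False)
for test in finder.find(Account, "Account", globs={"Account": Account}):
    runner.run(test)
```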
Tim_one: Do you read this? What do you think about the idea of adding
something like the following two functions to 'doctest':
use_module_scenario() -- imports all objects created and preserved during
execution of the module doc string examples.
use_class_scenario() -- imports all objects created and preserved during
the execution of doc string examples of a class. Only allowed in doc
string examples of methods.
This would make it easy to provide the same setup scenario to a group
of related test cases.
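A rough sketch of what use_class_scenario() might do (this is my own
hypothetical implementation, not anything that exists in doctest):
replay the '>>> ' examples from the class doc string and hand the
resulting objects to the method tests as a shared starting scenario.

```python
def use_class_scenario(cls):
    # Hypothetical sketch: re-execute the '>>> '/'... ' example lines
    # from a class doc string and return the names they created, so
    # the doc string tests of related methods can share one setup.
    source_lines = []
    for line in (cls.__doc__ or "").splitlines():
        stripped = line.strip()
        if stripped.startswith(">>> ") or stripped.startswith("... "):
            source_lines.append(stripped[4:])
    namespace = {cls.__name__: cls}
    exec("\n".join(source_lines), namespace)
    return namespace

class Stack:
    """A toy class whose doc string builds the shared scenario:

    >>> s = Stack()
    >>> s.push(1)
    """
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()

# A method doc string test could then start from the shared scenario
# instead of repeating the setup lines:
scenario = use_class_scenario(Stack)
```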
As far as I understand, doctest handles test shutdown automatically,
provided the doc string test examples leave no persistent resources
behind.