[John Belmonte]
I noticed that the "whole file doctests" item didn't make it to the DoctestIdeas page.
There's more on the wiki now, but no real details. Code for it now exists in doctest.py on the tim-doctest-branch.
I was looking for more details about that.
Here's the docstring. In context (though not in isolation, as here), it's clear that it creates a unittest.TestSuite:
def DocFileSuite(*paths, **kw):
    """Creates a suite of doctest files.

    One or more text file paths are given as strings.  These should
    use "/" characters to separate path segments.  Paths are relative
    to the directory of the calling module, or relative to the
    package passed as a keyword argument.

    A number of options may be provided as keyword arguments:

    package
        The name of a Python package.  Text-file paths will be
        interpreted relative to the directory containing this
        package.  The package may be supplied as a package object
        or as a dotted package name.

    setUp
        The name of a set-up function.  This is called before
        running the tests in each file.

    tearDown
        The name of a tear-down function.  This is called after
        running the tests in each file.

    globs
        A dictionary containing initial global variables for the
        tests.
    """
For concrete examples, put the Zope3 source code on a wall, and throw darts at it.
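More prosaically, a test module might wire tutorial files into a suite like this (a minimal sketch; the file names and the extra global are invented):

    import unittest
    import doctest   # the tim-doctest-branch version, with DocFileSuite

    def test_suite():
        # The paths are looked up relative to this module's directory.
        return doctest.DocFileSuite('intro.txt', 'advanced.txt',
                                    globs={'answer': 42})

    if __name__ == '__main__':
        unittest.main(defaultTest='test_suite')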
I've found that for tutorial narratives, I really don't want to bother with expected output.
Do you show any output at all? If not, this isn't for you.
For complex software, the output may not always be the same anyway.
We want to focus on invariant (guaranteed) behavior in tests. I think the same is true of good docs. If you can't say anything about the output, you can't test it, and the documentation must be fuzzy too.
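Even when the raw output varies from run to run, a doctest can still pin down what's guaranteed about it; an invented snippet:

    >>> import time
    >>> stamp = time.time()   # the exact value changes on every run
    >>> stamp > 0             # but this invariant always holds
    True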
Also, it's not fun to manipulate input code that has prompts and is within a multi-line string.
The files DocFileSuite chews on are text files, not necessarily acceptable input to Python (in fact, they never are in practice). So the multi-line string concern doesn't exist in these. Mucking with prompts is indeed a price of entry.
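Such a file is mostly narrative with interpreter sessions embedded in it; a made-up example.txt:

    This tutorial shows how to greet the world:

        >>> print 'Hello, world'
        Hello, world

    Everything outside the ">>>" examples is plain prose, so the file
    as a whole is never valid Python source.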
Syntax highlighting is not available, for one thing.
That depends on the editor, but I agree it's at best clumsy to set up.
I've made my own tutorial generator. Given this input script:
## Double hash comments treated as meta comments and not output to
## the tutorial.

"""
Triple-quote comments that start in the first column are treated as
tutorial text.

First we'll import some modules:
"""
import os

"""
Some code follows.  In addition to hash comments, triple-quote
comments not starting in the first column are preserved.  Note that
code blocks such as functions must end with a blank line, just as is
done in the interactive interpreter.
"""
def foo():
    """foo description here"""
    print 'hello'  # be friendly

#my_path = 'foo/bar'
my_path = os.path.join('foo', 'bar')
my_path
foo()
the output is:
Triple-quote comments that start in the first column are treated as
tutorial text.

First we'll import some modules:

>>> import os

Some code follows.  In addition to hash comments, triple-quote
comments not starting in the first column are preserved.  Note that
code blocks such as functions must end with a blank line, just as is
done in the interactive interpreter.

>>> def foo():
...     """foo description here"""
...     print 'hello'  # be friendly
...
>>> #my_path = 'foo/bar'
>>> my_path = os.path.join('foo', 'bar')
>>> my_path
'foo/bar'
>>> foo()
hello
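The core idea is little more than replaying the script through an emulated interactive interpreter; a stripped-down sketch of the approach (simplified, not the real tool):

    import code
    import sys

    def generate(script_lines, out=sys.stdout):
        # Replay the script through an emulated interpreter.  Code we
        # execute prints its own results to stdout, interleaved with
        # the prompts echoed here, so pass out=sys.stdout in practice.
        console = code.InteractiveConsole()
        in_prose = False
        need_more = False
        for line in script_lines:
            line = line.rstrip('\n')
            if line.startswith('##'):
                continue                    # meta comment: dropped
            if line.startswith('"""'):
                in_prose = not in_prose     # first-column prose block
                continue
            if in_prose:
                print >> out, line          # tutorial text, verbatim
                continue
            if line or need_more:
                if need_more:
                    prompt = '... '
                else:
                    prompt = '>>> '
                print >> out, prompt + line
            else:
                print >> out, ''            # keep paragraph breaks
            need_more = console.push(line)  # execute; True while the
                                            # statement is incomplete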
So if you put your output in a file, you could feed it exactly as-is to DocFileSuite(), and it would create a unittest suite for you that verifies the output in your examples is in fact what it claims to be.
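In code, the round trip might look like this (assuming the generated tutorial was saved as tutorial.txt, an invented name):

    import unittest
    import doctest

    suite = doctest.DocFileSuite('tutorial.txt')
    unittest.TextTestRunner().run(suite)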