[sapug] Python Resource: Testing In Python Mailing List.

Tim Wegener twegener at fastmail.fm
Wed May 30 15:22:34 CEST 2007

On Wed, 2007-05-30 at 11:06 +0930, Chris Foote wrote:
> On Tue, 29 May 2007, Daryl Tester wrote:
> > I stumbled across this the other month - the "Testing In Python"
> > mailing list.  For all you Mock Objectivists and people who, like
> > horror, test the code you write.  There's even some interesting
> > snippets from Jim Fulton (*The* Zope Architect) on their testing
> > strategies.
> >
> > <http://lists.idyll.org/listinfo/testing-in-python>
> I've toyed with PyUnit a little bit (http://pyunit.sourceforge.net)
> but I don't really understand test driven development.

Here are some other testing frameworks worth looking at:

Doctest is in the standard library. Writing tests is simply a matter of
writing a transcript of what would happen at the interactive Python
prompt and putting that in the function's docstring.
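A minimal sketch of what that looks like (the add function is just an
illustration, not anything from the list discussion):

```python
def add(a, b):
    """Return the sum of a and b.

    The lines below look like an interactive session; doctest replays
    them and fails if the actual output differs.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b

if __name__ == "__main__":
    # Run all doctests found in this module's docstrings.
    import doctest
    doctest.testmod()
```

Running the module directly prints nothing when every example passes,
and a diff-style report when one fails.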

Py.Test is an alternative to PyUnit. Writing tests is as simple as
writing plain functions containing assert statements.

Nose is another test framework along the lines of Py.Test.

> To me it seems like you'd spend far too much extra time getting your code
> to fit the tests than getting real work done, especially with projects
> than involve lots of external components like databases, distributed
> procedures and actions that take place in the background using threads.
> Maybe I'm missing a big piece of the testing puzzle :-(

For external components it is often useful to use stubs, mocks, etc.
instead. Stubs/mocks are basically fake versions of these components
whose behaviour you can easily control. This can make test suites run
faster, and makes it easier to pin down behaviour that would be
non-deterministic in the final application. 
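As a sketch of the idea, here is a hand-rolled stub standing in for an
imaginary database connection (StubDatabase and count_active_users are
both invented for illustration):

```python
class StubDatabase:
    """Fake database: returns canned rows instantly, no server needed."""

    def __init__(self, rows):
        self.rows = rows
        self.queries = []  # record every query so the test can inspect it

    def query(self, sql):
        self.queries.append(sql)
        return self.rows

def count_active_users(db):
    # Code under test. It only depends on the query() interface,
    # so a stub serves as well as a real connection.
    rows = db.query("SELECT * FROM users WHERE active = 1")
    return len(rows)

# The test controls exactly what the "database" returns.
db = StubDatabase(rows=[("alice",), ("bob",)])
assert count_active_users(db) == 2
assert db.queries == ["SELECT * FROM users WHERE active = 1"]
```

Because the stub records the queries it receives, the test can also
verify *how* the external component was used, not just the result.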

> Having code not break when getting modified by herds of software engineers
> would have to be a good reason to use it, but aren't its benefits very
> limited for one or two man projects ?

All code needs to get debugged. When you debug code you need to prove to
yourself that it now works where it didn't before. You might as well
formalise that proof as a test. Doing this up front clarifies the
problem, and shows whether the bug, as defined by the test, has been
fixed. A comprehensive set of tests is essentially an executable spec;
any untested behaviour is undefined in terms of that spec. 

Even for projects implemented by one or two people, tests are very
valuable as a spec, as regression tests, and as a form of API
documentation that is guaranteed to stay in sync with the
implementation.

In my experience a good rule of thumb is: if it hasn't been tested it's
almost definitely broken.

Mind you, it's easy to say all of this, but it takes a bit of discipline
and faith in the process to launch into the test-first approach. 


More information about the sapug mailing list