Failing unittest Test cases

Duncan Booth duncan.booth at invalid.invalid
Tue Jan 10 04:39:25 EST 2006


Scott David Daniels wrote:

> There has been a bit of discussion about a way of providing test cases
> in a test suite that _should_ work but don't.  One of the rules has been
> the test suite should be runnable and silent at every checkin.  Recently
> there was a checkin of a test that _should_ work but doesn't.  The
> discussion got around to means of indicating such tests (because the
> effort of creating a test should be captured) without disturbing the
> development flow.

I like the concept. It would be useful when someone raises an issue which 
can be tested for easily but for which the fix is non-trivial (or has side 
effects), so the issue gets shelved. With this decorator you can add the 
failing unit test, and then 6 months later, when an apparently unrelated bug 
fix also happens to fix the original one, you get told 'The thrumble doesn't 
yet gsnort (see issue 1234)' and know you should now go and update that 
issue.
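
Roughly, I'm imagining something along these lines (just a sketch of one way 
it could work, not Scott's actual code; the decorator name, the reason string 
and the example test are all my own invention): a test that still fails stays 
silent, while one that unexpectedly passes fails loudly with the recorded 
reason.

import functools
import unittest

def broken(reason):
    """Hypothetical marker for a test that is expected to fail.

    While the test still fails the suite stays silent; the moment it
    unexpectedly passes, the recorded reason is reported so the
    associated issue can be revisited.
    """
    def decorator(test_method):
        @functools.wraps(test_method)
        def wrapper(self, *args, **kwargs):
            try:
                test_method(self, *args, **kwargs)
            except Exception:
                return  # still broken: keep the checkin-time run silent
            self.fail("Broken test unexpectedly passes: %s" % reason)
        return wrapper
    return decorator

class ThrumbleTest(unittest.TestCase):
    @broken("The thrumble doesn't yet gsnort (see issue 1234)")
    def test_thrumble_gsnorts(self):
        # Stand-in assertion; fails until the feature is implemented.
        self.assertTrue(False)

if __name__ == '__main__':
    unittest.main()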

It also means you have scope in an open source project to accept an issue 
and incorporate a failing unit test for it before there is an acceptable 
patch. This shifts the act of accepting a bug from putting it onto some 
nebulous list to actually recognising in the code that there is a 
problem. Having a record of the failing issues actually in the code would 
also help to tie together bug fixes across different development branches.

Possible enhancements:

Add another argument for the associated issue tracker id. (I know you could 
put it in the string, but a separate argument would encourage the programmer 
to realise that every broken test should have an associated tracker entry.) 
Although, since some unbroken tests will also have associated issues, this 
might be better as a separate decorator.

Add some easyish way to generate a report of broken tests.
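
For example, a report could be generated by walking a TestSuite and picking 
out tests whose methods carry the metadata stashed by a decorator like the 
one sketched above (broken_reason/broken_issue are the assumed attribute 
names from that sketch):

import unittest

def broken_test_report(suite):
    """Hypothetical helper: list the broken tests found in a suite."""
    lines = []
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            lines.extend(broken_test_report(test))  # recurse into sub-suites
        else:
            method_name = test.id().rsplit('.', 1)[-1]
            method = getattr(test, method_name, None)
            reason = getattr(method, 'broken_reason', None)
            if reason is not None:
                issue = getattr(method, 'broken_issue', None) or '-'
                lines.append("%s: %s (issue %s)" % (test.id(), reason, issue))
    return lines

# Example use:
# suite = unittest.defaultTestLoader.loadTestsFromTestCase(ThrumbleTest)
# print('\n'.join(broken_test_report(suite)))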


