[Python-Dev] Avoiding cascading test failures

Gregory P. Smith greg at krypto.org
Sun Aug 26 00:59:21 CEST 2007


On Wed, Aug 22, 2007 at 07:44:02PM -0400, Alexandre Vassalotti wrote:
> When I was fixing tests failing in the py3k branch, I found the number
> of duplicate failures annoying. Often, a single bug in an important
> method or function caused a large number of test cases to fail. So, I
> thought of a simple mechanism for avoiding such cascading failures.
> 
> My solution is to add a notion of dependency to testcases. A typical
> usage would look like this:
> 
>     @depends('test_getvalue')
>     def test_writelines(self):
>         ...
>         memio.writelines([buf] * 100)
>         self.assertEqual(memio.getvalue(), buf * 100)
>         ...
> 
> Here, running the test is pointless if test_getvalue fails. So by
> making test_writelines depend on the success of test_getvalue, we can
> ensure that the report won't be polluted with unnecessary failures.
> 
> Also, I believe this feature will lead to more orthogonal tests, since
> it encourages the user to write smaller tests with fewer dependencies.
> 
> I wrote an example implementation (included below) as a proof of
> concept. If the idea gets enough support, I will implement it and add
> it to the unittest module.
> 
> -- Alexandre
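
For illustration, here is a minimal sketch of the kind of decorator
Alexandre describes. It is not his posted implementation: it assumes a
hypothetical module-level _failed registry, the TestCase.skipTest()
support that later landed in unittest (Python 2.7/3.1), and unittest's
default alphabetical run order, which happens to execute test_getvalue
before test_writelines.

    import functools
    import io
    import unittest

    # Hypothetical registry of the tests that have failed so far.
    _failed = set()

    def depends(*names):
        """Skip the decorated test if any named dependency already failed."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                broken = _failed.intersection(names)
                if broken:
                    # Raises unittest.SkipTest, so the runner reports a
                    # skip instead of a redundant failure.
                    self.skipTest('dependency failed: %s'
                                  % ', '.join(sorted(broken)))
                try:
                    return func(self, *args, **kwargs)
                except Exception:
                    # Record the failure so dependent tests get skipped.
                    _failed.add(func.__name__)
                    raise
            return wrapper
        return decorator

    class MemIOTest(unittest.TestCase):
        def test_getvalue(self):
            memio = io.StringIO('buf')
            self.assertEqual(memio.getvalue(), 'buf')

        @depends('test_getvalue')
        def test_writelines(self):
            memio = io.StringIO()
            memio.writelines(['buf'] * 100)
            self.assertEqual(memio.getvalue(), 'buf' * 100)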

I like this idea.

Be sure to have an option to ignore dependencies and run all tests.
Also, when skipping tests because a dependency failed, have unittest
print out an indication that a test was skipped due to a dependency
rather than silently running fewer tests.  Otherwise it could be
deceptive and appear that only one test was affected.

Greg
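
Both of Greg's points could be bolted onto the sketch above. The
environment variable name here is made up for illustration; the skip
reason names the failed dependency so a verbose run reports the skip
instead of silently dropping the test.

    import functools
    import os

    _failed = set()

    # Hypothetical override: set IGNORE_TEST_DEPS=1 to run every test
    # even when its dependencies failed.
    IGNORE_DEPS = os.environ.get('IGNORE_TEST_DEPS') == '1'

    def depends(*names):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                broken = _failed.intersection(names)
                if broken and not IGNORE_DEPS:
                    # A verbose run then reports something like:
                    #   test_writelines ... skipped 'dependency failed:
                    #   test_getvalue'
                    # and the summary counts it, e.g. "OK (skipped=1)",
                    # so the skipped test stays visible.
                    self.skipTest('dependency failed: %s'
                                  % ', '.join(sorted(broken)))
                try:
                    return func(self, *args, **kwargs)
                except Exception:
                    _failed.add(func.__name__)
                    raise
            return wrapper
        return decorator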


