[Python-Dev] Avoiding cascading test failures
Collin Winter
collinw at gmail.com
Wed Aug 29 05:15:38 CEST 2007
On 8/22/07, Alexandre Vassalotti <alexandre at peadrop.com> wrote:
> When I was fixing tests failing in the py3k branch, I found the number
> of duplicate failures annoying. Often, a single bug in an important
> method or function caused a large number of test cases to fail. So, I
> thought of a simple mechanism for avoiding such cascading failures.
>
> My solution is to add a notion of dependency to testcases. A typical
> usage would look like this:
>
> @depends('test_getvalue')
> def test_writelines(self):
>     ...
>     memio.writelines([buf] * 100)
>     self.assertEqual(memio.getvalue(), buf * 100)
>     ...
This definitely seems like a neat idea. Some thoughts:
* How do you deal with dependencies that cross test modules? Say test
A depends on test B: how do we know whether it's worthwhile to run A
if B hasn't been run yet? It looks like you run the test anyway (I
haven't studied the code closely), but that doesn't seem ideal.
* This might be implemented in the wrong place. For example, the [x
for x in dir(self) if x.startswith('test')] you do is most certainly
better-placed in a custom TestLoader implementation. That might also
make it possible to order tests based on their dependency graph, which
could be a step toward addressing the above point.
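
For instance, a dependency-aware loader might look something like the
sketch below. It assumes the decorator leaves the declared dependencies
on the test method as a _depends_on attribute (an illustrative name, not
anything in Alexandre's patch), and it only reorders tests within a
single TestCase class:

import unittest

class DependencyLoader(unittest.TestLoader):
    # Illustrative: return test names ordered so that each test comes
    # after the tests it declares as prerequisites.
    def getTestCaseNames(self, testCaseClass):
        names = super().getTestCaseNames(testCaseClass)
        deps = {}
        for name in names:
            deps[name] = getattr(getattr(testCaseClass, name),
                                 '_depends_on', ())
        ordered, seen = [], set()
        def visit(name):
            # Depth-first: emit a test's prerequisites before the test.
            if name in seen or name not in deps:
                return
            seen.add(name)
            for prerequisite in deps[name]:
                visit(prerequisite)
            ordered.append(name)
        for name in names:
            visit(name)
        return ordered

Passing an instance of this via unittest.main(testLoader=DependencyLoader())
would hand the runner the tests in dependency order. It still says nothing
about dependencies that cross modules, which is the first point above.
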
But despite that, I think it's a cool idea and worth pursuing. Could
you set up a branch (probably of py3k) so we can see how this plays
out in the large?
Collin Winter