Failing unittest test cases

Roy Smith roy at panix.com
Tue Jan 10 09:25:16 EST 2006


Peter Otten <__peter__ at web.de> wrote:
> You're right of course. I still think the "currently doesn't pass" marker
> doesn't belong in the test source.

The agile people would say that if a test doesn't pass, you make fixing it 
your top priority.  In an environment like that, there's no such thing as a 
test that "currently doesn't pass".  But real life is not so kind.

These days, I'm working on a largish system (I honestly don't know how many 
lines of code, but full builds take about 8 hours).  A fairly common 
scenario is that some unit test fails in a high-level part of the system, and 
we track it down to a problem in one of the lower levels.  It's a different 
track it down to a problem in one of the lower levels.  It's a different 
group that maintains that bit of code.  We understand the problem, and know 
we're going to fix it before the next release, but that's not going to 
happen today, or tomorrow, or maybe even next week.

So, what do you do?  The world can't come to a screeching halt for the next 
couple of weeks while we're waiting for the other group to fix the problem.  
What we typically do is just comment out the offending unit test.  If the 
developer who does that is on the ball, a PR (problem report) gets opened 
too, to track the need to reinstate the test, but sometimes that doesn't 
happen.  A better solution would be a way to mark the test "known to fail 
because of xyz".  That way it continues to show up on every build report 
(so it's not forgotten about), but doesn't break the build.
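
For what it's worth, a marker like that isn't hard to cobble together as a 
decorator.  Here's a minimal sketch; the known_failure name, the reason 
string, and the failing assertion are all made up for illustration:

    import functools
    import unittest

    def known_failure(reason):
        """Mark a test as known-to-fail for the given reason.

        The test still runs on every build (so it isn't forgotten),
        but a failure is reported instead of breaking the build.  If
        the test starts passing, that gets flagged so the marker is
        removed.
        """
        def decorator(test_method):
            @functools.wraps(test_method)
            def wrapper(self):
                try:
                    test_method(self)
                except AssertionError:
                    # Expected failure: report it, but don't break the build.
                    print("known failure (%s): %s"
                          % (reason, test_method.__name__))
                else:
                    self.fail("unexpectedly passed; remove known_failure")
            return wrapper
        return decorator

    class LowerLevelTest(unittest.TestCase):
        @known_failure("waiting on a fix from the lower-level group")
        def test_broken_dependency(self):
            self.assertEqual(1 + 1, 3)  # stand-in for the real check

    if __name__ == '__main__':
        unittest.main()

(unittest has since grown essentially this feature as the 
unittest.expectedFailure decorator, which counts expected failures and 
unexpected successes separately in the run summary.)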


