Hardware take on software testing.

Michele Simionato mis6 at pitt.edu
Tue Jun 10 15:38:08 CEST 2003

Peter Hansen <peter at engcorp.com> wrote in message news:<3EE4F2F0.2EC4FBA5 at engcorp.com>...
> I definitely won't argue against the fact of subtle bugs that are 
> extremely hard to catch with any testing done in advance.  It's just
> not a good idea to write code that is prone to such problems, however,
> so I'd encourage a different approach to writing code, which does
> not lead to such problems very often.

Easy to say, but hardly possible. There are areas of programming that are
*hard*, irrespective of the approach. I mean, a bad approach can make a
simple thing hard, but a hard thing cannot be made easy by the approach.

> > Let me take a concrete example, i.e. my recipe to solve the metaclass
> > conflict I posted recently. Here the script is less than 20 line long.
> > However, I cannot simply write a comprehensive test suite, since there
> > are TONS of possibilities and it is extremely difficult for me to imagine
> > all the use cases. I can write a big test suite of hundreds of lines,
> > but still cannot be completely sure. I cannot spend weeks in writing the
> > test suite for 20 lines of code!
> But those 20 lines were not written test-first.  

NO! THEY WERE!! That is the main reason why I was induced to submit my 
original post.

I wrote those 20 lines test-first, deciding the use cases I wanted to
implement before writing any line of code. TDD worked in the sense that
the code I wrote after having written the tests conformed to the 
specifications. Then I was happy and I posted the recipe. Immediately 
somebody came up with many very subtle corner cases where the recipe 
couldn't work. Not real bugs, but limitations or warts. Some of them I
knew, others were unexpected. Now my point is that TDD gave me a 
false sense of security (similar to what you get when your program 
compiles in a compiled language) and made me overconfident.
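
(For readers who haven't seen the recipe: the situation it deals with can be
sketched in a few lines. This is only a minimal illustration of the conflict
itself, not the recipe; the class names here are made up, and I am using
modern Python syntax rather than the Python 2 syntax of the time.)

```python
# Minimal illustration of a metaclass conflict.
class M1(type): pass
class M2(type): pass

class A(metaclass=M1): pass
class B(metaclass=M2): pass

try:
    # The metaclass of C must derive from the metaclasses of all its bases,
    # but neither M1 nor M2 is a subclass of the other.
    class C(A, B):
        pass
except TypeError as e:
    print(e)  # metaclass conflict: ...

# The manual fix is to build a combined metaclass by hand;
# the recipe automates exactly this step.
class M3(M1, M2): pass

class C(A, B, metaclass=M3):
    pass
```

With the combined metaclass the definition of C goes through, but every new
combination of bases needs its own combined metaclass, which is the tedium
the recipe was meant to remove.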

In more general terms, since at every step of TDD your program works and is 
bug free (in the sense that you write each test first and make sure it 
passes), you have the tendency to skip the initial step of overall design. 
Now, 99% of the time this is a good thing and I like the agile philosophy; 
nevertheless there are cases in which the confidence coming from TDD can 
bite you. Still, I think TDD is way better than the alternative methodologies; 
in particular, I don't think the "initial big design" approach can ever work; 
nevertheless TDD is not a panacea for everything.
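
(To make the point concrete, here is a hypothetical test-first fragment, not
taken from the recipe. The test is written first, the code makes it pass, the
suite is green; yet a use case nobody imagined up front is silently
mishandled.)

```python
import unittest

def combine(a, b):
    """Merge two configuration dicts (hypothetical example, written test-first)."""
    result = dict(a)
    result.update(b)  # silently lets b win on overlapping keys
    return result

class TestCombine(unittest.TestCase):
    # The only use case imagined up front: disjoint keys.
    def test_disjoint_keys(self):
        self.assertEqual(combine({'x': 1}, {'y': 2}), {'x': 1, 'y': 2})

if __name__ == '__main__':
    unittest.main()
```

The test passes and TDD says the code is done; but a caller passing
{'x': 1} and {'x': 2} gets a behavior that was never specified, never tested,
and possibly never intended.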

In the same sense that one should be skeptical about the confidence he gets 
when his program compiles, he should be skeptical about the confidence he 
gets when his program passes the test suite.

> If you were to imagine only *one* specific use case, the most important 
> one for your own
> purposes, and write a single test that exposes the most obvious and
> easiest aspect of that one use case, and then implemented just enough
> code to pass that one single test, how certain would you be that the
> code had lots of bugs?  If you repeated that step over and over again,
> constantly retesting, refactoring, and only adding code that already 
> had a test for it, do you think you'd be quite so unsure about it?
> Now I grant that if someone comes along and uses that 20 lines, no 
> matter how many tests you've written for it, in a way you haven't
> envisioned (and which is therefore not covered by your tested use
> cases), then you might start to sweat, and even _expect_ bugs.  I'm
> not sure you should, because the code is probably extremely well-
> designed and robust at that point, but it's possible the new type
> of use will expose an edge case you hadn't quite noticed or something.
> So you write another test to catch it, refactor the code again, and
> go back to sleep.
Yes, that is what I did. Now I have many more passing tests but, ironically, 
much less confidence in them. Does that make any sense? ;)


More information about the Python-list mailing list