Hardware take on software testing.
Donald 'Paddy' McCarthy
paddy3118 at blueyonder.co.ukNOTthisBIT
Mon Jun 9 11:46:37 CEST 2003
Peter Hansen wrote:
> Terence Way wrote:
>>Cleanroom, and much of the software development methodologies, insist
>>that bugs get more expensive as time goes on, so it's cost-effective
>>to spend time up-front to remove all bugs. XP refutes that, and says
>>bugs cost the same no matter what: it's only software, just change it
>>when you find a bug.
> I'm unsure that's the way XP looks at it. XP does three things, that
> I can see, in relation to this.
> One is it avoids the very expensive "big design up-front", along with
> the masses of unused documentation, the lengthy design sessions, and
> so forth, and therefore reduces the cost of coding, where the bugs
> are actually inserted, to a more reasonable level (and, incidentally,
> reduces the cost of bugs at the same time).
> The second is that it puts a project into maintenance mode
> from the first week, and therefore flattens the supposedly
> rising curve of bug-cost as seen in traditional projects to the
> level seen during maintenance. (Hmm... never heard it described
> that way... maybe that's not quite true, but it seems valid at
> the moment.)
> The third, and more important, is that it tries hard to avoid
> inserting bugs in the first place, with the extensive unit and
> acceptance tests, and test-driven development. If you don't
> put most of the bugs in in the first place, you don't really
> get to worrying about the cost of the few remaining ones (and,
> as you point out, you just change it.)
I guess one of the things I question about XP, then, is the notion that
the quality of your tests ensures you only add good code to your
program, and that prudent refactoring removes whatever code is
unnecessary.
If you are doing the above manually, then I think you could gain a lot
by using coverage tools to help assess the quality of your tests -- not
necessarily with respect to a spec, but in terms of how well those tests
exercise your program. Coverage tools can be applied automatically for
simple but useful results, or can give more detailed results with manual
intervention. If someone is doing XP for a commercial project, why not
invest in coverage tools? A coverage report could be part of every peer
review, or generated as a matter of course during development if
runtimes are not too long.
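To make the idea concrete, here is a rough sketch of the kind of thing a
line-coverage tool measures, using only the standard sys.settrace hook.
The function names (measure_coverage, sign) are invented for
illustration; a real tool such as the standard library's trace module
does far more.

```python
import sys
from collections import defaultdict

def measure_coverage(func, *args):
    """Run func(*args) while recording which source lines execute.

    A minimal line-coverage sketch via sys.settrace; real coverage
    tools also handle branches, reporting, and multiple runs.
    """
    executed = defaultdict(set)  # filename -> set of executed line numbers

    def tracer(frame, event, arg):
        if event == "line":
            executed[frame.f_code.co_filename].add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)  # always switch tracing off again
    return executed

def sign(x):  # toy function under test
    if x > 0:
        return 1
    elif x < 0:
        return -1
    return 0

# Calling sign(5) exercises only the 'x > 0' branch; a coverage
# report built from 'hits' would flag the other branches as untested.
hits = measure_coverage(sign, 5)
```

A test suite whose runs leave lines unvisited in a report like this is
exactly the kind of gap a peer review could catch cheaply.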
Random constrained generation wouldn't have to be seen as outside XP or
TDD. If the language used for writing tests had good support for random
generation, and software engineers were trained in the technique, it
would just become another arrow in their quiver in their hunt for bugs
on their quest for quality software. (Whoops, getting a little flowery
towards the end there :-)
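As a sketch of what "random constrained generation" might look like
inside an ordinary test, here is one way to generate inputs that satisfy
a constraint (sorted lists of bounded integers) and check a property
over many of them. The names (random_case, binary_search) and the
fixed-seed convention are illustrative choices, not any particular
tool's API.

```python
import random

def random_case(rng, lo=-1000, hi=1000):
    """Generate a random but constrained input: a sorted list of ints."""
    n = rng.randint(0, 20)
    return sorted(rng.randint(lo, hi) for _ in range(n))

def binary_search(xs, target):
    """Toy function under test: index of target in sorted xs, or -1."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(xs) and xs[lo] == target else -1

rng = random.Random(42)  # fixed seed, so any failure is reproducible
for _ in range(200):
    xs = random_case(rng)
    for target in xs:  # every present element must be found
        assert xs[binary_search(xs, target)] == target
    # a value outside the constrained range must not be found
    assert binary_search(xs, 10**6) == -1
```

Seeding the generator keeps the tests repeatable, so a randomly found
failure can be replayed and fixed in the usual TDD rhythm.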
More information about the Python-list