Hardware take on software testing.

Paddy McCarthy paddy3118 at netscape.net
Fri Jun 6 21:21:13 EDT 2003


Peter Hansen <peter at engcorp.com> wrote in message news:<3EE0CF2B.57F474E4 at engcorp.com>...
> Paddy McCarthy wrote:
> > 
> > If software were tested like hardware...
>  [snip]
> > I ask because I value the view of Pythonistas and if you look at the
> > electronic/Software products you buy today, a lot go out the door with
> > bugs. The software industry has relied on the fact that it doesn't
> > cost them much to make another release, but if you look at the
> > proliferation of updates out there, it doesn't seem to be working.
> > Customers should expect more.
> 
> Actually, the conditions under which software and hardware are developed
> are quite different in some ways.  Those differences can make it quite
> inappropriate to treat the two the same.  New approaches that are being
> explored for software development, which would likely be fundamentally 
> infeasible for developing hardware, can be collectively labelled "agile
> processes".  At the heart of one of these processes, Extreme Programming,
> is a new approach to design, testing, and coding, called Test-Driven
> Development (TDD).
> 
> With the TDD approach, tests are written first (actually, you start with
> only one test), then executed to prove that they fail ("red light"), then 
> just enough code is written to make the test pass.  Once the test passes 
> ("green light"), the code is refactored to make sure it is clean and 
> proper, the tests are re-run to ensure nothing has broken, and work 
> proceeds on implementing the next test.
> 
<<SNIP>>
> 
> So to answer the question that you didn't actually get around to asking
> explicitly: I don't think testing software like hardware would be a good
> idea, but I also don't think there's a need as we seem to have developed
> other approaches which are sufficiently good that they could well eliminate
> the situation where "a lot go out the door with bugs".  Time will tell,
> but those of us applying these new techniques are so far generally 
> satisfied, or even ecstatic, with the results.
> 
> -Peter

Yeah, that question :-)
It seems the discussion has started anyway, so I'll continue...
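First, to make the red/green cycle Peter describes concrete, here is a
minimal sketch using Python's unittest module; the Stack class and its
test are invented for illustration, not taken from anyone's real code:

    import unittest

    class Stack:
        """Just enough code to make the test below pass ("green")."""
        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

    class TestStack(unittest.TestCase):
        # Written first: run it against an empty Stack class, watch it
        # fail ("red"), then write just enough code to make it pass.
        def test_pop_returns_last_pushed_item(self):
            s = Stack()
            s.push(42)
            self.assertEqual(s.pop(), 42)

    if __name__ == '__main__':
        unittest.main()

Once the test passes you refactor, re-run, and move on to the next test.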

On TDD: when do you know you are done?
In the hardware development process we graph the number of bugs found
over time and end up with an S-curve. We also set coverage targets
(100 percent statement coverage of executable statements is the
norm), and, rather like the TDD approach in software, some teams have
dedicated verification engineers who independently derive a
verification spec from the design spec and write tests for the design
to satisfy it.
OK, there is great pressure to pass off the design on time, so there
is flexibility in deciding how much testing is enough, but at least
there are charts and figures to guide us.
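
Statement coverage of that sort can be measured on the software side
too, e.g. with Python's standard-library trace module. A minimal
sketch, with main() standing in for whatever code is under test:

    import trace

    def main():
        # Stand-in for the code under test.
        total = 0
        for i in range(10):
            if i % 2:
                total += i
        return total

    # count=1 records how often each line executes; trace=0 keeps it
    # from echoing every line as it runs.
    tracer = trace.Trace(count=1, trace=0)
    tracer.run('main()')

    # summary=True prints percent statement coverage per module;
    # show_missing=True lists executable lines that never ran.
    tracer.results().write_results(show_missing=True, summary=True,
                                   coverdir='.')

That gives you a figure to chart against a coverage target, much like
the hardware sign-off numbers above.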

If TDD uses no directed-random generation, then don't you tend to test
only the behaviour you already assumed?
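
For comparison, the directed-random style from hardware verification
might look something like this in Python: constrained random stimulus
checked against a trusted reference. This is only a sketch; my_sort is
a made-up name for the implementation under test, and the built-in
sorted() plays the golden reference model:

    import random

    def my_sort(items):
        # Stand-in for the implementation under test.
        return sorted(items)

    # Directed-random: constrain the stimulus (list length, value
    # range) but let the generator pick the particulars, so we also
    # hit cases nobody thought to write a directed test for.
    random.seed(1234)  # fixed seed makes failures reproducible
    for trial in range(1000):
        stimulus = [random.randint(-100, 100)
                    for _ in range(random.randint(0, 20))]
        expected = sorted(stimulus)        # golden reference model
        actual = my_sort(list(stimulus))
        assert actual == expected, (stimulus, actual)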

Cheers, Pad.



