Unit testing data; What to test
Dinu Gherman
gherman at darwin.in-berlin.de
Wed Feb 21 05:10:19 EST 2001
"John J. Lee" wrote:
>
> Having decided testing was a Good Thing and that I ought to do it, I've
> started to write tests, using PyUnit.
>
> The first question is straightforward: do people have a standard, simple
> way of handling data for tests, or do you just tend to stick most of it in
> the test classes? KISS I suppose. But if the software is going to change
> a lot, isn't it a good idea to separate the tests from their input and
> expected result data? Of course with big data -- I have some mathematical
> functions that need to be checked for example -- you're obviously not
going to dump it directly into the test code: I'm wondering more about data of
> the order of tens of objects (objects in the Python sense).
>
> In fact, do unit tests often end up having to be rewritten as code is
> refactored? Presumably yes, given the (low) level that unit tests are
> written at.
>
> The second (third) one is vague: What / How should one test? Discuss.
>
> I realise these questions are not strictly python-specific, but (let's
> see, how do I justify this) it seems most of what is out there on the web
> & USENET is either inappropriately formal, large-scale and
> business-orientated for me (comp.software.testing and its FAQ, for
> example), or merely describes a testing framework. A few scattered XP
> articles are the only exceptions I've found.
>
> I'm sure there must be good and bad ways to test -- for example, I read
> somewhere (can't find it now) that you should aim to end up so that each
> bug generates, on (mode) average, one test failure, or at least a small
> number. The justification for this was that lots of tests failing as a
> result of a single bug are difficult to deal with. It seems to me that
> this goal is a) impossible to achieve and b) pointless, since if multiple
> test failures really are due to a single bug, they will all go away when
> you fix it, just as compile-time errors often do. No?
>
> John
I suggest reading some of the written material about XP
that you can find by searching your favourite online book
dealer for "extreme programming". I get five hits for the
series started by Kent Beck with Addison-Wesley, two of
which are not yet published... The first three, which I do
know, are all *thin* books and easy to read!
In general I'd say only this much: nobody knows better
*what* to test than the people writing the code (after the
use cases / user stories have been determined)! So, if there
is a lack of knowledge about what to test (I'm not sure that
is really the case here, but it is the subject line ;-)
there must be some deeper issue with the common understanding
of what the system is expected to do.
As for organizing the tests: see the books I mentioned, but
don't expect a detailed process description. Whatever works
for you will be fine! If something doesn't: adapt, shake and
iterate! ;-)
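
As for the question about keeping moderate amounts of data out
of the test classes, here is a minimal sketch of one way it might
be done with PyUnit (unittest) -- the function my_sin and the
sample values are of course just made up for illustration, and
the table of cases could as well live in its own module or file:

import math
import unittest

# Table of (input, expected) pairs; in a real project this might
# live in its own module (say, a hypothetical test_data.py) or be
# loaded from a file if it grows large.
SIN_CASES = [
    (0.0,       0.0),
    (math.pi/6, 0.5),
    (math.pi/2, 1.0),
]

def my_sin(x):
    # Stand-in for the mathematical function under test.
    return math.sin(x)

class SinTest(unittest.TestCase):
    def test_known_values(self):
        # One test method walks over the whole data table, so the
        # data and the checking logic stay separate.
        for x, expected in SIN_CASES:
            self.assertAlmostEqual(my_sin(x), expected, places=7)

if __name__ == "__main__":
    unittest.main()

The point is only that the data table and the checking code are
separate things, so the table can grow or move elsewhere without
touching the TestCase itself.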
Regards,
Dinu
--
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or
not. Otherwise you don't enjoy your work, you don't work well,
and the project goes down the drain."
(Kent Beck, "Extreme Programming Explained")