Unit testing data; What to test
aleaxit at yahoo.com
Thu Mar 1 17:29:48 CET 2001
"John J. Lee" <phrxy at csv.warwick.ac.uk> wrote in message
news:Pine.SOL.4.30.0102251735030.6152-100000 at mimosa.csv.warwick.ac.uk...
> Sorry about the formatting etc. of this post -- I had to cut and paste
> from Google.
My sympathy. I'll even forgive your not mentioning that
I'm the guy whose comments you're responding to:-).
> > "John J. Lee" <phrxy at csv.warwick.ac.uk> wrote in message
> > news:Pine.SOL.4.30.0102160309090.7729-100000 at mimosa.csv.warwick.ac.uk...
> > If you keep the (stimuli, expected_responses) sets separate from the
> > module being tested, you gain much the same benefits (and pay much
> > the same costs) as by keeping documentation separate from code for
> > other kinds of docs (think of tests as a form of executable docs...!).
> > Keeping some tests/docs together with the code simplifies distribution
> > issues: anybody who has your code also has this minimal set of tests
> > (and other docs).
> Well, that's not a problem, is it (unless the tests really are huge)?
> They can be in the distribution without being mixed in with the code.
They can be, sure -- just like other documentation can be in your
distribution without being mixed with your code. Again, I claim
the parallel is very strict. Maybe people are *supposed* to only
ever distribute your code in the way you packaged it up -- but,
what happens if, e.g., some automated dependency finder picks up
your wonderful bleeppa.py, and stashes it into allyouneed.zip
*without* the accompanying bleeppa.doc and bleeppa_tests.py files,
for example? Answer: whoever ends up rummaging some time later
in the unpacked allyouneed.zip WILL have your bleeppa.py but
none of the other files that you originally packaged with it.
So, if bleeppa.py itself contains some minimal/essential subset
of its own docs (as docstrings or comments) and unit-tests, you
are covering up for exactly such problems -- making your code
more usable when it happens to start going around without other
files that _should_ always accompany it.
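To sketch what I mean (the module and its names are entirely invented
for illustration, of course:-) -- a module that carries a minimal,
essential subset of its own tests stays checkable even when it travels
without its accompanying files:

```python
# bleeppa.py -- carries a minimal/essential subset of its own
# unit tests, so it remains testable even if some automated
# packager ships it WITHOUT bleeppa_tests.py and bleeppa.doc.
# (All names here are illustrative, not from any real package.)

def double(x):
    """Return x doubled (works for numbers and sequences)."""
    return x + x

def _test():
    # The minimal in-module tests; the extensive suite lives elsewhere.
    assert double(2) == 4
    assert double('ab') == 'abab'
    print('bleeppa: minimal self-tests passed')

if __name__ == '__main__':
    _test()
```

Whoever rummages in allyouneed.zip can at least run
`python bleeppa.py` and know the module is basically sane.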
> > However, in some cases, the total amount of tests
> > and other docs can easily swamp the amount of code -- and in this case
> > keeping them together can easily be thought of as "too costly". The
> > code can become practically unreadable if there are an order of magnitude
> > more lines of docs (including tests) than lines of actual source in a
> Agreed. My tests are separate from the code -- it hadn't occurred to me
> to keep them in the same file. It was the separation (or not) of test
> data and test code I was wondering about, though.
I guess I tend not to think of that because my typical 'test _code_'
IS separated from my typical 'test _data_' anyway -- Tim Peters'
doctest.py being the former:-). So, I don't think of merging test
code and test data, any more than I think of distributing, say, Word
together with my .doc files:-).
Or, if you look at the docstrings which doctest.py runs as "code"
rather than "data", then I guess the issue I have not faced (in
Python) is that of _separating_ the test-data from the code.
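A minimal sketch of that division of labour (the function is invented
for illustration): the (stimuli, expected_responses) pairs sit in the
docstring as test _data_, and doctest.py is the test _code_ that runs
them:

```python
# The docstring holds the test data -- interactive-session
# transcripts of stimuli and expected responses; doctest.py
# supplies all the test code that replays and checks them.

def fib(n):
    """Return the n-th Fibonacci number (0-indexed).

    >>> [fib(i) for i in range(6)]
    [0, 1, 1, 2, 3, 5]
    >>> fib(10)
    55
    """
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == '__main__':
    import doctest
    doctest.testmod()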
When I _do_ face it in other languages (in some database-centered
components: the DB on which the tests are to run is unconscionably
BIG, if you think of it in any textual form in which it might be
feasible to merge it with the relatively small amount of code that
needs it), the testcode/testdata separation happens to be well-nigh
inevitable. The code carries with it the SQL code (as data)
that it sends to the DBMS and the results it expects, but not the
DB itself -- so, it's not really feasible to test the component
(run its standard unit tests, I mean) if you only have its executable
part without the accompanying files (since the starting DB needed
for the tests is just such an accompanying file).
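The shape of such a test, roughly (file name and schema invented, and
sqlite3 standing in for whatever real DBMS is involved): the code
carries the SQL and the expected results, but the starting DB must
arrive as an accompanying file or the test simply cannot run:

```python
# The (query, expected_result) pairs live WITH the test code;
# the starting DB is an accompanying file -- here a SQL dump,
# testdata.sql -- without which the unit tests cannot run.
# (File name and schema are invented for illustration.)
import sqlite3

def run_unit_tests(sql_dump_path):
    db = sqlite3.connect(':memory:')
    with open(sql_dump_path) as f:
        db.executescript(f.read())   # load the accompanying test DB
    query = 'SELECT COUNT(*) FROM customers'
    expected = (42,)
    got = db.execute(query).fetchone()
    assert got == expected, (got, expected)
    print('component: DB-backed unit tests passed')
```

Ship the component's executable part alone, without testdata.sql,
and the standard unit tests are simply unrunnable.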
> > My favourite approach (and I do wish I was more systematical in
> > practising what I preach:-) is to try and strike a balance: keep with
> > the code (docstrings, comments, special-purpose test functions) a
> > reasonable minimal amount of tests (& other docs) -- indicatively,
> > roughly the same order of magnitude as the source code itself;
> > but _also_ have a separate set of docs (& tests) for more extensive
> > needs.
> This is all good advice, but it doesn't actually answer my question. ;-)
> Anyway, I think I will try having some tests in the code, sounds like a
> great idea considering the tests-as-docs aspect.
Serendipity, pure serendipity:-). I misread your question, or you
miswrote it (or 50-50 -- comes to the same thing in the end:-), yet
you still gain something useful from the resulting discussion...:-).
> > you release an application that's about 1,000 SLOC, it does not
> > really matter much if your process and approach are jumbled; at
> Let me assure you it _is_ possible to achieve an unmaintainable mess in
> ~1000 SLOC. I have personally achieved this in, let's see (C-x C-f
> munge.pl RET) ... 832 SLOC. This is one of many things that Perl makes
Oh, I'm sure I have beaten that, a LONG time ago back when I
wrote in APL -- but what I'm saying is that it's not really the _process_
that controls that, for a small-enough deliverable. Good taste,
experience, and some common sense on the coder's part may suffice,
with just-about-nonexistent-process, to keep things under control
_at this level of code-size_.
> easy (this is completely unfair to Perl of course -- what really makes it
> easy is starting out thinking 'this is going to be a 100 line script' and
> then extending by cut and paste -- I've learned my lesson, honest!).
> > 10,000 SLOC, it's a problem; at 100,000, an utter nightmare, so
> > you HAVE to be precise in defining your process then (or, call it
> Yeah, of course, I understand that. But there is still a place for lots
> of testing for smaller efforts, minus the heavyweight formal process.
Absolutely YES. Testing ain't a bad thing even if done informally!-)
> > 100, 1000, 10,000 FP -- but it's really about SLOC more than it
> > is about FP, which is why higher-level languages help so much).
> > Differently from what XP specifies, I think tests should be in two
> > categories (and, similarly, so should deliverables be, rather than
> > the just-one-deliverable which so characterizes XP -- that is a
> > tad too X for my conservative self, even though I buy into 70+%
> > of XP's tenets!).
> Which two categories? External and internal? How is this different from
> the XP unit test versus acceptance tests division? Is your point just
> that the external / internal division applies on more levels than just
> final user / everything else -- which is what you seem to be talking about
Basically, yes, and I apologize for the muddled expression. The point is
that most often I build components that no 'final' user will ever
notice (unless something goes badly wrong:-) -- call it 'middleware'
or whatever. But not-so-final users may need to get at parts of
them -- not just future maintainers/extenders, and current and future
re-users, but (at least when scripting is possible:-) current and
future _scripters_ (some 'power-users', 3rd party system integrators,
customer-support/application-engineers, ...). What's "internal" and
what's "external" varies depending on the target audience.
> here's a snip explaining what XP people mean by acceptance tests for
> anybody that hasn't read about it:
[snip, but the whole site IS a recommended read:-)]