Data-driven testing

Alex Martelli aleax at
Fri Apr 25 17:59:46 CEST 2003

Peter Hansen wrote:

> Alex Martelli wrote:
>> Peter Hansen wrote:
>> > throwaway than the ones I "know" are not.  If I'm wrong... no big deal:
>> > I rewrite as a real script with proper tests.  Since the script was
>> > only twenty or thirty lines of code (any more and it could not possibly
>> > be throwaway, right?) it isn't really a big deal.
>> Wrong.  "Throwaway" means "I [think I] am going to need to run this
>> only once".  Whether it's 30 or 60 lines has nothing to do with it.
> This is getting us nowhere.

Quite possibly.  You seem to be interested in the ability of a few
specific individuals to predict "throw-away-ness", which, to me, seems
quite a marginal issue.  I'm more interested in the general issue.

> I'm asserting that I have actually seen "throwaway" scripts, even
> predicted their imminent development, and subsequently have actually
> thrown them away.
> I'm not asserting I do this with 100% accuracy.  I'm asserting that
> I very infrequently do decide that a script will be throwaway, and
> that in those rare cases I often turn out to be right, and often
> enough that it seems worth continuing to make such predictions in
> order to save myself some time.  These I believe are facts about *me*.

I compliment you on your accuracy (assuming your perception of it
is accurate).  However, even if you, and/or Aahz, are in fact
exceptional individuals who are indeed able to make such specific
predictions reliably, I would still caution all readers against
drawing any conclusions from this, unless they have the good fortune
of working at your side and being able to benefit from your advice
about this specific matter.

I've met quite a few people who made specific claims about their
ability to predict short-term stock market movements, horse-race
results, whether a script is going to be thrown away, what parts of
a program never yet profiled are its bottlenecks, whether it will
rain next weekend, and other notoriously hard-to-predict events.

I have never been able to actually verify that any of these claims
could be substantiated.  I may suspect on general grounds that some
people DO have such unusual predictive abilities, but until and
unless I'm able to rely on specific advice by somebody proven to
be such an individual, I consider it wiser and more prudent to act
as if such predictions were as unreliable as they notoriously are
held to be in popular opinion.

So, I don't play the stock market nor bet on horse racing, etc.
And in fields in which I think I can teach and advise others, I
suggest to others that they behave similarly.  In programming, for
example, I suggest NOT relying on one's intuition about where the
bottlenecks will be, nor on that about what's throw-away code
and what's going to be used again.
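To make the bottleneck point concrete: the alternative to intuition is
measurement, e.g. with the standard library's cProfile module.  This is
only an illustrative sketch -- slow_sum is a made-up function, not
anything from the discussion above:

```python
# Measure instead of guessing: profile a function with the
# standard-library cProfile/pstats modules. slow_sum is illustrative.
import cProfile
import io
import pstats

def slow_sum(n):
    """Sum the first n integers the slow way, to give the profiler work."""
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100000)
profiler.disable()

# Print the five costliest calls, sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
print(result)
```

The profiler's output, not one's hunches, then says where the time
actually went.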

> You're asserting that you have never accurately predicted this, or
> at least not accurately enough to make prediction worthwhile, or
> at least that even when you predict it correctly you discount the
> prediction and proceed on the assumption that you are wrong.  (Or
> something like that... I'm unsure it's actually relevant to my point.)

If your point is strictly about YOUR predictive abilities, then any
assertion about MINE, or those of the general population, cannot be
relevant to it.

> Neither of us can assert anything about the other's real situation,
> nor presumably about Aahz' (though I admit I did... maybe I was wrong).
> I can't see any point in continuing the discussion, can you?

As long as the general readership has followed our assertions and
can draw its own conclusions, sure.  I highly recommend to that
readership to keep a healthy skepticism about such predictions and
always consider the possible costs of being wrong vs the costs of
"covering" for possibly wrong predictions.

"rm" has clearly minuscule benefits in most cases, and
the potential cost is obvious -- if you're wrong, and next week
need that same functionality again, you're going to have to write
and possibly debug it again.  Leaving your hard-disk strewn with
dusty old code, however, is quite sub-optimal too -- any single
given script matters little, but cumulatively they may make things
untidy indeed.  Keeping your "probably throwaway scripts" in their
own folder[s] is a good move -- and so is writing SOME docs about
what it IS that they do, even just a couple lines worth of docstring.

And that is just the start.  The *mindset* of "I _think_ this will
be run once - but just possibly it will be run again" leads to
some documenting, testing, and a modicum of care about structure,
which collectively make it *more* likely that some part of the
script, if not the whole, DOES then later prove worth reusing.
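A minimal sketch of that mindset: even a probably-throwaway script gets
a docstring, a named function, and a tiny self-test.  The task and all
the names here (csv_column_total, the sample data) are purely
illustrative, not from the discussion above:

```python
"""Sum one column of a small comma-separated text block.

Written as a one-off, but structured so the core function can be
imported and reused if the script outlives its first run.
"""

def csv_column_total(text, column=1):
    """Return the sum of the given column (0-based) across all lines."""
    total = 0.0
    for line in text.splitlines():
        fields = line.split(",")
        total += float(fields[column])
    return total

if __name__ == "__main__":
    # The self-test doubles as usage documentation.
    sample = "a,1.5\nb,2.5\nc,4.0"
    print(csv_column_total(sample))
```

Even a couple of lines of docstring like this can tell your future self
what the dusty script in the "throwaway" folder was for.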

