Project organization and import

Alex Martelli aleax at
Tue Mar 6 16:37:46 CET 2007

Jorge Godoy <jgodoy at> wrote:
> > My favorite way of working: add a test (or a limited set of tests) for
> > the new or changed feature, run it, check that it fails, change the
> > code, rerun the test, check that the test now runs, rerun all tests to
> > see that nothing broke, add and run more tests to make sure the new code
> > is excellently covered, rinse, repeat.  Occasionally, to ensure the code
> > stays clean, stop to refactor, rerunning tests as I go.
> I believe this is a distinct case.  When we write tests we're worried with the
> system itself. 

Not sure I get what you mean; when I write tests, just as when I write
production code, I'm focused (not worried:-) on the application
functionality I'm supposed to deliver.  The language mostly "gets out of
my way" -- that's why I like Python, after all:-).

> When using the interactive interpreter we're worried with how
> to best use the language.  There might be some feature of the system related
> to that investigation, but there might be not.  For example: "what are the
> methods provided by this object?" or "which approach is faster for this loop?"

I do generally keep an interactive interpreter running in its own
window, and help and dir are probably the functions I call most often
there.  If I need to microbenchmark for speed, I use timeit (which I
find far handier to use from the command line).  I wouldn't frame this
as "worried with how to best use the language", though; it's more akin
to a handy reference manual (I also keep a copy of the Nutshell handy
for exactly the same reason -- some things are best looked up on paper).
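To make the above concrete, here is a small sketch of that kind of
interpreter session -- dir answering "what are the methods provided by
this object?" and the timeit module answering "which approach is faster
for this loop?".  The object and statement being timed are just
illustrative choices, not anything from the discussion above:

```python
import timeit

# "what are the methods provided by this object?" -- dir() answers that;
# filtering out the underscore-prefixed names keeps the listing readable:
methods = [name for name in dir(list) if not name.startswith('_')]
print(methods)

# help(list.append) would then show the docstring for any of them.

# "which approach is faster for this loop?" -- timeit answers that,
# running the statement many times and reporting total elapsed seconds:
elapsed = timeit.timeit('[i * 2 for i in range(100)]', number=10000)
print(elapsed)
```

The command-line form Martelli prefers does the same job with even less
ceremony, picking a sensible repetition count automatically, e.g.:
`python -m timeit "[i * 2 for i in range(100)]"`.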

> I won't write a test case to test loop speed.  But I'd poke with the
> interpreter and if the environment gets a bit big to setup then I'd go to the
> text editor as I said. 

I don't really see "getting a bit big to setup" as the motivation for
writing automated, repeatable tests (including load-tests, if speed is
such a hot topic in your case); rather, the key issue is, will you ever
want to run this again?  For example, say you want to check the relative
speeds of approaches A and B -- if you do that in a way that's not
automated and repeatable (i.e., not by writing scripts), then you'll
have to repeat those manual operations exactly every time you refactor
your code, upgrade Python or your OS or some library, switch to another
system (HW or SW), etc, etc.  Even if it's only three or four steps, who
needs the aggravation?  Almost anything worth doing (in the realm of
testing, measuring and variously characterizing software, at least) is
worth automating, to avoid any need for repeated manual labor; that's
how you get real productivity, by doing ever less work yourself and
pushing ever more work down to your computer.
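A minimal sketch of such an automated, repeatable comparison -- two
hypothetical approaches A and B to the same task (the functions here are
invented for illustration), checked for equivalence and then timed with
timeit, so the whole measurement can be rerun unchanged after any
refactor, Python upgrade, or change of machine:

```python
import timeit

def approach_a(data):
    # Approach A: build the result list with an explicit loop.
    result = []
    for x in data:
        result.append(x * x)
    return result

def approach_b(data):
    # Approach B: build the same list with a list comprehension.
    return [x * x for x in data]

if __name__ == '__main__':
    data = list(range(1000))
    # First make sure both approaches actually agree on the results;
    # a fast wrong answer is no answer at all.
    assert approach_a(data) == approach_b(data)
    for name, func in [('A (loop)', approach_a), ('B (listcomp)', approach_b)]:
        # timeit.repeat runs several independent trials; the minimum is
        # the conventional figure, being least disturbed by system noise.
        best = min(timeit.repeat(lambda: func(data), number=1000, repeat=3))
        print('%s: %.4f s for 1000 runs' % (name, best))
```

Saved as a script, this is exactly the "automated and repeatable" form
argued for above: rerunning it after any environment change is one
command, not a sequence of manual steps to reconstruct from memory.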
