PEP 3107 and stronger typing (note: probably a newbie question)

Paul Rubin
Tue Jul 3 08:15:53 CEST 2007

aleax at (Alex Martelli) writes:
> I do have (some of:-) that experience, and am reasonably at ease in
> Haskell (except when it comes to coding monads, which I confess I still
> have trouble wrapping my head around).

I recently found the article

which had (for me anyway) a far more comprehensible explanation of
monads than anything I ever saw about Haskell programming.  Don't be
scared by the title--the article is very readable.  It's much clearer
than many Haskell tutorials I've seen, whose monad explanations simply
wouldn't fit in my mind, with weird analogies about robots handling
nuclear waste in containers moving around on conveyor belts, etc.  I
think right now I could write an explanation of monads understandable
by any well-versed Python programmer (it would be based on showing the
parallels between Python's list type and Haskell's List monad, then
moving onto sequencing with the Maybe monad, instead of trying to
start by explaining Haskell I/O), so maybe I'll try to do that.
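To give a flavor of what I mean, here's a rough sketch (my own, not from any tutorial) of the list/Maybe parallel in plain Python -- "unit" and "bind" are the two monad operations, and the list version is just a flattened list comprehension:

```python
# List monad sketch: unit wraps a value in a list; bind maps a
# list-producing function over a list and flattens the result.
def unit(x):
    return [x]

def bind(xs, f):
    return [y for x in xs for y in f(x)]

# Equivalent to [x + y for x in [1, 2] for y in [10, 20]]
pairs = bind([1, 2], lambda x: bind([10, 20], lambda y: unit(x + y)))
# pairs == [11, 21, 12, 22]

# Maybe-style sequencing: None short-circuits the rest of the chain.
def maybe_bind(x, f):
    return None if x is None else f(x)

result = maybe_bind("42", lambda s: maybe_bind(int(s), lambda n: n + 1))
# result == 43, but a None anywhere in the chain yields None overall
```

The point is that Python programmers already use both patterns informally; the monad abstraction just names them.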

> Eckel's and Martin's well-known essays on why good testing can replace
> strict static type checking:
> <>
> <>

Those are pretty unpersuasive since they're based on Java and C++.
Eckel's example would have been much easier in Haskell, I think.

Did you look at Tim Sweeney's presentation "The Next Mainstream
Programming Language: A Game Developer's Perspective"?  I wonder what
you thought of it.  It's here:

  Powerpoint version:

  PDF version:

He advocates a mixed functional/procedural language with even fancier
static types than Haskell.

> Me, I'm always worried about significant code revision _unless I have
> good tests in place_.  Strict static typechecks catch only a small
> subset of frequent mistakes -- good tests catch a far higher quota.

It seems to me that the typecheck is sort of like a test that the
compiler provides for you automatically.  You still have to write
tests of your own, but not as many.  Also, Python-style unit tests
usually are written for fairly small pieces of code.  They usually
don't manage to cover every way that data can flow through a program.
E.g. your program might pass its test and run properly for years
before some weird piece of input data causes some regexp to not quite
work.  It then hands None to some function that expects a string, and
the program crashes with a runtime type error.  Static typechecking
makes sure that a function expecting a string can never ever receive None.
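The failure mode I mean looks something like this (a made-up example with hypothetical function names, using PEP 3107-style annotations to mark the intended contract):

```python
import re

def extract_name(line):
    """Pull the name out of a 'name=...' line."""
    m = re.match(r"name=(\w+)", line)
    return m.group(1) if m else None   # None slips out on odd input

def shout(s: str) -> str:             # annotation says: expects a str
    return s.upper()

shout(extract_name("name=alice"))      # fine
# shout(extract_name("garbage"))       # runtime TypeError: None has no .upper()
```

A static checker reading those annotations would flag the second call before the program ever ran, instead of years later on the one weird input.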

I also sometimes have trouble figuring out how to write useful tests,
though maybe there's some understanding that I haven't yet grokked.
However, a lot of my programs crawl the filesystem, retrieve things
over the web, etc.  It's hard to make any meaningful tests
self-contained.  Last week I needed to write a 20-line Python function
that used os.walk to find all filenames of a certain format in a
certain directory tree and bundle them up in a particular way.  The
main debugging hassle was properly understanding the output format of
os.walk.  A static type signature for os.walk, checked against my
program, would have taken care of that immediately.  Once I got the
function to work, I deployed it without writing permanent tests for
it.  An actual self-contained test would have required making some
subdirectories with the right layout before running the function.
Writing a test script that really did that would have taken 10x as
long as writing the function took, and such a complex test would have
needed its own tests (I'd have probably used something like dejagnu).
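For what it's worth, with a temporary-directory helper the fixture setup can be less painful than I feared -- here's a sketch, with a stand-in find_files in place of my actual 20-line function:

```python
# Self-contained test sketch for an os.walk-based function.
import os
import tempfile

def find_files(root, suffix):
    """Stand-in for the real function: collect paths under root
    ending in suffix.  os.walk yields (dirpath, dirnames, filenames)."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(suffix):
                found.append(os.path.join(dirpath, name))
    return found

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "a", "b"))
    for rel in ("a/x.txt", "a/b/y.txt", "a/b/z.log"):
        open(os.path.join(root, *rel.split("/")), "w").close()
    hits = find_files(root, ".txt")
    assert sorted(os.path.basename(p) for p in hits) == ["x.txt", "y.txt"]
```

That's still more scaffolding than the function itself, but nowhere near 10x, and the directory tree is cleaned up automatically.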

For functions that simply transform data, I've been enjoying using
doctest, but for functions like the above os.walk thing, I'm not sure
what to do.  I'd be happy to receive advice, of course.
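For the data-transforming case, the kind of doctest I mean is just an example session in the docstring (bundle here is a hypothetical name, not my actual function):

```python
def bundle(names):
    """Group filenames by extension.

    >>> bundle(['a.txt', 'b.log', 'c.txt'])
    {'txt': ['a.txt', 'c.txt'], 'log': ['b.log']}
    """
    groups = {}
    for n in names:
        groups.setdefault(n.rsplit('.', 1)[-1], []).append(n)
    return groups

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

The test doubles as documentation, which is why I've been enjoying it for pure functions; it's the filesystem-crawling ones that don't fit the pattern.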

> Typechecks do catch some mistakes "faster", but, in my experience on
> today's machines, that tiny difference is becoming insignificant,
> particularly when you consider that typechecks typically require
> whole-program analysis while, as Van Roy and Haridi point out, dynamic
> typing affords "totally open coding".

I don't know if this holds in Haskell, but at least in C the compiler
can usually find several errors in one pass, while dynamic typing
often means running the program, watching some function crash from
getting None instead of a string, figuring out and fixing the error,
running again, hitting a similar crash in a different place, and
repeating the whole cycle several times.

Lately I've been looking at a somewhat highbrow book on programming
language theory:

I don't understand that much of it, but the parts I can make any sense
of are giving me a better picture of what PL designers these days
think about.  It's really nothing like how it was in the 1970's.

More information about the Python-list mailing list