Is there a "Large Scale Python Software Design" ?
aleaxit at yahoo.com
Tue Oct 19 21:55:20 CEST 2004
Dave Brueck <dave at pythonapocrypha.com> wrote:
> Andreas Kostyrka wrote:
> > On Tue, Oct 19, 2004 at 07:16:01AM -0700, Jonathan Ellis wrote:
> >>Testing is good; preventing entire classes of errors from ever
> >>happening at all is better, particularly when you get large. Avoiding
> >>connectedness helps, but that's not always possible.
> > What classes of errors are completely avoided by "static typing" as
> > implemented by C++ (Java)?
C++'s casting power makes this a bit moot -- I have seen generally-good
developers (not quite comfy with C++, from a mostly-Fortran then a
little C background) mangle poor innocent rvalues (and even lvalues,
BION, with ample supplies of & and * to help) with such overpowering
hits of reinterpret_cast<> that I'm still queasy to think of it years
later. Java is mercifully a bit less powerful, but of course _its_
casts are generally runtime-checked. So, when one sees:
WhatAWonderfulWord w = (WhatAWonderfulWord) v;
one _IS_ admittedly inclined to think that the "class of error being
completely avoided" is "erroneous omission of a cast that plays no
useful role at all and is going to be checked only at runtime anyway".
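To restate that point in Python terms (my own sketch, not from the
original exchange; `WhatAWonderfulWord` comes from the example above,
while `checked_cast` is an invented helper name): a Java-style
downcast only verifies the type at runtime and hands the object back,
which is exactly the kind of check a dynamic language's runtime
performs anyway at the point of use.

```python
class WhatAWonderfulWord:
    def shout(self):
        return "wonderful!"

def checked_cast(cls, obj):
    # Mimic a Java-style runtime-checked cast: verify the type, then
    # return the object unchanged.  No safety is added beyond the
    # runtime check itself -- which Python would perform at the point
    # of use regardless.
    if not isinstance(obj, cls):
        raise TypeError(
            "cannot cast %s to %s" % (type(obj).__name__, cls.__name__))
    return obj

v = WhatAWonderfulWord()
w = checked_cast(WhatAWonderfulWord, v)   # succeeds at runtime
print(w.shout())

try:
    checked_cast(WhatAWonderfulWord, "just a string")
except TypeError as e:
    print("caught:", e)                   # the "cast" fails at runtime
```

So the written-out cast is pure ceremony: the erroneous omission it
"prevents" would have been caught by the very same runtime check.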
However, there _are_ tiny but undeniable advantages to static typing:
1. some typos are caught at compiletime, rather than 2 seconds later by
unit tests -- 2 seconds ain't much, but it ain't 0 either;
2. simple-minded tools have an easier time offering such editing
services as "auto-completion", which may save a little typing;
3. simple-minded compilers have an easier time producing halfway
decent code, and the like. None of these deals with "classes of errors
completely avoided"
unless one thinks of unittests as an optional add-on and of compilers as
a mandatory must-have, which is wrong -- the point Robert Martin makes
excellently in his artima article about the wonders of dynamic typing of
a bit more than a year ago (dynamic typing is wonderful _with_ unit
testing, but then unit testing is an absolute must anyway, to ensure
correctness in any case).
> I'm curious as well, because from what I've seen, the classes of errors
> "caught" are (1) a subset of the higher-level (e.g. algorithmic and
> corner-case) errors caught by good testing anyway,
> (2) much more common in code written by
> lazy/underexperienced developers who are already considered a liability,
No, I think you're wrong here. Typos are just as frequent for just
about all classes of coders, lazy or eager, experienced or not -- the
eager experienced ones often type faster (nothing to do with static
types, of course;-) and so get their share of typos too.
> and (3)
> caused in part by complexities introduced by the language itself*.
Yes, a fair cop. E.g., a typo in one of those redundant mentions of a
type or interface, seen above, is an error introduced only because I'm
required to type the GD thing twice over (though autocompletion may save
me some keystrokes;-).
> More modern/advanced static type systems that let you actually get into
> the semantics of the program (as opposed to just deciding which predefined
> type bucket your data fits in) may help, but IMO the jury's still out on
> them (partly due to complexity, and partly due to _when_ in the
> development process they must be defined - perhaps that's the root problem
> of some static type systems - they make you declare intent and semantics
> when you know the _least_ about them! Consider the parallels to available
> knowledge in compile-time versus run-time optimizations).
If you mean typesystems such as Haskell's or ML's, allowing extended
inference (and, in Haskell's case, the wonder of typeclasses), I think
you're being a bit unfair here. You can refactor your types and
typeclasses just as much as any other part of your code, so the "when
they must be defined" seems a bit of a red herring to me (unless you
have in mind other more advanced typesystems yet, in which case I'd like
some URL to read up on them -- TIA).
I think we agree at 95% to 99%, btw -- I admit I'm just picking nits...