does lack of type declarations make Python unsafe?

David Abrahams david.abrahams at rcn.com
Sun Jun 29 11:30:43 EDT 2003


[Reposting; Apparently this never made it to the list]

Alex Martelli <aleax at aleax.it> writes:

> <posted & mailed>
>
> David Abrahams wrote:
>    ...
>>> Sometimes X might perhaps be of the form "a is of type B", but
>>> that's really a very rare and specific case.  Much more often it
>>> will be "container c is non-empty", "sequence d is sorted", "either
>>> x<y or pred(a[z]) for some x>=z>=y", and so on, and so forth.
>> 
>> Those assertions are *chock full* of type information:
>
> Rather, they IMPLY what you choose to call "type information" 

I'm not making this notion up; it is well-accepted in some circles.
Let me quote from Benjamin C. Pierce, _Types and Programming
Languages_:

    As with many terms shared by large communities it is difficult to
    define "type system" in a way that covers its informal usage by
    programming language designers and implementors but is still
    specific enough to have any bite.  One plausible definition is
    this:

       A type system is a tractable syntactic method for proving the
       absence of certain program behaviors by classifying phrases
       according to the kinds of values they compute.

You'll probably argue with most of that definition, but that's what
I'm talking about, and that's why "container" is type information:
having knowledge that c is a container classifies some of the phrases
involving c.

> (which has little to do with the typesystem of, say, C++ or Java)

Which are, respectively, a mess and uninteresting.

> as well as other information (which isn't sensibly mappable to
> "type" in any nontrivial way).  E.g.:
>
>>       the type of c is a container
>
> Rather, "c is a container".  Choosing to express "c is a container"
> as "the type of c is the type of a container" is just the kind of
> useless, redundant boilerplate that negatively impacts productivity.

LOL!

I wrote "c is a container" first, but went back and changed it
because I thought you would criticize the way I phrased it.

> And in neither C++ nor Java can you express "c is a container",
> either directly or indirectly, except in comments -- the
> type-systems of either language (the two most-widespread languages
> today that use compiletime typing) just don't match the concepts
> that you're trying to smuggle in as "type information" .  

Actually, as you suggest below, you can do it in C++ using library
constructs, but it is not really part of the language's design.

> In _Haskell_ you might collect the characteristics that define a
> type's containerhood as an appropriate typeclass (*NOT* type!); in
> Java you're simply out of luck; in C++ I guess you (you personally,
> as a template-wizard, but not 99%+ of C++'s users;-) might possibly
> define a template that "by convention" (not in a language-supported
> way as in Haskell) lets you assert containerhood -- "informally",
> albeit reasonably effectively (better than a comment...).

I think it comes a lot closer to being fully-featured than that; in
fact you can remove functions from the overload set or disable
template specializations based on constraints.  But like all template
metaprogramming tricks in C++, at its core it's a hack.  Mostly we
rely on concept definitions.
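
To make that concrete, here's a rough C++ sketch of the kind of trick
I mean: an overload that silently drops out of the overload set when
its argument doesn't satisfy a "containerhood" constraint.  The names
(enable_if_, has_size_type, count_items) are made up for this sketch;
they're not library facilities, just the hand-rolled, convention-based
machinery I was describing:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Hand-rolled enable_if: the nested type only exists when B is true.
    template <bool B, class T = void> struct enable_if_ { typedef T type; };
    template <class T> struct enable_if_<false, T> {};

    // Detect "containerhood" by convention: does T have a nested size_type?
    template <class T> struct has_size_type {
        typedef char yes;
        typedef char (&no)[2];
        template <class U> static yes test(typename U::size_type*);
        template <class U> static no  test(...);
        static const bool value = sizeof(test<T>(0)) == sizeof(yes);
    };

    // This overload drops out of the overload set unless T "looks like"
    // a container, so count_items(42) is a compile-time error.
    template <class T>
    typename enable_if_<has_size_type<T>::value, std::size_t>::type
    count_items(T const& c) { return c.size(); }

    int main() {
        std::vector<int> v(3);
        std::cout << count_items(v) << "\n";  // fine: vector has size_type
        // count_items(42);                   // error: no matching function
    }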

> In Python one generally identifies (just as informally) a container as
> "an object which has a length" (using "length", perhaps a suboptimal
> choice of wording, to mean "number of items currently contained") and
> simultaneously express both 'c is a container' and 'that container is
> not empty' by
>
>     assert len(c)

That understanding of "containerness" is, AFAICT, not universally held
among Pythonistas by any means.  Good thing too, probably: it looks
like there is no such thing as an empty container:

     >>> assert len([])
     Traceback ...

In any case, this is *way* more informal than the equivalent concept
definition would be for C++.  We would want to say not only something
about the type of the return value (e.g. an unsigned integral type),
but also its relationship to other invariants of the type being
tested.

> This will be checked at runtime rather than at compile-time -- but
> the concept that there's something utterly precious in being able to
> check "function len can be meaningfully called with c as an argument"
> prematurely with respect to "the result of the call will be non-0" is
> pretty bogus.  Python checks both at once and raises TypeError or
> AssertionError if either half of the assertion is invalid, period.

Why bother with the assertion anyway?  This appears to be LBYL
(look before you leap).

> There may be worth in making this notion of "protocol" more formal
> (desperately trying to avoid nouns which have been hijacked for many
> vaguely related meanings in the past, such as "type", "interface",
> "category" -- C++ uses "concept" similarly, I believe, in the generic
> parts of the language and library).  

Yes.

> I think there is -- PEP 246 being a (rather defect-laden) early
> attempt, PyProtocols a rather nicer and newer one (with enhancements
> wrt PEP 246 but broadly consonant with it).  But IMHO that worth is
> connected to the concept of *adapting* an object to a protocol, far
> more than on insurances about an object "already conforming" to a
> protocol without adaptation (the concept of conformance can even be
> modeled as a limit case of that of adaptation, though that's
> admittedly silly hairsplitting;-).

I think you're very focused on checking and not giving enough
attention to the following questions:

    "what do I need to know to write a new component that works (and
     will continue to work) with this library"?

    "I have to fix a bug in this function; how can I tell what it
    expects (and has advertised as an expectation) of the code that is
    calling it, so that I know my modifications don't break clients?"

>> Type declarations don't have to identify concrete types; they can
>> identify concepts, constraints, and relationships.
>
> In some theoretical world, perhaps.  In practice, again, that's
> bogus.  

That's a pretty strong statement.  IIUC Haskell supports exactly the
sort of thing I mean with type classes.  A quick googling finds this
among many other examples: http://tinyurl.com/eptl.  I'm pretty sure
that OCaml has something like that too, but I'm less confident of
that.

> You can't meaningfully have a type declaration for "an
> odd integer not divisible by 3" in C++ nor in Java.  

Are you kidding?  I sure can do that in C++.
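
At least in the sense I mean: wrap the constraint up in a class whose
invariant is established once, at construction, so that every
signature mentioning the type states the constraint.  A rough sketch
(odd_non3 and frob are names invented here, and the check is a
run-time class invariant, not a compile-time proof):

    #include <cassert>

    // Invariant: the stored int is odd and not divisible by 3.
    class odd_non3 {
        int v_;
    public:
        explicit odd_non3(int v) : v_(v) {
            assert(v % 2 != 0 && v % 3 != 0);  // establish the invariant once
        }
        int value() const { return v_; }
    };

    // The parameter type states the constraint right in the signature;
    // callers can't get an even number or a multiple of 3 in here
    // without going through the checked constructor above.
    int frob(odd_non3 x) { return x.value() * 2; }

    int main() {
        odd_non3 n(7);       // ok
        // odd_non3 bad(9);  // would fire the assert at run time
        (void)frob(n);
    }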

> The vague theoretical possibility that a be-all, end-all future
> language would let you capture "constraints and relationships" this
> way is being used strictly as a soft-soap for justifying the use of
> languages whose actual typesystems are enormously different, and
> strongly focused on implementation issues.  

Maybe by some; not by me.

> I'd be quite willing to fight against the hypothetical "perfect
> static typesystem" as "nearly as useless" if a working prototype of
> it was presented, but it's really a waste of time.  Let's talk about
> the reality of C++ and Java instead, shall we?  

No thanks.  Why limit ourselves to the theoretically-unsound and the
bland?

> The theoretical problem with types embedding all sorts of
> "constraints and relationships" is that as soon as you try to
> meaningfully operate on them your compiler's possibilities to ensure
> compiletime type safety tend to disappear (which is why languages
> with a strong theoretical basis in type theory eschew that route).

They may "tend" to disappear, but it isn't a foregone conclusion.
See Haskell for example, or http://sourceforge.net/projects/felix/,
another interesting language.

> Given type ZT as "odd integer not divisible by 3", in theory you
> might infer that the sum of ZT's can't possibly be a ZT, the product
> must be, but what about, say, the division-with-truncation?  And
> who's going to draw all the deductions about the various arithmetic
> operation restricted to ZT down from Z, etc, so that when the
> forbidden prime factors in ZT are 2 and 3 you can draw certain
> deductions but when they're 3 and 5 instead you can't any more (the
> sum of two of THOSE might still be valid, etc, etc)?

I agree that those things are hard to deduce.  I also think they are
probably not worth the trouble to try to handle at compile-time.

>>> Type declarations would have extraordinary "explanatory power" if
>>> and only if "a is of type B" was extraordinarily more important than
>>> the other kinds of assertions, and it just isn't -- even though, by
>>> squinting just right, you may end up seeing it that way by a sort of
>>> "Stockholm Syndrome" applied to the constraints your compiler forces
>>> upon you.
>> 
>> Suggesting that I've grown to love my shackles is a little bit
>> insulting.  I have done significant programming in Python where I
>> didn't have static typing; I've gotten over my initial reactions to
>> the lack of static checks and grown comfortable with the language.
>> Purely dynamic typing works fine for a while.  I have seen real
>> problems develop in my code that static type checking would have
>> prevented.
>
> And (e.g.) Robert Martin has not.  Now, one can explain this in
> many ways -- perhaps Uncle Bob and you are using different approaches
> to developing your programs, and his (test-driven development, these
> days) avoids the "real problems" that develop in your code; 

I *always* do TDD, even in C++.

> or perhaps he's unable to see what you're able to see so clearly.

Maybe.

> Any hypothesis with explanatory power about this cannot fail to be
> offensive to either you or him, if one wants to take it as
> insulting.  

I don't want to.  People's experiences differ and that's all I need
to explain the fact that we may have come to different conclusions.

> Since my experience matches Uncle Bob's quite closely, and my
> reflections and musings on the matter end up supporting this side of
> the argument just as well as my practice does, I think I can be
> justified in expressing my opinion on the subject.  

Sure.

> Human beings are extremely good at rationalizing their experiences
> (and their cognitive dissonances): I have experienced this
> personally, too.  If it's insulting to say that somebody's
> exhibiting a typically human trait about it, then I can give a long
> list of cases which would be insulting against myself.  

It's only a little bit insulting.  I have given significant thought
and reflection to this issue.  

I *could* say that the heat and length of your response are an
indication that my posting has threatened something which you hold as
a religious belief without justification.  I was not going to do that
because I know you to be thoughtful and experienced, and I think that
would insult your intelligence.

> For example, as a fervid Pascal user I was extremely keen on
> range-types, declaring all over the place variables that were
> "integer from -3 to 17 included" and the like, and considered this a
> great boon of the typesystem.  It took quite a while to realize that
> these were not in fact statically checkable constraints except in
> the most trivial of cases, and that moving to (e.g.) C or C++ and
> losing the ability to express this "exquisite informational power"
> didn't give me any real practical problem and in fact saved me the
> time wasted in trying to pinpoint each variable's range in the first
> place.  A few "assert(x>=-3)" *where it MATTERED* worked far, far
> better than some silly type declaration way up there at the start
> about "x will never be less than -3" -- the documentation was more
> accessible to the reader, the occurrence of checking more explicit
> and thus obvious.  Etc, etc.

I guess I was never so naive: giving up Pascal ranges didn't bother
me for more than about a second.

>> It's not an illusion that static types help (a lot) with certain
>> things.  The type information is at least half of what you've written
>> in each of those assertions.  I use runtime assertions, too, though
>> often I use type invariants to constrain the state of things --
>> because it makes reasoning about my code *much* easier.  An
>> ultra-simple case: it's great to be able to use an unsigned type and
>> not have to think about asserting x >= 0 everywhere.
>
> I would be ecstatically glad to see a nice fight, no holds barred,
> between a proponent of this latest sub-thesis and a designer of
> statically typed languages which do NOT support the concept of
> "unsigned type".  In Java, "unsigned" means "without a signature"
> (thus, a risky applet;-).  But I think that the issue may be taken
> as a good example of the above, wider thesis.  *IF* having an "x
> that's always >= 0" *WAS* indeed such a precious concept, "makes
> reasoning about your code *much* easier", 

You're taking my statement the wrong way.  Class invariants plus
static typing make reasoning about code much easier.  Having
unsigned types is only a small contributor to that (as I said, an
ultra-simple case).

> etc etc, then *why* has nobody ever widened that concept to
> *FLOATING POINT* types for x?  Where's the "unsigned float" in the
> next C++ release...?

Probably, among other things, because that constraint is much less
useful for floating types.  I think you'll see assert(f >= 0) much
less often than unsigned types are used.

> In practice, as I'm sure you know, unsigned types in C/C++ are
> tricky indeed (that's why Java removed them -- they deemed them
> too tricky to use reliably by ordinary programmers).  

Because they're low-level and don't include underflow checks.

> E.g. cfr http://sandbox.mc.edu/~bennet/cs220/c_code/uns_c.html and
> the like.  They were born from *IMPLEMENTATION* considerations,
> historically, which only apply to integral types.  

Yep.

> The "makes reasoning easier" justification is a post-facto
> rationalization, belied by the glaring absence of the same facility
> where implementation would not gain, as in floating-point numbers.

Maybe.  I just picked the simplest example at hand where type
information is useful for reasoning.  There are lots of better ones.

>>> Types are about implementation
>> 
>> No, they're about a relationship between an interface and semantics.
>> Most people leave out the semantics part when thinking about them,
>> though.
>
> In practice, the typesystem of languages such as C++ and Java is
> mostly about implementation.

No argument.  One has to work a little too hard to make them about
what types should be about.

> "The semantics" is simply left out of these languages' typesystems
> -- relegated to comments (which might just as well be there in
> Python or any other language and aren't going to be compiler-checked
> anyway).  The famous exception to this is Eiffel with its notion of
> contract -- but notice that contracts are checked, if at all, *AT
> RUNTIME* -- thus, claiming they're part of COMPILE-TIME notions of
> typing is a falsification.

I think you misunderstand.  I don't mean to enforce semantics.  I
trust people to make types whose semantics match the ones documented
to be associated with the type constraints.  In other words, if you
say, in the language:

     type X is a Container

and "Container" has some defined semantics (e.g. if len(c) > 0,
c[len(c)-1] doesn't raise an exception), I don't expect the compiler
to check that.  That relationship of semantics to interface is
captured in one place, when it is asserted that X is a Container.
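
One way to picture that in practice -- a rough C++ sketch, with a
made-up name (check_container_semantics): the documented Container
semantics get exercised by a single generic test, applied once to
each type that is asserted to be a Container, rather than being
re-checked at every use site:

    #include <vector>

    // The one place where "Container" semantics get spelled out and
    // exercised: if c.size() > 0, indexing the last element is valid.
    template <class C>
    void check_container_semantics(C c) {
        if (c.size() > 0) {
            (void)c[c.size() - 1];  // must be a valid access, not a crash
        }
    }

    int main() {
        std::vector<int> v(5, 7);
        check_container_semantics(v);  // checked here, once per type
    }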

> The notion of "type" is kept fuzzy (deliberately or not) so that
> when arguing about typing one can freely swing between incompatible
> aspects, claiming compile-time checkability on one side while
> handwaving about semantics (contracts) that are run-time checks,

No, I'm not even suggesting the use of run-time checks for these
things, except possibly in the implementation of X itself.

> claiming "modern notions of types" while in fact talking about
> languages such as C++ and Java which support mostly implementation
> oriented type issues.

See, not only do you mistake what I'm talking about, but you persist
in treating me like I am double-dealing.  I am not "in fact talking
about C++" (and certainly not Java, horrors!)

>>> and one should "program to an interface, not to an implementation"
>>> -- therefore, "a is of type B" is rarely what one SHOULD be focusing
>>> on.
>> 
>> In a modern type system, "a is of type B" mostly expresses an
>> interface for a and says nothing about implementation per se.  It does
>
> Is C++'s type system modern?  

No!  It's fairly recent, but that's not the same thing ;-)

> How does "a is an int" or "a is unsigned" ``say nothing about
> implementation per se''??!!

I'm talking about "a is Integral" (see http://tinyurl.com/eptl again,
the diagram in particular).

> "a is of some type that supports interface X" is very often a
> RUNTIME notion (expressed in Java by a "(X)a" cast, in C++ by
> dynamic_cast),  not necessarily a statically checkable one.  

You could do that, but I don't see why you're mentioning it.  I would
*never* use dynamic_cast for that in C++.  That can be completely
represented with a statically-checkable combination of runtime and
static polymorphism.
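
Roughly what I have in mind -- a sketch only, with made-up names
(Fooable, Widget, use1, use2): either take the parameter as a
reference to the interface, or let a template settle the question at
the call site.  In neither case is there a dynamic_cast or a run-time
conformance test:

    #include <iostream>

    // The interface, expressed as an abstract base.
    struct Fooable {
        virtual void foo() = 0;
        virtual ~Fooable() {}
    };

    struct Widget : Fooable {
        void foo() { std::cout << "Widget::foo\n"; }
    };

    // Runtime polymorphism, statically checked: you cannot call this
    // with something the compiler doesn't already know to be a Fooable.
    void use1(Fooable& x) { x.foo(); }

    // Static polymorphism: "x supports foo" is settled at compile time
    // by whether x.foo() is well-formed for T.
    template <class T>
    void use2(T& x) { x.foo(); }

    int main() {
        Widget w;
        use1(w);  // conformance checked at the call site by the compiler
        use2(w);  // ditto, via the template
    }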

> The part that's sure to be statically checked (because it lets the
> compiler generate faster code!-) is the *actual* typing -- the
> implementation-related part thereof.
>
>> say something about the effects of using that interface, though that
>> part is harder to formalize.
>
> Eiffel formalizes it -- with RUNTIME checks, of course:-).  Languages
> that don't formalize it in any way cannot sensibly claim to have that
> notion in their typesystems, whether the latter are claimed to be
> "modern" or not.

A language which does no static checking makes it a lot harder to
maintain even an informal relationship.

>> However, with a nod to "practicality beats purity:"
>> 
>>      People don't usually think in these abstract terms about most
>>      of their code, and rigorously documenting code in terms of
>>      interface requirements is really difficult, so most people
>>      never do it.
>
> This could also be expressed as: rigorously documenting code (in
> just about any terms) has pretty low productivity-returns, compared
> to less rigorous and formal documentation that is runtime-checkable
> (contracts in Eiffel, assertions in Java/C++/Python).  Quite
> sensibly, people focus on programming practices that DO have good
> productivity returns.

Most people don't write libraries.  I do.  Libraries without rigorous
documentation of the requirements on their parameters suck.  Writing
sucky software is no fun.

>>      It's a poor investment anyway because *most* (not all) Python
>>      code is never used generically: common interfaces for polymorphic
>>      behavior are generally captured in base classes and a great deal
>>      of code is just operating on concrete types anyway.
>
> "Common interfaces for polymorphic behavior" are *almost never*
> "captured in base classes" -- "file-like objects" are a typical
> example of this, or, if you want a more focused one, consider
> "iterable objects".

I know about those cases, which is why I said "not all"; but if it
were really *almost never*, inheritance wouldn't be so important in
Python.  In all the Python frameworks I've seen where one person has
defined two or more classes which have the same interface, they share
a common base class.  When new people define components to fit the
same polymorphic slot, they tend to inherit from that base as well.

Furthermore, I've seen *lots* of places where code is documented as
accepting a "sequence" when in fact if you pass a tuple it breaks.

>>      The result is that there are usually no expression of interface
>>      requirements at all in a function's interface/docs, where in the
>>      *vast* majority of cases a simple (non-generic) type declaration
>>      would've done the trick.  [Without the expression of interface
>>      requirements, the possibility to use the function generically is
>>      lost, for all intents and purposes]
>
> It seems that the quality of documentation for the functions I've
> been using in Python is substantially higher than that for those
> which YOU have been using.  The lack of *formal*, rigorous docs is
> generally not a serious problem; functions that e.g. take a filelike
> object argument ARE generally kind enough to mention that fact (as
> the filelike interface is so fat, it IS typically underdocumented
> what fraction of it is actually in use

Which makes "filelike" almost meaningless to someone who wants to know
what they need to supply to the function... unless they precisely
implement the whole file interface, a small change to the function's
implementation or even just the execution paths inside the function
can break the client's code.

>  -- but I don't see any simple nongeneric type declaration that
> would help fix this at all).

No, for that you need a generic type declaration.  Of course it's easy
to *seem* to defeat the argument that lots of code could use simple
non-generic static typing by throwing an example which requires
generics at it.

>> So, while I buy "program to an interface" in theory, in practice it
>> is only appropriate in a small fraction of code.
>
> On the contrary, I think it's the need to fix specific types that
> might be appropriate only very rarely.

I don't think you *need* to do it often; I just think that the work of
defining generic concepts means that it's often more expedient to use
a static type.

>>> Of course, some languages blur the important distinction between a
>>> type and a typeclass (or, a class and an interface, in Java terms
>>> -- C++ just doesn't distinguish them by different concepts

C++ doesn't even have typeclasses in the language (unions don't count).

>> Nor does most of the type theory I've seen.
>
> Does Haskell count?  typeclasses vs types?

Sure, that's a language definition, not type theory.  But Haskell is
well-grounded.

>>> so, if you think in C++, _seeing_ the crucial distinction may be
>>> hard;-).
>> 
>> I know what typeclasses and variants are all about.
>
> I'm sure you do, as a general issue.  But the language you're using
> to think and work about a specific problem still colors how easy or
> hard is it to conceptualize in a certain way wrt another.

I think you're presuming *way* too much about how I think of this
issue.  I'm not a "one-language kind of guy".

>>> my procedure receiving argument x
>>>
>>>     assert "x satisfies an interface that provides a method Foo which
>>>             is callable without arguments"
>>>
>>>     x.Foo()
>>>
>>> the ``assert'' (which might just as well be spelled "x satisfies
>>> interface Fooable", or, in languages unable to distinguish "being
>>> of a type" from "satisfying an interface", "x points to a Fooable")
>>> is ridiculously redundant, the worse sort of boilerplate.
>> 
>> Only if you think that only syntax (and not semantics) count.  It's
>> not just important that you can "Foo()" x, but that Fooing it means
>> what you think it does.
>
> But, to repeat, "the language-supplied concept of "interface" is too
> weak" to ensure this, except perhaps by contracts (runtime checks) in
> Eiffel.  So, the static type checking does *NOT* help at all here.

I disagree.  It limits the problem substantially.  Fulfilling the
right semantics can be asserted (and tested) in one place.

> What the compiler can check for you is ONLY that x provides a
> method Foo callable without arguments -- the "signature" part of
> things, which Python checks at runtime instead.  Whether Foo writes
> useful information to disk or sends a letter of insults to your
> cousin is way beyond the compiler's ability to determine;-).

Totally agreed.

>>> Many, _many_ type declarations are just like that, particularly if
>>> one follows a nice programming style of many short
>>> functions/methods.  At least in C++ you may often express the
>>> equivalent of
>>>
>>> "just call x.Foo()!"
>>>
>>> as
>>>
>>> template <type T>
>>> void myprocedure(T& x)
>>> {
>>>     x.Foo();
>>> }
>> 
>> In most cases it's evil to do this without a rigorous concept
>> definition (type constraint) for T in the documentation.  Pretty much
>> all principled template code (other than special cases like the lambda
>> library which are really just for forwarding syntax) does this, and
>> it's generally acknowledged as a weakness in C++ that there's no way
>> to express the type constraints in code.

Well, as I mentioned earlier, there is a way, but it's more
cumbersome than it should be and ultimately a template hack.

>>> where you're basically having to spend a substantial amount of
>>> "semantics-free boilerplate" to tell the compiler and the reader
>>> "x is of some type T"
>> 
>> Where are you claiming the expression of the type of x is in the code
>> above?  I don't see it.
>
> the "(T& x)" part says "x is a reference to the type which we'll call T
> here", i.e. "x is of some type T".

T is not a type in the code above (your "type T" [sic]
notwithstanding).  It's just a name for a type ;-)

> As my main thesis is that the constraints that really matter can
> hardly ever be expressed as statically checkable type constraints,
> the "generally acknowledged weakness of C++" is (if a weakness at
> all) common to all alternatives.  E.g., the Java alternative where a
> Fooable interface is defined and method myprocedure receives a
> "Fooable x" argument is just as semantics-free -- comments and/or
> other forms of doc will be around, sure, but they're not
> "compile-time statically checkable" anyway, and may just as well be
> around in any language.

Even in Java that localizes all opportunities for a mis-correspondence
between Fooable and x to the classes derived from Fooable.

>>> (surprise, surprise; I'm sure this has huge explanatory power,
>>> doesn't it -- otherwise the assumption would have been that x was of
>>> no type at all...?)
>> 
>> This kind of sneering only makes me doubt the strength of your
>> argument even more.  I know you're a smart guy; I ask you to treat my
>> position with the same respect with which I treat yours.
>
> If you're familiar at all with my writing style, you know I always
> jest -- generally "with a straight face" a la Buster Keaton -- when
> I come upon jestworthy sub-issues (and not always only then).  If you
> can only debate in a sombre tone, then I can but suggest we drop this
> (not a big loss, to be sure -- these debates have been held a zillion
> times and have never convinced anybody of anything -- and besides, I'm
> about to leave for a month's worth of business trips -- pypy sprint,
> Europython, OSCON, ... -- so this debate can't continue anyway).

I love a nice jestful debate.  Maybe your whole post is jestful and I
failed to recognize it.  It certainly seems to have a derisive tone
which makes it hard to see the love.  Where's the love, people?
Anyway, if you tell me you didn't mean it that way I'll take your word
for it.

>>> while letting them both shrewdly infer that type T, whatever it
>>> might be, had better provide a method Foo that is callable without
>>> arguments (e.g. just the same as the Python case, natch).
>> 
>> Only if you consider the implementation of myprocedure to be its
>> documentation.
>
> Documentation is comments, docstrings, and other venues yet, which
> are not statically compile-time checkable and are pretty irrelevant
> to the debate about the worth of such checks, quite obviously.  The
> "meat", the part that doesn't "go out of date", is what can be
> "checked" at compile- or run-time, and is language-relevant.

My point was that you can only "shrewdly infer that type T, whatever
it might be, had better provide a method Foo that is callable without
arguments" from the code you posted by looking at the implementation.

>>> You do get the "error diagnostics 2 seconds earlier" (while compiling
>>> in preparation to running unit-tests, rather than while actually
>>> running the unit-tests) if and when you somewhere erroneously call
>>> myprocedure with an argument that *doesn't* provide the method Foo
>>> with the required signature.  But, how can it surprise you if Robert
>>> Martin claims (and you've quoted me quoting him as if I was the
>>> original source of the assertion, in earlier posts)
>> 
>> Hey, sorry, I just let Gnus do its job.  If the quote attributions
>> were messed up then someone messed them up before me.
>
> Nope, you (not Gnus) just cut out the attribution of the quote, which
> was on a separate line from the snippet you picked out of the quote.

Well, then sorry again.  I couldn't even find the posting you were
referring to.  FWIW, if it was a case like this:

          Alex wrote:
          >> something Robert wrote
          >
          > something Alex Wrote

I don't normally consider that to be a problem.  Is it?

>>> that this just isn't an issue...?
>> 
>> It doesn't surprise me in the least that some people in the Python
>> community claim that their way is unambiguously superior.  It's been
>
> Is "Uncle Bob" Robert Martin "in the Python community"?  

I assumed so.

> His books all use C++ and Java, I believe, as does the great mass of
> his articles, and I believe the journal for which he was the editor
> was titled "C++ Report", not "Python Report", wasn't it?

I'm ignorant in this matter.

> So, I surmise that (probably subconsciously) you're "relegating"
> Robert Martin to the role of a "person in the Python community" just
> so you can AVOID "being surprised" at his claims (which are about
> dynamic typing in general -- he mentions Ruby and I think Smalltalk
> as well as Python) -- perhaps for the same reason you cut out the
> little detail that I was quoting him...?-)

Ugh, gimme a break.  It's not a claim I'm surprised by; I've seen it
a lot.

>> going on for years.  I wanted to believe that, too.  My experience
>> contradicts that idea, unfortunately.
>
> Have you given TDD a chance?  

I always do TDD.

>> Comprehensive test suites can't always run in a few seconds (the same
>> applies to compilations, but I digress).  In a lot of the work I've
>
> Say 3 minutes and 2 minutes, then -- as long as the ratio stays
> the same the unit of measure doesn't matter:-)

But it doesn't.

>> done, testing takes substantially longer, unavoidably.  A great deal
>> of this work is exactly the sort of thing I like to use Python for, in
>> fact (but not because of the lack of type declarations).  If
>> compilation is reasonably fast and I have been reasonably
>> conscientious about my type invariants, though, I *can* detect many
>> errors with a static type system.
>
> But compilation need not be fast, as you just "digressed"
> yourself;-).

Not sure what point you're making.

>> But more importantly, I can come back to my code months later and
>> still figure out what's going on, or work with someone else's code
>> without losing my way.  Isn't that why we're all using Python instead
>> of Perl?
>
> Having the information that "x is an int" stated out like that
> rather than (say) implicitly where it matters is hardly crucial
> to "figuring out what's going on".  

ints are hardly ever a problem.  A parameter called "node" might be.

> And having the compiler statically test that fact, rather than
> testing it by a small core of unit-tests that can also check out
> several other crucial aspects not expressible in static-typing
> terms, is even less important.
>
>
>>> I do, at some level, want a language where I CAN (*not* MUST) make
>>> assertions about what I know to be true at certain points:
>>> 1. to help the reader in a way that won't go out of date (the assert
>>>    statement does that pretty well in most cases)
>>> 2. to get the compiler to to extra checks & debugging for me (ditto)
>>> 3. to let the compiler in optimizing mode deduce/infer whatever it
>>>    wants from the assertions and optimize accordingly (and assert is
>>>    no use here, at least as currently present in C, C++, Python)
>> 
>> Those are all the same things I want, and for the same reasons.  What
>> are we arguing about again?
>
> A. about CAN vs MUST -- I'd rather not have these possibilities at
>    all, than be FORCED to use them even where I judge their impact
>    on my productivity would be negative (forcing me to waste my
>    time, and the code reader's attention, on reams of boilerplate)

No, I don't want MUST.  You seem to misunderstand almost everything
I've been saying.

> B. about the importance of checks occurring at compile-time vs run-time,
>    which I think is miniscule and NOT worth distorting the language
>    in any way [note that my point 2 would be perfectly well satisfied
>    if what the compiler did in most cases was inserting error-checking
>    code to be executed at runtime, cfr again Eiffel contracts]

That may be true.  I recently added runtime function parameter type
checking to a small interpreter I work with, purely for the
expressive power it has.  Of course, it's better than a comment
because it doesn't go out-of-date without causing errors.  

On the other hand, when code has to be efficient you don't want to pay
for the runtime check, and some code still needs to be compiled to be
fast enough.

> C. about whether "x belongs to type A" is a sensible way to express
>    most important assertions for purposes of 1/2/3 -- I claim it isn't,
>    you claim it is -- partly based on subtle confusions about "belong
>    to type" MEANS (in type-theory vs programming-practice e.g. in C++)

Nope, you underestimate me.

> as well as debating-style (sombre vs jestful)

No, just friendly/respectful vs. derisive/vituperative.

> Robert Martin's role as a member of the "Python community",

I won't argue with you about that because I don't know anything about it.

> whether it's reasonable at all to cut out a quote's attribution, 

I don't think it's reasonable; I assumed the attribution must already
have been missing.

> and sundry minor tangential points.

Probably those.



