"Strong typing vs. strong testing"
Pascal J. Bourguignon
pjb at informatimago.com
Fri Oct 1 11:56:24 CEST 2010
rustom <rustompmody at gmail.com> writes:
> Some points that seem to be missed (or I've missed them?)
> 1. A dichotomy is being made between 'static' languages like C and
> 'dynamic' languages like python/lisp. This dichotomy was valid 30
> years ago, not today. In Haskell for example
> - static checking is stronger than in C/C++ -- its very hard if not
> impossible to core dump haskell except through memory exhaustion
> - dynamic-ness is almost that of python/lisp -- one can write
> significant Haskell programs without type-declaring a single variable.
You're confusing type strength with the requirement that the
programmer declare the types, and with the time when the types are
checked:
type strong  static   explicit  Ada
type strong  static   implicit  Haskell
type weak    static   explicit  C
type weak    static   implicit  ?
type strong  dynamic  explicit  (*)
type strong  dynamic  implicit  Common Lisp
type weak    dynamic  explicit  Objective-C
(*) Usually languages provide explicit typing as an option, but can
deal with implicit typing when they're dynamic.
There are also a few languages with no type checking at all, such as
assembler.
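To make the table concrete, here is a small Python sketch (Python sits in the same "strong dynamic implicit" cell as Common Lisp): nothing is declared, yet mixing unrelated types raises an error instead of silently coercing.

```python
# Python: no type declarations (implicit), checks at run time (dynamic),
# and no silent coercion between unrelated types (strong).

def add(a, b):
    # 'a' and 'b' are never declared with a type.
    return a + b

print(add(1, 2))        # 3
print(add("ab", "cd"))  # abcd

try:
    add(1, "x")  # strong typing: raises instead of coercing
except TypeError as exc:
    print("TypeError:", exc)
```

The same function accepts integers or strings, but an int/string mix is rejected at run time rather than quietly converted, which is what distinguishes this cell from the "weak" rows.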
> Much more mainstream, C# is almost as 'managed' as dynamic languages
> and has efficiency comparable to C.
Nothing extraordinary here. Common Lisp is more efficient than C.
Actually, it's hard to find a language for which no compiler generates
faster code than C...
> 2. The dichotomy above misses a more pervasive dichotomy -- hardware
> vs software -- as real today as 30 years ago.
> To see this let us lift the discussion from that of *languages* C vs
> Python/Lisp to philosophies:
> -- C-philosophy: the purpose of type-checking is to maximize (runtime)
> efficiency.
> -- Lisp-philosophy: the purpose of type-checking is zero run-time
> errors (e.g. seg-faults) via continuous checks at all levels.
> If one is honest (and not polemical :-) ) it would be admitted that
> both sides are needed in different contexts.
> Now Dijkstra pointed (40 years ago) in Discipline of Programming that
> this unfortunate dilemma arises due to lack of hardware support. I am
> unable to reproduce the elegance and succinctness of his language but
> the argument is as follows:
> Let us say that for a typical profile of a computer we have for every
> one instruction of the pathological one typified by the maximum
> function, a trillion 'normal' instructions. This is what he calls a
> very-skew test -- an if-then-else that checks this would go the if-way
> one trillion times for the one else-way. It is natural for a
> programmer to feel the pinch of these trillion checks and (be inclined
> to) throw them away.
> If however the check was put into hardware there would be no such
> dilemma. If every arithmetic operation was always checked for overflow
> *by hardware* even languages committed to efficiency like C could trap
> on errors with no extra cost.
> Likewise Lisp/python-like languages could easily be made more
> efficient.
> The difference arises from the fact that software costs per use
> whereas hardware costs per installation -- a transistor, unlike an if,
> does not cost any more if it's used once or a trillion times.
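Dijkstra's "very-skew test" can be made concrete with a small sketch (hypothetical helper names, simulating 32-bit signed arithmetic in Python): every addition pays for a branch that fires perhaps once in a trillion operations, and that branch is exactly the check a hardware overflow trap would make free.

```python
# A sketch of the per-operation software overflow check that tempts
# efficiency-minded programmers to drop it; 32-bit signed arithmetic
# simulated with Python's unbounded integers.

INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def checked_add_i32(a, b):
    r = a + b
    # The "very-skew test": this branch goes the if-way (no overflow)
    # virtually every time; the else-way is the rare trap that
    # hardware could raise at no cost in the instruction stream.
    if INT32_MIN <= r <= INT32_MAX:
        return r
    raise OverflowError("signed 32-bit overflow")

print(checked_add_i32(2, 3))  # 5

try:
    checked_add_i32(INT32_MAX, 1)
except OverflowError:
    print("trapped")
```

With a hardware trap, the `if` disappears from the software stream entirely, so checked and unchecked code run at the same speed, which is the point of the transistor-vs-if comparison above.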
> In short the problem is not C vs Lisp/Python but architectures like
> Intel wherein:
> 1. an overflow bit harmlessly set by a compare operation is
> indistinguishable from one set by a signed arithmetic operation --
> almost certainly a problem
> 2. An into instruction (interrupt on overflow) must be inserted into
> the software stream rather than raised as a hardware interrupt.
Hence the use of virtual machines: when your machine doesn't do what
you want, you have to write your own.
When Intel realizes that 99% of its users are running VMs, perhaps
they'll start to wonder what they're doing wrong...
__Pascal Bourguignon__ http://www.informatimago.com/