Does lack of type declarations make Python unsafe?

Donn Cave donn at
Mon Jun 16 06:15:15 CEST 2003

Quoth danb_83 at (Dan Bishop):
| beliavsky at wrote in message news:<3064b51d.0306151228.22c595e0 at>...
|> Thus, if I define a function correl(x,y) to compute the correlation of
|> two vectors, which makes sense to me only if x and y are 1-D arrays of
|> real numbers,
| But what kind of real numbers?  IEEE double-precision?  Or might you
| someday need a correl function that works with ints (e.g., to compute
| Spearman's correlation coefficient), or arbitrary-precision floats, or
| BCD numbers, or rational numbers, or dimensioned measurements?
| As long as your number classes have +, -, *, /, and __float__ (so
| math.sqrt works) defined correctly, you don't have to rewrite your
| correl code to support them.  THAT is the beauty of dynamic typing.
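To make that concrete, here is a minimal sketch of such a correl
(the thread never shows an implementation, so this Pearson-correlation
version is my own): it uses only +, -, *, / and math.sqrt (which goes
through __float__), so it serves floats, ints and Fractions alike.

```python
import math
from fractions import Fraction

def correl(x, y):
    """Pearson correlation of two equal-length sequences of numbers.

    Relies only on +, -, *, / and __float__ (via math.sqrt), so any
    numeric type supporting those operations will do.
    """
    n = len(x)
    mx = sum(x) / n                       # mean of x
    my = sum(y) / n                       # mean of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)    # un-normalized variance of x
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

print(correl([1, 2, 3], [2, 4, 6]))                             # ints
print(correl([Fraction(1, 2), Fraction(1), Fraction(3, 2)],
             [3, 2, 1]))                                        # rationals
```

No isinstance checks, no rewriting per numeric type: the rationals
are handled by exactly the same code path as the ints.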

Or the beauty of static typing.  In a rigorously statically typed
language like Haskell, you'd write your function more or less the
same as you would in Python, but the compiler would infer from the
use of +, -, etc. a type-class constraint on its parameters - Num,
or Floating once sqrt enters the picture - and you could then apply
the function to any instance of that class, i.e. any suitable
numeric type.  Anything else is a type error, and your program
won't compile until it makes sense in that respect.

One might gather from reading this thread that this buys safety at
the cost of programming convenience, but it's closer to the
opposite.  I'm told that type checking is practically irrelevant to
safety-critical standards, because the testing needed to meet
standards like that makes type correctness redundant.  But for more
casual purposes, the compiler cleans up lots of simple errors while
you're writing, and that saves time and possibly embarrassment.

	Donn Cave, donn at
