does lack of type declarations make Python unsafe?

beliavsky at aol.com
Sun Jun 15 22:28:06 CEST 2003


In Python, you don't declare the type of a variable, so AFAIK there is
no way for the interpreter to check, before the code runs, that you
are calling functions with arguments of the correct type.

Thus, if I define a function correl(x,y) to compute the correlation of
two vectors, which makes sense to me only if x and y are 1-D arrays of
real numbers, Python will not stop me from trying to compute the
correlation of two arrays of integers or strings. C++ and Fortran 95
compilers can spot such errors.
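For what it's worth, the standard library's array module does enforce
a homogeneous element type, though only at construction time rather
than at the call site -- a small sketch:

```python
from array import array

# typecode 'd' = C double, so only real numbers are accepted
x = array('d', [1.0, 2.0, 3.0])

try:
    y = array('d', ["a", "b"])    # strings are rejected up front
except TypeError:
    y = None
```

This stops bad data from being built in the first place, but it still
does nothing to stop me from passing some other sequence to correl.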

Calling functions with invalid arguments is one of the commonest
programming errors, and I think it is worthwhile to declare variables,
especially the dummy arguments of functions, explicitly to avoid such
errors, even if it takes a few more lines of code. I worry that
Python's convenience in writing small programs comes at the cost of
making bug-free large programs more difficult to write.

I have only been programming in Python for a few weeks -- am I right
to worry?
Are there techniques to ensure that functions are called with
appropriate arguments?
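The best I have come up with so far is an explicit run-time check at
the top of the function. A sketch of how correl might validate its
arguments (the checks and the body here are my own illustration, not
the actual code from stats.py):

```python
import math

def correl(x, y):
    # Illustrative checks: reject bad arguments up front with a
    # clear message instead of failing somewhere inside mean().
    for name, v in (("x", x), ("y", y)):
        if not all(isinstance(e, float) for e in v):
            raise TypeError("%s must be a sequence of floats" % name)
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

This catches the error at the call, but it has to be written by hand
for every function, which is exactly the boilerplate a declaration
would buy me for free.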

When I actually try to call correl(x,y) in the program

from stats import correl
from Numeric import array
a = array(["a","b"])
b = array(["c","d"])
print correl(a,b) # should be arrays of Floats, not strings

I get the result

Traceback (most recent call last):
  File "xcorr.py", line 5, in ?
    print correl(a,b)
  File "stats.py", line 88, in correl
    ax  = mean(x)
  File "stats.py", line 57, in mean
    return sum(x)/n
TypeError: unsupported operand type(s) for /: 'str' and 'int'

It's good that the program crashes rather than returning a bad result,
but a C++ or Fortran 95 compiler would flag the bad arguments during
compilation, which I think is still better. Also, in a large script,
the bad call to correl(x,y) could come after much CPU time had
elapsed, potentially wasting a lot of the programmer's time as well.
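One partial remedy would be to validate the arrays at the top of the
script, before any expensive computation starts, so a bad call fails
immediately. A sketch (require_floats is a hypothetical helper, not
part of stats.py):

```python
def require_floats(name, seq):
    # Hypothetical helper: fail fast with a descriptive message
    # instead of crashing deep inside correl() much later.
    for i, elem in enumerate(seq):
        if not isinstance(elem, float):
            raise TypeError("%s[%d] is %s, expected float"
                            % (name, i, type(elem).__name__))

a = [1.0, 2.0]
require_floats("a", a)              # passes silently

try:
    require_floats("b", ["c", "d"]) # caught before any real work
except TypeError:
    pass
```

That at least moves the crash to the start of the run, but it still
happens at run time, not at "compile" time.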



