optional static typing for Python

Paddy paddy3118 at googlemail.com
Mon Jan 28 02:03:23 CET 2008


On Jan 27, 10:19 pm, "Russ P." <Russ.Paie... at gmail.com> wrote:
> A while back I came across a tentative proposal from way back in 2000
> for optional static typing in Python:
>
> http://www.python.org/~guido/static-typing
>
> Two motivations were given:
>
>     -- faster code
>     -- better compile-time error detection
>
> I'd like to suggest a third, which could help extend Python into the
> safety-critical domain:
>
>     -- facilitates automated source-code analysis
>
> There has been much heated debate in the past about whether Python is
> appropriate for safety-critical software. Some argue that, with
> thorough testing, Python code can be as reliable as code in any
> language. Well, maybe. But then, a famous computer scientist once
> remarked that,
>
> "Program testing can be used to show the presence of bugs, but never
> to show their absence!" --Edsger Dijkstra
>
> The next step beyond extensive testing is automated, "static" analysis
> of source-code ("static" in the sense of analyzing it without actually
> running it). For example, Spark Ada is a subset of Ada with
> programming by contract, and in some cases it can formally prove the
> correctness of a program by static analysis.
>
> Then there is Java Pathfinder (http://javapathfinder.sourceforge.net),
> an "explicit state software model checker." The developers of JPF
> wanted
> to use it on a prototype safety-critical application that I wrote in
> Python, but JPF only works on Java code. We considered somehow using
> Jython and Jythonc, but neither did the trick. So they ended up having
> someone manually convert my Python code to Java! (The problem is that
> my code was still in flux, and the Java and Python versions have now
> diverged.)
>
> In any case, optional static typing in Python would help tremendously
> here. The hardest part of automated conversion of Python to a
> statically typed language is the problem of type inference. If the
> types are explicitly declared, that problem obviously goes away.
> Explicit typing would also greatly facilitate the development of a
> "Python Pathfinder," so the conversion would perhaps not even be
> necessary in the first place.
>
> Note also that, while "static" type checking would be ideal,
> "explicit" typing would be a major step in the right direction and
> would probably be much easier to implement. That is, provide a syntax
> to explicitly declare types, then just check them at run time. A
> relatively simple pre-processor could be implemented to convert the
> explicit type declarations into "isinstance" checks or some such
> thing. (A pre-processor command-line argument could be provided to
> disable the type checks for more efficient production runs if
> desired.)
>
> I noticed that Guido has expressed further interest in static typing
> three or four years ago on his blog. Does anyone know the current
> status of this project? Thanks.
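[The "explicit typing, checked at run time" idea in the quoted post can be
sketched in pure Python today, without any new syntax or pre-processor. The
decorator below is a hypothetical helper, not part of any actual proposal:
it takes declared types and turns them into isinstance checks on each call,
roughly what the suggested pre-processor would emit.]

```python
import inspect

def typed(**declared):
    """Check named arguments against declared types before each call.
    Hypothetical sketch of run-time explicit typing via isinstance."""
    def decorate(func):
        def wrapper(*args, **kwargs):
            # Match positional/keyword arguments to parameter names.
            bound = inspect.signature(func).bind(*args, **kwargs)
            for name, value in bound.arguments.items():
                if name in declared and not isinstance(value, declared[name]):
                    raise TypeError(
                        "%s: expected %s for %r, got %s"
                        % (func.__name__, declared[name].__name__,
                           name, type(value).__name__))
            return func(*args, **kwargs)
        return wrapper
    return decorate

@typed(x=int, y=int)
def add(x, y):
    return x + y
```

[A command-line switch to disable checking, as the post suggests, could
simply make typed() return the function unchanged.]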

If static typing is optional, then a dynamic-language program that
passes such automated static analysis of its source would have to be a
simple program, written in a simplistic and essentially static style.

Having used such formal tools on hardware designs expressed in
statically typed languages such as Verilog and VHDL, we don't have the
problem of throwing away run-time typing, but we do hit capacity
problems with formal proofs: either only parts of a design can be
formally proved, or we can prove that an assertion holds only as far
as the engine has resources to expand the assertion.
We tend to find a lot of bugs in near-complete designs by the random
generation of test cases and the automatic checking of results. In
effect, two (or more) programs are created by different people,
usually in different languages. One is written in, say, Verilog and
can be used to create a chip; another is written by the Verification
group in a 'higher level language' (in terms of chip testing). A
tunable, randomized test generator then produces plausible test
streams, which the Verification model validates. Each test stream is
applied to the Verilog model, the Verilog model's responses are
checked against the Verification model's output, and any discrepancy
is flagged. Coverage metrics (line coverage, statement coverage,
expression coverage, ...) are gathered to indicate how well the
design/program is being exercised.
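[A minimal software analogue of that flow, in Python, with illustrative
names of my own choosing: an implementation under test, an independently
written reference model, and a seeded random stimulus generator that flags
any mismatch between the two.]

```python
import random

def design_popcount(n):
    # Implementation under test: count set bits with Kernighan's trick.
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

def reference_popcount(n):
    # Independent reference model, deliberately written a different way.
    return bin(n).count("1")

def run_random_tests(trials=1000, seed=42):
    """Generate plausible random stimulus, apply it to both models,
    and flag any discrepancy between their responses."""
    rng = random.Random(seed)
    for _ in range(trials):
        stimulus = rng.randrange(0, 1 << 32)
        if design_popcount(stimulus) != reference_popcount(stimulus):
            return "MISMATCH at %d" % stimulus
    return "OK"
```

[Real verification flows add coverage collection on top of this loop; the
comparison-against-a-reference-model core is the same.]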

I would therefore advocate such random test generation methods as more
appropriate for testing software in safety-critical systems when the
programming language is dynamic.

- Paddy.
P.S. I think that random generation has also been used to test
compilers by automatically generating correct source for compilation.
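[A toy illustration of that P.S., under my own assumptions: generate
arithmetic expressions that are correct by construction, then check that
the "compiler" (here just Python's builtin compile()) accepts every one.]

```python
import random

def random_expr(rng, depth=3):
    """Build a random arithmetic expression, valid by construction."""
    if depth == 0:
        return str(rng.randint(0, 9))
    op = rng.choice(["+", "-", "*"])
    return "(%s %s %s)" % (random_expr(rng, depth - 1), op,
                           random_expr(rng, depth - 1))

def fuzz_compiler(trials=100, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        src = random_expr(rng)
        # A SyntaxError here would mean the compiler rejected
        # source that should be correct.
        compile(src, "<generated>", "eval")
    return "compiled %d generated programs" % trials
```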



More information about the Python-list mailing list