Towards faster Python implementations - theory

Terry Reedy tjreedy at udel.edu
Thu May 10 13:18:34 EDT 2007


"sturlamolden" <sturlamolden at yahoo.no> wrote in message 
news:1178804728.527486.196400 at y80g2000hsf.googlegroups.com...
| Franz, CMUCL, SBCL and GCL teams made Lisp almost as fast as C. A
| dynamic language can be fast if the implementation is good.
|
| If you look at SBCL and GCL, no code is interpreted. It's all compiled
| on the fly to native machine code.

Unfortunately, native machine code depends on the machine, or at least the 
machine being emulated by the hardware.  Fortunately or not, the dominance 
of the x86 model makes this less of a problem.

| The compiler begins with some code
| and some input data, and compiles as much as it can. Then the RT
| executes the machine code, compiles again, etc. Often long stretches
| of code can be compiled without break, and tight performance critical
| loops are usually compiled only once. In addition to this, one needs
| an efficient system to cache compiled code, in order to do the
| compilation work only once. Making a dynamic language fast is not
| rocket science.
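
The compile-once-and-cache loop described above can be sketched in a few 
lines of plain Python.  This is only an illustrative toy (no real JIT works 
this way): it uses Python's own compile() builtin to turn source text into a 
code object, keyed by the source string, so that repeated execution reuses 
the cached code object instead of recompiling.

```python
# Toy sketch of "compile once, cache, execute": a cache keyed by
# source text holds the compiled code object, so the compilation
# work is done only once per distinct piece of code.

_code_cache = {}

def run(source, env):
    """Execute `source` in `env`, compiling it only on first sight."""
    code = _code_cache.get(source)
    if code is None:
        code = compile(source, "<cached>", "exec")  # compile once...
        _code_cache[source] = code                  # ...and cache it
    exec(code, env)

env = {}
run("x = 2 + 3", env)   # first call: compiles, caches, executes
run("x = 2 + 3", env)   # second call: cache hit, no recompilation
print(env["x"], len(_code_cache))   # → 5 1
```

A real implementation would cache native machine code rather than bytecode, 
but the bookkeeping problem is the same one the post describes.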

This sounds somewhat similar to Psyco.  If a code section is entered with 
arguments of different types, recompilation is needed.  One of the problems 
with Psyco is that its efficient caching of multiple versions of compiled 
code is space-hungry.
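The space cost comes from keeping one specialized version per combination of 
argument types.  A hypothetical sketch (not Psyco's actual machinery, which 
emits type-specialized machine code) of that bookkeeping:

```python
# Hypothetical sketch of per-type specialization caching: keep one
# "version" of a function per tuple of argument types, created on
# first use.  The cache grows with every new type combination seen,
# which is why this approach trades memory for speed.

def specialize(func):
    versions = {}            # (type, ...) -> specialized callable
    def wrapper(*args):
        key = tuple(type(a) for a in args)
        if key not in versions:
            # A real specializer would compile a type-specific body
            # here; this sketch just records that a version exists.
            versions[key] = func
        return versions[key](*args)
    wrapper.versions = versions
    return wrapper

@specialize
def add(a, b):
    return a + b

add(1, 2)        # (int, int): first version created
add(1.5, 2.5)    # (float, float): second version created
add(3, 4)        # (int, int) again: cache hit
print(len(add.versions))   # → 2
```

Each new type combination adds another cached version, so heavily polymorphic 
call sites are exactly where the memory cost shows up.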

| We should have something like "GPython", a Python RT on top of a GCC
| backend, similar to what the GCL team did for Lisp. There is no
| obvious reason as to why Lisp should have better performance than
| Python.

In the 1980s, during the Lisp/AI boom, perhaps a billion dollars was put 
into making Lisp run faster.  Plus decades of work in Computer Science 
departments.  Other than that, I agree.  Now, Python is slowly making 
inroads into academia as a subject of research and theses, and the PyPy 
group got at least one large EU grant.

Terry Jan Reedy

More information about the Python-list mailing list