josht at iname.com
Tue May 2 00:43:30 CEST 2000
>There are two main approaches to compile Python to machine code:
>A) Work through the bytecode as the interpreter does, compiling each
>bytecode instruction to the library function that the interpreter would
>call. For example, BINARY_ADD would become a PyObject_Add() call.
>JPython uses a similar technique to compile to Java bytecode. However,
>because all variables are completely polymorphic (i.e., nothing is known
>about their type), even the simplest operations end up going through the
>abstraction mechanism. So BINARY_ADD, for example, might still have to
>allocate a new integer object and deallocate the old ones, even when a
>single machine "ADD 1 TO REGISTER" instruction would do the job.
>The result is that the program is in machine code, but it still runs like
>it's in an interpreter. Cutting out the fetch-decode-dispatch sequence is
>really only the tip of the iceberg.
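You can see the polymorphism the poster is describing directly in the bytecode. Here's a small sketch using the standard `dis` module (this is plain CPython, not JPython's Java-bytecode output): one generic add opcode serves every operand type, so a naive translator has nothing type-specific to work with.

```python
# Sketch: a single generic "binary add" opcode handles every operand
# type, so a naive bytecode-to-machine-code compiler cannot emit a
# bare machine ADD -- it must still call the generic add routine.
import dis

def add(a, b):
    return a + b

# The opcode stream contains one generic add instruction
# (BINARY_ADD on older Pythons, BINARY_OP on 3.11 and later).
print([ins.opname for ins in dis.get_instructions(add)])

# The very same compiled code handles ints, strings, lists...
print(add(1, 2))        # 3
print(add("ab", "cd"))  # abcd
```

A compiler working opcode-by-opcode would translate that one instruction into a call to the generic add routine, which still does type dispatch, allocation, and deallocation at run time.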
Would this type of compiler result in any kind of speed increase? Even
though it runs like it's in an interpreter, does the translation to pure
machine code increase speed?
Thanks for the replies, guys.
"Destined For Great Things -- but pacing myself."
- From a t-shirt.
E-Mail: josht at crosswinds.net