Forth like interpreter

Samuel A. Falvo II kc5tja at
Mon Mar 13 13:08:22 CET 2000

In article <38CC03A9.6461C7D6 at>, Christian Tismer wrote:
>The real "overhead" is the check on default (cmp eax, 140)
>and in the double lookup. The "mov dl" maps to an index
>into the address table.
>It may count for Forth, but for Python this is very fast.

For any language that uses threaded interpretation in this manner, this type
of overhead *ALWAYS* counts, because the error-handling and indexing
instructions are executed for each and every pseudo-instruction dispatched.
Switch-type threading is almost always slower than direct and/or subroutine
threading for this very reason.

The nice thing about direct threading (and even subroutine threading) is
that the compiler guarantees that there are no invalid "opcodes" (for lack
of a better term) in the compiled code, so the need to perform error handling
is eliminated entirely.  This is why such threading techniques are often the
fastest.

NOW, here's the kicker.  Switch-type and token-threaded implementations
support pre-compiled executables.  This is important for Python, and this is
why I would not be surprised (or upset, for that matter) if Python didn't
adopt an alternative threading solution.  Python's precompiled files are a
valuable asset to Python, in my opinion.  It's a speed/convenience tradeoff
I'm willing to support for a language like Python.

What I'd like to know, though, is what kind of performance boost one would
(theoretically) get if Python switched from its current switch threading to
token or direct threading.  And in what ways would it break existing software?

KC5TJA/6, DM13, QRP-L #1447
Samuel A. Falvo II
Oceanside, CA
