Hardware-accelerated Python

Samuel A. Falvo II kc5tja at garnet.armored.net
Mon Aug 14 22:24:48 EDT 2000


In article <87u2cv7xy8.fsf at galactica.it>, Francesco Bochicchio wrote:
>line of Python's interpreter code ), the most performed 
>operation should be the lookup in the object (or module) 
>dictionary.

What we need to do here is actually strap a profiler to the Python execution
core as we have it today.  The profiler's output will let us determine what,
if anything, actually needs to be optimized.
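The poster means profiling the interpreter's C core itself, but even a
Python-level microbenchmark hints at where the time goes.  A minimal sketch,
assuming nothing beyond the standard library (the function names here are
made up for illustration), comparing a dictionary-backed global lookup
against a fast local-slot lookup:

```python
import timeit

# A global name is resolved through the module dictionary on every
# access; a local name is an array index into the frame's fast slots.
# The timing gap illustrates how much of the interpreter's work is
# dictionary lookup.  Absolute numbers vary by machine.

GLOBAL = 1

def use_global():
    total = 0
    for _ in range(10000):
        total += GLOBAL      # LOAD_GLOBAL -> dict probe each iteration
    return total

def use_local():
    g = GLOBAL               # hoist into a local slot once
    total = 0
    for _ in range(10000):
        total += g           # LOAD_FAST -> no dict probe
    return total

t_global = timeit.timeit(use_global, number=200)
t_local = timeit.timeit(use_local, number=200)
print(f"global: {t_global:.3f}s  local: {t_local:.3f}s")
```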

Just a guess, but I feel that the use of polymorphic inline caching (PIC)
would be the single biggest improvement to Python's execution speed.
Statistically speaking, method calls through a particular interface tend to
land on only a few specific classes, so by relying on statistics gathered at
run-time, it should be possible to reduce the latency between calling a
method and actually starting the method's execution.
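A toy sketch of the idea, under the assumption that each call site keeps a
small class-to-method cache (a real PIC lives inside the VM's dispatch
machinery, not in user code; `InlineCache` and everything around it is
hypothetical):

```python
# Toy polymorphic inline cache: one cache object per call site,
# remembering (class -> method) pairs actually seen at run-time.

class InlineCache:
    """Cache the methods seen at a single call site."""
    def __init__(self, method_name, max_entries=4):
        self.method_name = method_name
        self.entries = {}            # class -> plain function
        self.max_entries = max_entries
        self.hits = 0
        self.misses = 0

    def call(self, obj, *args):
        cls = type(obj)
        func = self.entries.get(cls)
        if func is None:
            # Slow path: full attribute lookup, then memoize it.
            self.misses += 1
            func = getattr(cls, self.method_name)
            if len(self.entries) < self.max_entries:
                self.entries[cls] = func
        else:
            self.hits += 1            # fast path: skip the dict walk
        return func(obj, *args)

class Circle:
    def area(self):
        return 3.14159

class Square:
    def area(self):
        return 4.0

site = InlineCache("area")
shapes = [Circle(), Square()] * 50    # only two receiver classes
total = sum(site.call(s) for s in shapes)
print(site.hits, site.misses)         # prints: 98 2
```

Only two classes ever reach this call site, so after two slow-path misses
every remaining call is a cache hit, which is exactly the statistical
regularity PIC exploits.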

I recall there being some work in changing the Python bytecode from a stack
oriented code to a more RISC-like, three-operand code at one point.  Now I'm
not going to get into the virtues of stack versus non-stack code (William
Tanksley and I had our little discussion about this a few days ago).  But I
will say that the *design* of such bytecode has a direct influence on the
applicability of PIC.
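For reference, the stack-oriented shape of today's bytecode is easy to see
with the standard `dis` module; the exact opcode names vary across CPython
versions, but a method call always compiles to load-then-call steps on the
value stack, whereas a three-operand encoding would name its source and
destination registers explicitly:

```python
import dis

# Disassemble a trivial method call to show the stack-machine encoding:
# the receiver and method are pushed, then a call opcode consumes them.

def call_area(shape):
    return shape.area()

dis.dis(call_area)
```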

However, in all cases these are software enhancements, and they really don't
require any hardware acceleration.  And even after an algorithmic change is
made that speeds up some operation X, *never* underestimate the power of
hand-written assembly language.  This, of course, ties the product to a
specific platform.  That's OK, though, because a hardware accelerator device
also limits the usefulness of an implementation to a specific platform.  And
besides, more "generic" versions of the language can still be distributed.

-- 
KC5TJA/6, DM13, QRP-L #1447 | Official Channel Saint, *Team Amiga*
Samuel A. Falvo II	    |
Oceanside, CA		    |


