Python compilers?

Jacek Generowicz jacek.generowicz at cern.ch
Mon May 24 09:01:00 EDT 2004


Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:

> Carl Banks <imbosol at aerojockey.invalid> writes:
> > > The example above kills any attempt to turn a.bar() into a static
> > > procedure call.
> > 
> > Of course it does--but it's one method.  A compiler, if it's good,
> > would only make the optimization on methods named "bar", and it could
> > probably pare the number of possible classes on which it could happen
> > down to only a few.
> 
> How could it possibly know?  The reassignment of a.bar could happen
> anytime, anywhere in the code.  Maybe even in an eval.
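
True, and it takes very little. A minimal sketch (the class and the
names in it are made up purely for illustration):

    class Foo:
        def bar(self):
            return "original"

    a = Foo()
    a.bar()                          # dispatches to Foo.bar
    a.bar = lambda: "rebound"        # rebound on the instance at runtime
    exec("a.bar = lambda: 'again'")  # or even from inside an exec/eval
    a.bar()                          # calls whatever happens to be bound

Nothing at the call site tells a static compiler which binding will be
live when the call actually executes.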
> 
> > I mean you could have a Turing nightmare on your hands, with all kinds
> > of crazy setattrs and execs and stuff, in both Python and Lisp, and
> > then there's not much a compiler could do but emit totally general
> > code.  I assume Lisp compilers do this sometimes.
> 
> Lisp compilers might have to do that sometimes, but Python compilers
> would have to do it ALL the time.  Psyco took one way out, basically
> generating code at runtime and caching it for specific operand types,
> but the result is considerable code expansion compared to precompilation.
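
For what it's worth, Psyco's interface is tiny. A sketch of how one
might apply it (assuming Psyco is installed; the function here is made
up for illustration):

    import psyco  # assumes Psyco is installed; CPython 2.x on x86

    def dot(xs, ys):
        # made-up numeric function, purely for illustration
        total = 0.0
        for x, y in zip(xs, ys):
            total += x * y
        return total

    psyco.bind(dot)  # generate and cache machine code specialized for
                     # the operand types actually seen at run time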

I see how seasoned Lispers can get pretty tired of these sorts of
arguments. Please bear in mind that Lisp has been the playground for a
lot of very clever people whose aim was to solve difficult problems
efficiently ... and they had been at it for about three decades before
Python was even thought of. Instead of claiming that Python is
something revolutionarily new in the area of dynamism (it isn't) and
that compiling it is impossible or futile (it isn't), have a look at
what the Lispers learned in their study of the subject over the four
decades that they have been at it. (You should only do this, of
course, if you are interested in how Python might benefit from being
compiled; if you don't care, then at least don't hurl unfounded
"opinions" about how compilation is impossible because Python is so
amazingly dynamic.)

Of course certain things are easier to compile to efficient machine
code than others. But how does the existence of the problem spots take
away from the usefulness of compiling the easier parts? Significant
gains are possible. Bigger gains come with harder work, and bigger
gains can be had in some areas than in others. Certain areas (number
crunching, for instance) are speed critical for a larger proportion of
users than others, so different compiler implementors make different
choices about which areas of the language they target with hard
compiler optimizations.

Imagine, for example, that I would like to be able to write efficient
numeric code in pure Python ... I really don't care that the instance
methods I add will not be dispatched within a single cycle, do I?
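
To make that concrete, this is the kind of code I have in mind (a
made-up kernel, nothing more):

    def squared_norm(xs):
        # floats in a tight, monomorphic loop: easy prey for a
        # specializing compiler, no matter how dynamically the rest
        # of the program treats its classes and methods
        total = 0.0
        for x in xs:
            total += x * x
        return total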


