Performance of list comprehensions vs. map

Skip Montanaro skip at pobox.com
Wed Sep 5 21:41:47 EDT 2001


    Chris> Skip Montanaro wrote:
    >> map() is a single function that doesn't return control to the
    >> interpreter until its work is done.  List comprehensions are
    >> essentially just syntactic sugar for the equivalent for loop.

    Chris> This is exactly my question: why is it implemented this way?

    Chris> Because it was easy: A fine answer

    Chris> or

    Chris> Because it is important for the design that it work exactly
    Chris> the same way: which I don't understand at all.

It wasn't important for the design that list comprehensions work exactly like
for loops, but given Python's semantics it is hard to see how they could have
been done any other way.  It was also certainly the easiest route.
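The desugaring is mechanical; a comprehension and its hand-written loop
build the same list the same way:

```python
# A list comprehension and its hand-desugared for-loop equivalent.
items = [1, 2, 3]

comp = [str(x) for x in items]

loop = []
for x in items:
    loop.append(str(x))

# Both produce ['1', '2', '3'].
```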

    >> In theory, an optimizer could identify and replace some list
    >> comprehensions with calls to map,

    Chris> This is what I have in mind.

    >> but they wouldn't be quite equivalent.  In the examples above, there
    >> is no guarantee that the definition of the str function doesn't
    >> change during the execution of either the for loop or the list
    >> comprehension (think multiple threads of execution ;-).  The call to
    >> map, on the other hand, binds str once before map is called.

    Chris> Again, is this a design decision? that the expression in a list
    Chris> comprehension should be able to be changed mid-comprehension? In
    Chris> another thread to boot? This sounds like the worst of confusing
    Chris> and unpredictable behaviour to me!!!

No, this is Python's principle of least astonishment at work.  The language
is dynamic all over the place.  Placing arbitrary restrictions on
those semantics because they happen to occur between "[" and "]" would have
been wrong.

    Chris> If it's OK from a design standpoint to require that the function
    Chris> in a map is bound once, it seems pretty reasonable to have the
    Chris> same requirement for a list comprehension. In fact, it seems like
    Chris> a darn good idea! The fact that the current implementation
    Chris> doesn't enforce it is a pretty poor design justification...or am
    Chris> I missing something?

map() is like any other function written in C.  It does its thing, then
returns.  It would be impossible for map() to check back to see if the name
"str" was rebound during its execution.  All it knows is that it is passed a
callable object.  It has no idea what that object was bound to in the
calling scope.  It calls it once for each element of its various argument
lists.  That would be true even if map() were written in Python.
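A small sketch makes the difference concrete (the names here are invented
for illustration): a callable that rebinds the global name it was reached
through.  The comprehension re-resolves the name on every iteration, while
map() holds onto the object it was handed:

```python
def once(x):
    global fn
    fn = lambda y: y * 2   # rebind the global name mid-iteration
    return x + 1

fn = once
# The comprehension looks up `fn` afresh on every pass, so it sees
# the rebinding after the first element:
comp = [fn(i) for i in [10, 20, 30]]      # [11, 40, 60]

fn = once
# map() captured the object bound to `fn` at call time; the rebinding
# inside once() cannot change which callable map applies:
mapped = list(map(fn, [10, 20, 30]))      # [11, 21, 31]
```

This is exactly why a comprehension-to-map rewrite is not semantics-preserving
in general.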

    >> But there is no guarantee that the list you are operating on isn't
    >> modified by another thread during execution.

    Chris> but there should be !!

Why here and not elsewhere in the language?  I'm afraid you're tilting at
windmills.
 
    >> It's not a matter of lack of interest as much as a desire to keep the
    >> semantics of Python and lack of time.

    Chris> My point is that I imagine most sequences used in Python are, in
    Chris> fact, homogeneous, but there is no way to know this, and it could
    Chris> make a huge difference in performance. Look at NumPy. I simply
    Chris> could not use Python without NumPy.

You're more than welcome to write a PEP.  One approach might be to propose
that the array object be "brought into the fold" as a builtin object and
modifications be made to the virtual machine so that homogeneity can be
assumed when operating on arrays.
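For reference, the stdlib array object already stores homogeneous, C-typed
values and enforces that constraint at insertion time; the PEP idea above
is to let the virtual machine exploit the guarantee:

```python
from array import array

# array stores homogeneous C-typed values: typecode "d" means C double.
a = array("d", [1.0, 2.0, 3.0])

a.append(4.0)          # fine: a float
try:
    a.append("oops")   # rejected: not a real number
except TypeError:
    pass  # the homogeneity constraint is enforced on insertion
```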

    >> Here are some efforts I'm aware of.  Note that there are lots of ways
    >> performance can be addressed, not just by constraining types.

    Chris> Certainly. And when any kind of type constraining is suggested,
    Chris> what a lot of people say is that what we REALLY need is an
    Chris> all-powerful type inferring engine...talk about time to implement
    Chris> issues! I think one problem Python development does suffer from is
    Chris> that nothing gets done that isn't reasonably doable with a fairly
    Chris> small amount of work, but some things are simply too big to get
    Chris> done: like a type inferring engine.

I don't think you need any sophisticated type inferencing engine.  That's
one way to do things, but not the only way.

    >> * Psyco (Armin Rego) - a run-time specializing virtual machine that
    >>   sees what sorts of types are input to a function and compiles a
    >>   type- or value-specific version of that function on-the-fly.  I
    >>   believe Armin is looking at some JIT code generators in addition to
    >>   or instead of another virtual machine.

    Chris> wouldn't homogeneous sequences be a big help for this?

Yes, they could be, but it wouldn't be necessary to get some useful
speedups.  The prototype I saw compiled a specialized version of a function
based on its input types.  For instance, you might have:

    def f(a, b):
        return (a+b)**2

Psyco (as I understand it) would compile a version of f that assumed that a
and b were ints, assuming it saw at runtime that f was being called with two
ints.  After that, it might redirect the execution to the special version of
f if a two-int call was made, but revert to the old version if not.

It can go even further though, at least in theory.  Suppose Psyco observed
that f was often called with a == 1 and b == 4.  In that case it might build
a specialized version of f that simply returns 25.

Psyco could also presumably take advantage of homogeneity.  It would just be
a "simple matter of programming".
 
    >> * Static typing SIG (http://www.python.org/sigs/types-sig/) - I'm not
    >> familiar with this SIG's work, so you'll have to browse the archives.

    Chris> This is a pretty quiet SIG these days. I know a while ago Guido
    Chris> said that there would probably be some kind of static typing in
    Chris> Py3k, but mostly for compile-time type checking, not
    Chris> performance. I haven't heard much since then, and it looks to be
    Chris> further off. Personally, I think a few static type
    Chris> declarations could help Py2C and the like a lot.

Sure, but then it wouldn't be Python. ;-)

Of course the presence of types in the language wouldn't preclude using that
information to improve performance.
 
    >> * Python2C (Bill Tutt, Greg Stein) - A "module compiler" that generates
    >> a C version of a Python module.  http://www.mudlib.org/~rassilon/p2c/

    Chris> I've been keeping my eye on this one, but it seems to be sleeping
    Chris> at the moment... someone please correct me if I'm wrong.
    Chris> Homogeneous sequences would be very handy here as well.

Yes, I believe it has been fairly stable for a while.  I doubt that Bill or
Greg would turn away patches that improve it. ;-)

    >> * Various peephole optimizers - I wrote one a few years ago.  There's
    >>   also Michael Hudson's bytecodehacks project (on Sourceforge).  The
    >>   xfreeze bytecode freezer (part of the standard distribution?) also
    >>   does some peepholing.

    Chris> Hmm, I have no idea what peepholing is.. I'll have to check that
    Chris> out when I get the time.

Basically, you identify patterns in the instruction sequence and replace
them with more efficient instruction sequences.  Here's a paper I wrote for
SPAM-7:

    http://musi-cal.mojam.com/~skip/python/spam7/optimizer.html

I updated as many of the links in the paper as I could do easily.  Some are
still broken though.
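As a latter-day illustration, CPython itself now does this kind of
optimization at compile time: constant subexpressions are folded before the
bytecode ever runs, which is visible in the code object's constants table:

```python
# CPython folds constant expressions during compilation, a
# peephole-style optimization: "2 * 3" never survives to runtime.
code = compile("x = 2 * 3", "<example>", "exec")

# The folded result 6 appears in the constants table instead of the
# operands; disassembly would show a single LOAD_CONST of 6.
assert 6 in code.co_consts
```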

-- 
Skip Montanaro (skip at pobox.com)
http://www.mojam.com/
http://www.musi-cal.com/



