Python 2.0b1 List comprehensions are slow

Alex Martelli aleaxit at
Mon Sep 11 14:44:38 CEST 2000

"Dan Schmidt" <dfan at> wrote in message
news:wkzolfqfsm.fsf at
> "Alex Martelli" <aleaxit at> writes:
> | Performance appears to be roughly like (best-out-of-3 for each):
> | >>> profile.run('junk=withlambda(10000)')
> |          10003 function calls in 0.886 CPU seconds
> | >>> profile.run('junk=withcompre(10000)')
> |          3 function calls in 0.063 CPU seconds
> I don't know a lot about this stuff, but I think that the profiler
> isn't adequately compensating for the extra work that it has to do to
> track function calls, so the non-comprehension versions get an unfair

I guess you must be right; i.e., I must have failed the calibration
of the profiler somewhere.  _Without_ the profiler, 36 and 60 msec
are indeed reasonably-repeatable elapsed-time measurements for
withlambda and withcompre, for example.  I'll look again into the
calibration issue, which I thought I understood, but clearly didn't.
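For the record, a bare elapsed-time comparison can be done along these
lines.  A sketch only: the bodies of withlambda and withcompre never
appeared in the thread, so map-plus-lambda versus a plain comprehension
is my assumption, and time.perf_counter is the modern spelling of the
timer:

```python
import time

def withlambda(n):
    # map + lambda: one Python-level function call per element
    return list(map(lambda x: x * 2, range(n)))

def withcompre(n):
    # list comprehension: the loop body runs inline, no per-element call
    return [x * 2 for x in range(n)]

def elapsed(fn, n):
    # wall-clock time for one call of fn(n), in seconds
    t0 = time.perf_counter()
    fn(n)
    return time.perf_counter() - t0

print("withlambda: %.4f s" % elapsed(withlambda, 100000))
print("withcompre: %.4f s" % elapsed(withcompre, 100000))
```

Measured without the profiler, the gap between the two comes from the
per-element function-call overhead alone, which is much smaller than the
profiler's per-call bookkeeping.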

Meanwhile, at least I can make sense of the concerns being expressed!

Heisenberg must be smiling somewhere, about measurements altering
the thing being measured...!
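The disparity in the quoted call counts can be reproduced with the
profiler itself.  Roughly like this (same assumed function bodies as
above; on a modern Python the module is cProfile, and the exact counts
differ a little from the 2.0b1 figures):

```python
import cProfile

def withlambda(n):
    # one Python-level call per element: the profiler records ~n extra calls
    return list(map(lambda x: x * 2, range(n)))

def withcompre(n):
    # the comprehension runs inline: only a handful of profiled calls
    return [x * 2 for x in range(n)]

def call_count(fn, n):
    # count how many calls the profiler records while fn(n) runs
    pr = cProfile.Profile()
    pr.enable()
    fn(n)
    pr.disable()
    return sum(entry.callcount for entry in pr.getstats())

print("withlambda:", call_count(withlambda, 10000))  # on the order of 10000+
print("withcompre:", call_count(withcompre, 10000))  # only a few
```

Since the profiler pays its bookkeeping cost on every one of those
calls, the map-plus-lambda version is charged thousands of times more
overhead than the comprehension, which is exactly the distortion Dan
describes.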


More information about the Python-list mailing list