Profiling, sum-comprehension vs reduce

Steven D'Aprano steve at REMOVE-THIS-cybersource.com.au
Sat Sep 13 18:52:31 CEST 2008


On Sat, 13 Sep 2008 01:06:22 -0700, cnb wrote:

> This must be because of the implementation, right? Shouldn't reduce be
> faster, since it iterates once over the list? Doesn't sum first
> construct the list and then sum it?

What makes you think that?

Given the speed of sum(), it clearly isn't generating a full list before 
summing. Why would it? sum() just walks its iterable once, keeping a 
running total.
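A minimal sketch of that point (Python 3 syntax here; in Python 2 you'd use a generator over xrange the same way): sum() happily consumes a generator, which produces one value at a time, so no intermediate list ever exists.

```python
# sum() accepts any iterable; a generator yields values on demand,
# so sum() keeps only a running total, never a materialized list.
gen = (i * i for i in range(1000))  # generator expression, not a list
total = sum(gen)                    # single pass, constant extra memory
print(total)                        # prints 332833500
```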


> reduce with named function:  37.9864357062 
> reduce with nested, named function:  39.4710288598 
> reduce with lambda:  39.2463927678
> sum comprehension:  25.9530121845
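A comparison along those lines can be reproduced with a sketch like the following (Python 3 syntax: reduce now lives in functools and range replaces xrange; the absolute times will of course differ from the numbers quoted above):

```python
from functools import reduce  # reduce is a builtin in Python 2
from timeit import timeit

def add(x, y):
    return x + y

data = range(10000)

# Time each spelling of the same summation.
t_reduce = timeit(lambda: reduce(add, data), number=100)
t_sum = timeit(lambda: sum(x for x in data), number=100)

print("reduce with named function:", t_reduce)
print("sum over a generator:      ", t_sum)
```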

If you want to see reduce really shine, time it with a C-based function 
rather than one written in pure Python:

>>> from timeit import Timer
>>> Timer('reduce(add, xrange(10000))', 
... 'from operator import add').repeat(number=10000)
[19.724750995635986, 19.410486936569214, 19.614511013031006]
>>>
>>> Timer('reduce(add, xrange(10000))', 
... 'def add(x, y): return x+y').repeat(number=10000)
[45.210143089294434, 44.814558982849121, 46.906874895095825]


You probably won't see much (if any) benefit on small lists, so make 
sure you test on a significantly sized input.

Of course, sum() is even faster than reduce:

>>> Timer('sum(xrange(10000))').repeat(number=10000)
[9.814924955368042, 8.7169640064239502, 9.5062401294708252]
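The ranking makes sense: sum() runs its accumulation loop entirely in C, while reduce() must make a function call on every step, even when that function is operator.add. A quick sanity check that all three spellings compute the same value (Python 3; functools.reduce assumed), verified against the closed form for the sum of 0..n-1:

```python
from functools import reduce
from operator import add

n = 10000
expected = n * (n - 1) // 2       # closed form: 0 + 1 + ... + (n-1)

assert sum(range(n)) == expected              # C loop inside sum()
assert reduce(add, range(n)) == expected      # one add() call per element
print(expected)                               # prints 49995000
```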



-- 
Steven


