# [Numpy-discussion] Comparing NumPy/IDL Performance

Nathaniel Smith njs at pobox.com
Mon Sep 26 11:37:34 EDT 2011

```
On Mon, Sep 26, 2011 at 8:24 AM, Zachary Pincus <zachary.pincus at yale.edu> wrote:
> Test 3:
>    #Test 3 - Add 2000000 scalar ints
>    nrep = 2000000 * scale_factor
>    for i in range(nrep):
>        a = i + 1
>
> Well, Python looping is slow... one doesn't write such loops in idiomatic code when the underlying intent can be recast as array operations in NumPy.
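For instance, the whole of Test 3 collapses into a single array operation. A sketch (nrep and the +1 are taken from the benchmark above; note that the array version computes all nrep sums at once, where the Python loop only keeps the last scalar):

import numpy as np

nrep = 2000000
# One vectorized add replaces the 2000000-iteration Python loop:
# element i of the result is i + 1.
a = np.arange(nrep) + 1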

Also, in this particular case, what you're mostly measuring is how
much time it takes to allocate a giant list of integers by calling
'range'. Using 'xrange' instead speeds things up by a factor of two:

def f():
    nrep = 2000000
    for i in range(nrep):
        a = i + 1

def g():
    nrep = 2000000
    for i in xrange(nrep):
        a = i + 1

In [8]: timeit f()
10 loops, best of 3: 138 ms per loop
In [9]: timeit g()
10 loops, best of 3: 72.1 ms per loop
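The same comparison can be reproduced outside IPython with the standard-library timeit module. A sketch (the try/except is only there so it also runs on Python 3, where plain range is already lazy and the two functions should time about the same):

import timeit

try:
    xrange
except NameError:
    xrange = range  # Python 3: range is already an xrange-style lazy object

def f():
    nrep = 2000000
    for i in range(nrep):
        a = i + 1

def g():
    nrep = 2000000
    for i in xrange(nrep):
        a = i + 1

print(timeit.timeit(f, number=10))  # total seconds for 10 runs
print(timeit.timeit(g, number=10))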

Usually I don't worry about the difference between xrange and range --
it doesn't really matter for small loops or loops that are doing more
work inside each iteration -- and that's every loop I actually write
in practice :-). And if I really did need to write a loop like this
(lots of iterations with a small amount of work in each, and speed is
critical) then I'd use Cython. But you might as well get in the habit
of using 'xrange'; it won't hurt and occasionally will help.
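To see where the factor of two comes from: Python 2's range built the entire list of nrep integers up front, while xrange produces them one at a time. A sketch of the difference, using Python 3 (where range is the lazy one, and list(range(n)) is roughly the list that Python 2's range allocated):

import sys

n = 2000000
lazy = range(n)          # constant-size object, like Python 2's xrange
eager = list(range(n))   # the n-element list Python 2's range built up front

print(sys.getsizeof(lazy))   # a few dozen bytes, independent of n
print(sys.getsizeof(eager))  # megabytes of pointers, plus the int objects themselves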

-- Nathaniel

```