[Numpy-discussion] concatenate a tuple of lists 10x slower in numpy 1.0.1

Martin Spacek numpy at mspacek.mm.st
Fri Jan 19 23:28:54 EST 2007


Hello,

I just upgraded from numpy 1.0b5 to 1.0.1, and I noticed that a part of
my code that was using concatenate() was suddenly far slower. I
downgraded to 1.0, and the slowdown disappeared. Here's the code
and the profiler results for 1.0 and 1.0.1:

>>> import numpy as np
>>> np.version.version
'1.0'
>>> a = []
>>> for i in range(10):
...     a.append([np.random.rand(100000)]) # make nested list of arrays
...
>>> import profile
>>> profile.run('np.concatenate(tuple(a))') # concatenate tuple of lists
                                             # gives a 10 x 100000 array

          4 function calls in 0.046 CPU seconds

    Ordered by: standard name

    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
         1    0.045    0.045    0.045    0.045 :0(concatenate)
         1    0.000    0.000    0.000    0.000 :0(setprofile)
         1    0.001    0.001    0.046    0.046 <string>:1(?)
         1    0.000    0.000    0.046    0.046 profile:0(np.concatenate(tuple(a)))
         0    0.000             0.000          profile:0(profiler)
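
If I understand correctly, concatenate() first converts each element of its
input sequence to an array, so each [array] item becomes a (1, 100000) array
before the join along axis 0. Spelling that conversion out by hand (just a
sketch to show the equivalence) gives the same result:

>>> rows = [np.asarray(x) for x in a]  # each [1D array] -> (1, 100000) array
>>> rows[0].shape
(1, 100000)
>>> np.concatenate(rows).shape  # joined along axis 0
(10, 100000)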


>>> import numpy as np
>>> np.version.version
'1.0.1'
>>> a = []
>>> for i in range(10):
...     a.append([np.random.rand(100000)])
...
>>> import profile
>>> profile.run('np.concatenate(tuple(a))')

          4 function calls in 0.532 CPU seconds

    Ordered by: standard name

    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
         1    0.531    0.531    0.531    0.531 :0(concatenate)
         1    0.000    0.000    0.000    0.000 :0(setprofile)
         1    0.001    0.001    0.532    0.532 <string>:1(?)
         1    0.000    0.000    0.532    0.532 profile:0(np.concatenate(tuple(a)))
         0    0.000             0.000          profile:0(profiler)

Going from numpy 1.0 to 1.0.1, concatenate() on the tuple of lists slows
down by more than 10x (0.045 s vs. 0.531 s). In retrospect, I'm doing
this in an odd way. If I replace the tuple of lists with a flat list of
arrays and use reshape() instead, it's much faster and gives me the same
10 x 100000 resulting array:

>>> b = []
>>> for i in range(10):
...     b.append(np.random.rand(100000))
...
>>> profile.run('np.concatenate(b).reshape(10, 100000)')
          5 function calls in 0.023 CPU seconds

    Ordered by: standard name

    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
         1    0.021    0.021    0.021    0.021 :0(concatenate)
         1    0.000    0.000    0.000    0.000 :0(reshape)
         1    0.000    0.000    0.000    0.000 :0(setprofile)
         1    0.001    0.001    0.022    0.022 <string>:1(?)
         1    0.000    0.000    0.023    0.023 profile:0(np.concatenate(b).reshape(10, 100000))
         0    0.000             0.000          profile:0(profiler)
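
Incidentally, vstack() on the flat list should give the same 10 x 100000
array; I haven't profiled it as carefully, but for completeness:

>>> d = np.vstack(b)  # stack the ten 1D arrays as rows
>>> d.shape
(10, 100000)
>>> np.all(d == np.concatenate(b).reshape(10, 100000))
True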

The flat-list-plus-reshape approach is equally fast in 1.0 and 1.0.1.
Still, I thought it prudent to bring up the slowdown with the
tuple-of-lists method. Is this a known issue?
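
In case anyone wants to reproduce this without the profiler overhead, here
is a quick timeit script (a sketch; absolute times will of course vary by
machine):

import timeit

setup = """
import numpy as np
a = [[np.random.rand(100000)] for i in range(10)]  # nested: list of [array]
b = [np.random.rand(100000) for i in range(10)]    # flat: list of arrays
"""
t1 = timeit.Timer('np.concatenate(tuple(a))', setup)
t2 = timeit.Timer('np.concatenate(b).reshape(10, 100000)', setup)
print 'tuple of lists:      %.3f s per call' % (t1.timeit(10) / 10)
print 'flat list + reshape: %.3f s per call' % (t2.timeit(10) / 10)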

I ran these tests in Python 2.4.4 in a Windows console. I use the win32
py24 binaries.

Cheers,

Martin
