[Numpy-discussion] Data cube optimization for combination

Hanno Klemm klemm at phys.ethz.ch
Tue Mar 6 07:20:21 EST 2012


Hi,

this should work:

import numpy as np

ndim = 20

cube = np.random.rand(32, ndim, ndim)
result = np.zeros([ndim, ndim], np.float32)

def combine(cube, result):
    # loop version: one Python-level np.sqrt call per (ii, jj) pixel
    for ii in range(ndim):
        for jj in range(ndim):
            result[ii, jj] = np.sqrt(cube[:, ii, jj]).sum()
    return result

def combine2(cube, result):
    # vectorized version: sqrt over the whole cube, then reduce along axis 0
    r2 = np.sqrt(cube)
    r3 = r2.sum(axis=0)
    return r3

r3 = combine2(cube, result)
r1 = combine(cube, result)

print(np.allclose(r3, r1))
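
Note that combine2 ignores the preallocated result buffer and returns a
float64 array, even though result was created as float32. If you want to
keep the float32 output and reuse the buffer, a possible sketch (the only
new piece is out= on the reduction; combine2_f32 is just an illustrative
name):

def combine2_f32(cube, result):
    # the float64 reduction is cast into the preallocated float32 buffer
    np.sqrt(cube).sum(axis=0, out=result)
    return result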

When I time it in IPython with ndim=20, I get:

In [8]: timeit combine2(cube, result)
1000 loops, best of 3: 246 us per loop

In [9]: timeit combine(cube, result)
100 loops, best of 3: 3.86 ms per loop
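
One caveat for the full-size cube: np.sqrt(cube) allocates a temporary as
large as the cube itself (32 x 2048 x 2048 in float64 is roughly 1 GB). If
memory is tight, two possible sketches, one that takes the square root in
place and one that accumulates plane by plane so the temporary is only a
single 2048 x 2048 array:

def combine_inplace(cube):
    # overwrites cube; avoids a second full-size temporary
    np.sqrt(cube, out=cube)
    return cube.sum(axis=0)

def combine_chunked(cube):
    # leaves cube untouched; temporary is one plane at a time
    result = np.zeros(cube.shape[1:], cube.dtype)
    for plane in cube:          # iterate over axis 0
        result += np.sqrt(plane)
    return result

Both should give the same answer as combine2 up to floating point rounding.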



Best regards,
Hanno


On 06.03.2012 at 13:00, Jose Miguel Ibáñez wrote:

> Hello everyone,
>
> does anyone know of an efficient implementation (maybe using a
> numpy.where statement) of the following code for combining a data
> cube (3-d array)?
>
> import numpy as np
>
> def combine():
>
>     cube = np.random.rand(32, 2048, 2048)
>     result = np.zeros([2048, 2048], np.float32)
>
>     for ii in range(2048):
>         for jj in range(2048):
>             result[ii, jj] = np.sqrt(cube[:, ii, jj]).sum()
>
>
> It takes a long time to run; however,
>
>
>>> result = np.median(cube,0)
>
>
> only takes around one second! What is going on here? Any suggestions?
>
>
>
> Thanks !