[SciPy-user] Re: Help on performance of signal.convolve

Yichun Wei yichunwe at usc.edu
Wed Jan 12 18:01:01 EST 2005


I think I need an FFT to do this. I also found a thread on this list 
discussing this in the 2-dimensional case:

http://www.scipy.net/pipermail/scipy-user/2004-May/002888.html
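
Roughly what I have in mind (a sketch only, written with today's
numpy-style rfftn/irfftn names, which is an assumption on my part; the
Numeric-era scipy 0.3 spells its FFT routines differently, and more
recent SciPy releases have a scipy.signal.fftconvolve that does
essentially this): pad both arrays to the full linear-convolution
size, multiply the transforms, and trim the result to the 'valid'
region.

import numpy as np

def fft_convolve_valid(a, b):
    # 3-d linear convolution via FFT, trimmed to the 'valid' region.
    # Assumes b is at least as large as a along every axis.
    full = [sa + sb - 1 for sa, sb in zip(a.shape, b.shape)]
    # Padding to the full linear-convolution size prevents circular
    # wrap-around.  (Rounding these sizes up to FFT-friendly lengths,
    # e.g. powers of two, can speed the transforms up further.)
    c = np.fft.irfftn(np.fft.rfftn(a, full) * np.fft.rfftn(b, full), full)
    # Keep only the points where the kernel fits entirely inside the
    # data: offset sa-1 and length sb-sa+1 along each axis.
    return c[tuple(slice(sa - 1, sb) for sa, sb in zip(a.shape, b.shape))]

For the (64,64,41) and (64,64,1800) shapes quoted below this returns a
(1, 1, 1760) array, the same shape mode='valid' gives.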


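For scale, a rough operation count for the direct method timed below
(my own arithmetic, not from the profile): mode='valid' costs one
multiply-add per kernel element for every output point.

kernel_macs = 64 * 64 * 41              # 167,936 multiply-adds per output point
out_points  = 1 * 1 * (1800 - 41 + 1)   # the 'valid' output shape is (1, 1, 1760)
total_macs  = kernel_macs * out_points  # ~3.0e8 multiply-adds in total

Three hundred million multiply-adds in 420 s is under a million per
second, so the time presumably goes into per-element looping overhead
in the generic N-d correlation loop rather than into the arithmetic
itself; a well-tuned FFT library sidesteps that overhead.
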

Yichun Wei wrote:
> Dear Experts,
> 
> Sorry if I was not concrete, or even not correct, the last time I
> posted this asking for help.
> 
> I'd like to convolve a (64,64,41) kernel with a (64,64,1800) array
> with mode='valid'. What would be the fastest method in scipy?
> 
> Here is what I tried with signal.convolve; it takes >400 s. a.shape is
> (64,64,41) and b.shape is (64,64,1800):
> 
> res = signal.convolve(a, b, mode='valid')
> 
> It took around 420 s of CPU time on my P-IV 1.8 GHz CPU. I have the
> file dumped from the profiler; I can attach it if you want to have a
> look. 'same' and 'full' never finished when I ran them. I am using the
> Enthought Python with scipy 0.3. Is this performance normal on a P-IV
> 1.8 GHz CPU?
> 
>>>> p.sort_stats('cumulative').print_stats(10)
> 
> Wed Jan 12 11:31:08 2005    Profile_k_GetRespons_same
> 
>    1631 function calls (1623 primitive calls) in 420.407 CPU seconds
> 
>    Ordered by: cumulative time
>    List reduced from 175 to 10 due to restriction <10>
> 
>    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>         1    0.001    0.001  420.407  420.407 profile:0(res = k.GetResponse())
>         1    0.000    0.000  420.406  420.406 <string>:1(?)
>         1    0.000    0.000  420.406  420.406 F:\tmp\py\cte\kernel.py:173(GetResponse)
>         1  419.705  419.705  419.705  419.705 C:\Python23\Lib\site-packages\scipy\signal\signaltools.py:79(convolve)
>       5/1    0.000    0.000    0.701    0.701 C:\Python23\Lib\site-packages\scipy_base\ppimport.py:299(__getattr__)
>       5/1    0.033    0.007    0.701    0.701 C:\Python23\Lib\site-packages\scipy_base\ppimport.py:252(_ppimport_importer)
>         1    0.091    0.091    0.699    0.699 C:\Python23\Lib\site-packages\scipy\signal\__init__.py:5(?)
>         1    0.000    0.000    0.395    0.395 C:\Python23\Lib\site-packages\scipy\signal\signaltools.py:4(?)
>         1    0.091    0.091    0.313    0.313 C:\Python23\Lib\site-packages\scipy\stats\__init__.py:5(?)
>         1    0.019    0.019    0.196    0.196 C:\Python23\Lib\site-packages\scipy\signal\bsplines.py:1(?)
> 
> 
> <pstats.Stats instance at 0x007E9DA0>
> 
> 
> I read some performance guides, like the one by Prabhu at
> http://www.scipy.org/documentation/weave/weaveperformance.html. But
> since signal.convolve is only a thin wrapper around
> sigtools._correlateND, I think the work is already being done in C++.
> If that is the case, I don't think it would be profitable to use
> blitz, swig, or f2py.
> 
> Also, I found there is an fftpack.convolve; however, I am not sure 
> whether it works only on 1-d arrays, or whether it is appropriate to 
> use an FFT for the convolution I am doing. (I also found that the 
> convolution object in numarray has an option to decide whether or not 
> to use an FFT.)
> 
> Could you be kind enough to point out where effort should be put to 
> improve the performance of such a convolution? Any hint will be greatly 
> appreciated!
> 
> - yichun
> 
> 
> 
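
On the fftpack.convolve question above: as far as I can tell it works
on 1-d arrays only, so for this 3-d case an N-d FFT route like the
sketch at the top (or numarray's FFT-enabled convolution) is the thing
to try. A quick check of that sketch against the shapes in question,
with made-up random data:

import numpy as np

a = np.random.rand(64, 64, 41)
b = np.random.rand(64, 64, 1800)
res = fft_convolve_valid(a, b)   # the sketch defined at the top
print(res.shape)                 # (1, 1, 1760)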




More information about the SciPy-User mailing list