Hi,

I'm trying to compute the convolution of a 2D array, and I see that there are several ways to do that in SciPy. As the original data C and the kernel R are about the same size in my case, I'd profit from an FFT-based implementation, which I see is currently given by

    scipy.signal.fftconvolve(C, R, mode='same')

and also

    scipy.stsci.convolve.convolve2d(data=C, kernel=R, mode='constant', cval=0.0, fft=1)

The second method is much faster than the first, and as far as I can see it spits out the same results.

Now, when the kernel is actually larger than the data, the resulting array has the shape of the kernel. Is there any way to restrict the computation to the size of the data? At first I thought that was what mode='same' is for. I tried cutting the extra data off of the resulting array, but I'm not quite sure which part I should get rid of.

Any hints here?

Cheers,
Nico
On Sat, Mar 20, 2010 at 7:51 AM, Nico Schlömer <nico.schloemer@gmail.com> wrote:
It's always good to take a look at the source of fftconvolve:

    elif mode == "same":
        if product(s1, axis=0) > product(s2, axis=0):
            osize = s1
        else:
            osize = s2
        return _centered(ret, osize)
    elif mode == "valid":
        return _centered(ret, abs(s2 - s1) + 1)

with

    from scipy.signal.signaltools import _centered

It should be possible to set osize to s1 = array(in1.shape) (C in your notation).

scipy.stsci has been removed recently from scipy and will not be in the scipy 0.8 version. (I didn't try myself whether it works.)

Josef
_______________________________________________
SciPy-User mailing list
SciPy-User@scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
Thanks for the hint! I've now set mode='full' and extract the data myself:

    CR = scipy.signal.fftconvolve(C, R, mode='full')
    from scipy.signal.signaltools import _centered
    CR = _centered(CR, C.shape)

Cheers,
Nico
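For readers without the scipy source at hand, the full-then-crop combination above can be sketched with NumPy alone. `fftconvolve_same` below is a hypothetical helper, not the scipy implementation: it pairs a zero-padded FFT product with the same center-crop that `_centered` performs.

```python
import numpy as np

def fftconvolve_same(data, kernel):
    # Shape of the full linear convolution along each axis.
    s1 = np.asarray(data.shape)
    full = tuple(s1 + np.asarray(kernel.shape) - 1)
    # Zero-padding both inputs to the full size turns the circular
    # FFT convolution into a linear one.
    ret = np.fft.irfft2(np.fft.rfft2(data, full) * np.fft.rfft2(kernel, full), full)
    # Center-crop back to data.shape, mirroring what _centered does.
    start = (np.asarray(full) - s1) // 2
    return ret[start[0]:start[0] + s1[0], start[1]:start[1] + s1[1]]
```

This returns the cropped-to-data result regardless of whether the kernel is larger or smaller than the data, which is exactly the behavior asked about above.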
On Sat, Mar 20, 2010 at 12:07 PM, Nico Schlömer <nico.schloemer@gmail.com> wrote:
Thanks for the hint!
I've now set "mode='full'" and extract the data myself by
    CR = scipy.signal.fftconvolve(C, R, mode='full')
    from scipy.signal.signaltools import _centered
    CR = _centered(CR, C.shape)
Glad it works. I forgot to mention earlier: scipy.stsci.convolve.convolve2d uses numpy.fft.fft2. If it is much faster than the n-dimensional fft convolution, it might be worth writing a fftconvolve2.

Josef
If it is much faster than the n-dimensional fft convolution
For me it is about 60(!) times faster; see the attached graph (mind the log scaling). I convolved NxN data with NxN kernels. The source code for producing the figure is attached as well.
be worth writing a fftconvolve2

What remains to be checked is the ratio for the case where the kernel is a lot smaller than the data. If that turns out to be equally fast, I don't see any reason to keep the current implementation of scipy.signal.fftconvolve.
Anyway, this may also be related to that other discussion going on about FFTW. I'm not sure what the current status of FFT implementations in SciPy is, but at first glance there seem to be quite a few, which to me seems redundant and unhelpful. Does anyone have more insight here?

Cheers,
Nico
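The benchmark script itself is an attachment and isn't reproduced here, but a minimal harness for this kind of timing comparison might look like the following. The two statements compared are illustrative stand-ins (scipy.stsci is not assumed to be installed); only the best-of-N timing pattern is the point.

```python
import timeit

def bench(stmt, setup, repeat=3, number=5):
    # Best-of-repeat average wall time per call, as the timeit docs recommend.
    return min(timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)) / number

setup = "import numpy as np; x = np.random.rand(128, 128)"
t_fft2 = bench("np.fft.fft2(x)", setup)  # 2-D-specific transform
t_fftn = bench("np.fft.fftn(x)", setup)  # general n-D transform
print(f"fft2: {t_fft2:.2e} s/call, fftn: {t_fftn:.2e} s/call")
```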
On Mon, Mar 22, 2010 at 8:17 PM, Nico Schlömer <nico.schloemer@gmail.com> wrote:
If it is much faster than the n-dimensional fft convolution
For me it is about 60(!) times faster, see the attached graph (mind the log scaling). I had NxN data with NxN kernels convolved.
The source code for producing the figure is attached as well.
be worth writing a fftconvolve2

What remains to be checked is the ratio for the case where the kernel is a lot smaller than the data. If that turns out to be equally fast, I don't see any reason to keep the current implementation of scipy.signal.fftconvolve.
I'm using it (or tried to use it) also for 3d data, so we still want to keep the nd version. You could file an enhancement ticket for a fftconvolve2d.

A few weeks ago there was also the remark in a thread that signal.convolve2d is faster than the nd version. I'm not an image processing person, so I don't know what the optimal convolution behavior for this is, or what could be a replacement for stsci.convolve. There is also a convolution in ndimage, but I have no idea about its performance, and I don't know whether scikits.image has any special convolutions.
Anyway, this may also be related to that other discussion going on about FFTW. I'm not sure what the current status about FFT implementations in SciPy is, but at first glance there seem to be quite a few really, which to me seems redundant and unhelpful.
Does anyone have more insight here?
fftw has been removed from scipy. scipy.fft and numpy.fft are plain vanilla.

Cheers,
Josef
fftw has been removed from scipy. scipy.fft and numpy.fft are plain vanilla.
Interesting... Curious to know the reason... Whether for speed or logistics? ******* In any case, I suppose I still have the problem of looking for a fast convolution that I can use--with minimal effort--to replace scipy.signal.signaltools.fftconvolve. My regards, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dr Joseph Anderson Lecturer in Music School of Arts and New Media University of Hull, Scarborough Campus, Scarborough, North Yorkshire, YO11 3AZ, UK T: +44.(0)1723.362392 T: +44.(0)1723.357370 F: +44.(0)1723.350815 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ***************************************************************************************** To view the terms under which this email is distributed, please go to http://www.hull.ac.uk/legal/email_disclaimer.html *****************************************************************************************
fftw has been removed from scipy. scipy.fft and numpy.fft are plain vanilla.

Interesting... Curious to know the reason... Whether for speed or logistics?
I read at <http://cens.ioc.ee/~pearu/scipy/FFTPACK_NOTES.html> that SciPy's FFTPACK actually interfaces FFTW as well. Apparently they also do some speed heuristics there. FFTPACK devs, is that still up-to-date?

--Nico
Nico Schlömer wrote:
fftw has been removed from scipy. scipy.fft and numpy.fft are plain vanilla. Interesting... Curious to know the reason... Whether for speed or logistics?
I read at <http://cens.ioc.ee/~pearu/scipy/FFTPACK_NOTES.html> that SciPy's FFTPACK actually interfaces FFTW as well.
This is not true anymore; only the original, Fortran-based FFTPACK code has been used by scipy.fftpack since 0.7.0.

David
On 03/23/10 15:51, Joseph Anderson wrote:
fftw has been removed from scipy. scipy.fft and numpy.fft are plain vanilla.
Interesting...
Curious to know the reason... Whether for speed or logistics?
Firstly, fftw in scipy and numpy was fftw2 IIRC, and switching to fftw3 is a bit more involved due to the plan semantics (I'm actually planning to implement an "old style" API, i.e. y = fft(x), in pyfftw at some point). The other thing is that fftw3 (I don't know about 2) is GPL, which does not fit with scipy, which is BSD-like. Note I'm just guessing, as I was not involved in the decision to remove fftw from scipy.
So... Back to my first question: is there an easy way for me to get a fast y = fft(x) on G5 PPC OSX?

(BTW--last night I tried a quick enpkg upgrade on my G5 PPC OSX, with little apparent success. Need to look into that.)

Thanks for your advice.

My regards,
Dr Joseph Anderson
Joseph Anderson wrote:
(BTW--last night I tried a quick enpkg upgrade on my G5 PPC OSX, with little apparent success. Need to look into that.)
A while back, Enthought said they were no longer properly supporting PPC :-(

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R
7600 Sand Point Way NE
Seattle, WA 98115

(206) 526-6959 voice
(206) 526-6329 fax
(206) 526-6317 main reception

Chris.Barker@noaa.gov
If it is much faster than the n-dimensional fft convolution
For me it is about 60(!) times faster, see the attached graph (mind the log scaling). I had NxN data with NxN kernels convolved.
This is interesting.
be worth writing a fftconvolve2

What remains to be checked is the ratio for the case where the kernel is a lot smaller than the data. If that turns out to be equally fast, I don't see any reason to keep the current implementation of scipy.signal.fftconvolve.
Anyway, this may also be related to that other discussion going on about FFTW. I'm not sure what the current status about FFT implementations in SciPy is, but at first glance there seem to be quite a few really, which to me seems redundant and unhelpful.
On this point, I can echo. A quick look turns up:

- scipy.fftpack.convolve
- scipy.signal.convolve
- scipy.signal.fftconvolve
- scipy.stsci.convolve
- numpy.convolve

I've used scipy.signal.fftconvolve, as that's where other signal processing tools useful to me have been found.

IMHO, ideally, there would be one 'fast' convolve that does the right thing. I can understand that different convolves in different namespaces exist for historical and practical reasons, so maybe there's an argument to keep all these numerous convolves. (Some appear to be direct rather than fft convolves--reason enough to keep those.) Maybe a thing to do would be a page someplace clarifying which are the fastest? Or... 'rewrite' so that all use a fast convolution routine, but preserve the separate interfaces (if they exist) as necessary?

My regards,
Dr Joseph Anderson
IMHO, ideally, there would be one 'fast' convolve that does the right thing.
I fully and wholeheartedly agree. Eventually the user shouldn't have to worry much about which implementation is actually the fastest one. As far as I understand, there are two general approaches: with and without FFT, where the FFT-less implementation seems to excel when data and kernel differ very much in size. IMHO, it should be possible for the user to make this choice, overriding the sane defaults. Unfortunately, there is no such default right now.
Or... 'rewrite' so that all use a fast convolution routine, but preserve the separate interfaces (if they exist) as necessary?

Indeed. If there are five separate implementations of convolution (or FFT, for that matter) within SciPy alone, this seems like a big waste of energy.
Would that be something for GSOC maybe? --Nico
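The "one fast convolve with user-overridable defaults" being discussed could look roughly like the sketch below. `convolve2d_auto`, its `method` parameter, and the size threshold are all hypothetical; a real crossover point would have to be measured, not guessed.

```python
import numpy as np

def convolve2d_auto(data, kernel, method="auto"):
    # 'full' 2-D linear convolution with an overridable method choice.
    if method == "auto":
        # Illustrative heuristic: direct convolution tends to win only
        # when the kernel is tiny compared to the data.
        method = "fft" if kernel.size > 64 else "direct"
    full = tuple(np.asarray(data.shape) + np.asarray(kernel.shape) - 1)
    if method == "fft":
        # Zero-padded FFT product = linear convolution.
        return np.fft.irfft2(np.fft.rfft2(data, full) * np.fft.rfft2(kernel, full), full)
    out = np.zeros(full)
    for (i, j), v in np.ndenumerate(data):  # naive direct convolution
        out[i:i + kernel.shape[0], j:j + kernel.shape[1]] += v * kernel
    return out
```

Both branches produce the same values, so the dispatch is purely a performance decision, which is what makes a sane default (plus an explicit override) feasible.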
participants (6)

- Chris Barker
- David Cournapeau
- Jochen Schroeder
- josef.pktd@gmail.com
- Joseph Anderson
- Nico Schlömer