All, I have been looking around for chirp-z code with a usable license. There is the original Fortran version by Rader et al. out there, as well as a package from FreeBSD at http://ftp2.at.freebsd.org/pub/FreeBSD/distfiles/fxt-2006.12.17.tgz. The latter is C++ and relies on function signatures (overloading) to distinguish various functions with the same name, but looks pretty easy to translate. I note that we could probably use a fast_correlate also. Is there any interest in adding these to the fft pack? Or do they properly belong in scipy? Chuck
Hi,
Isn't the chirp transform just two cross-correlations? And for a fast one, there is a module in SciPy, and I think that kind of operation belongs more in SciPy than NumPy ;)
Matthieu
_______________________________________________ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Sorry, not cross-correlation but convolution; too early in the morning here...
Umm, no, there really aren't any transparent fast FFT convolutions in SciPy. The closest thing is fftconvolve in signaltools, and if you ask it to convolve, say, sequences whose lengths add up to 7902, then it will do a size-7901 transform. Because 7901 is prime, this takes about 300 times as long as a transform of size 8192. That glitch could be fixed, but I think something as basic as fftconvolve should reside at a higher level than scipy.signal anyway, say in numpy.fft. There are other scipy functions that do convolution, but those that use an FFT are limited. There is a (buggy) version for 2d arrays in stsci, and a version with limited (real-only) functionality in fftpack. I don't see any more. Chuck
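[The cost gap is easy to measure with a quick timing sketch. Numbers are machine-dependent, and note that current numpy releases handle prime sizes with Bluestein's algorithm, so the gap today is far smaller than the factor of ~300 seen with the old fftpack-based code.]

```python
import timeit
import numpy as np

x = np.random.rand(7901)   # the awkward prime transform size from the example

# time a transform at the prime size versus the same data padded to 8192
t_prime = timeit.timeit(lambda: np.fft.fft(x), number=20)
t_pow2 = timeit.timeit(lambda: np.fft.fft(x, 8192), number=20)

print(f"n=7901: {t_prime:.4f}s   n=8192 (zero-padded): {t_pow2:.4f}s")
```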
Why not scipy.fft then? It is currently being reviewed.
Matthieu
Charles R Harris wrote:
That glitch could be fixed, but I think something as basic as fftconvolve should reside at a higher level than scipy.signal anyway, say in numpy.fft.
numpy is for low-level things. There is only one FFT implementation in numpy, and I don't think it works (i.e. is O(N log N)) for prime sizes. The problem with the FFT is that good, general FFT algorithms (ones that work for any size in O(N log N)) are not that common, and not easily distributable. scipy.fft does have code to do that, though, if you have one of the required libraries (MKL and FFTW both provide O(N log N) behaviour for any size). Now, for convolution, you could use zero padding, but that kind of thing may be too high level for numpy (which should be kept to a minimum: the goal of numpy really is to provide ndarray and the basics).
There are other scipy functions that do convolution, but those that use an fft are limited. There is a (buggy) version for 2d arrays in stsci, and a version with limited (real) functionality in fftpack. I don't see any more.
Could you open a ticket on the scipy trac with an example that shows the bug, and explain what you want? I may take a look at it (I have some code to do cross-correlation in numpy somewhere, and could take the time to improve its quality for inclusion in scipy). David
There really aren't any transparent fast FFT convolutions in SciPy. The closest thing is fftconvolve in signaltools, and if you ask it to convolve, say, sequences whose lengths add up to 7902, then it will do a size-7901 transform.
BTW, is this really a glitch? I think there are two schools here:
- those who think the software must do something for you
- those who think it is the programmer's responsibility.
If I give fftconvolve sequences whose lengths add up this way, I want it to use a 7901-point transform, not a 2^n transform, so you will have understood that I'm in the second group. If people want to use a convolution with the FFT, they _should_ know a little about this algorithm. I'm a moderator on a French programming forum, and in the algorithms area there are people who don't even know the FFT algorithm yet want to do complicated things with it; it is bound to fail. I suppose that indicating this "limitation" in the docstring is enough, so that people can decide whether it is sufficient or not.
Matthieu
Matthieu Brucher wrote:
If people want to use a convolution with the FFT, they _should_ know a little about this algorithm.
I generally agree with you, especially for a package aimed at scientists (we are supposed to think, after all), but in this precise case I think there is a valid case for an exception, especially since it has almost no cost. I may be missing something, but in theory at least, using zero padding and an FFT of size 2^N (8192 here) will give you exactly the same result as doing an FFT of size 7901, if done right. There may be some precision problems, but I don't think they are significant for common cases. For example, I know for sure that matlab implements fast convolution/correlation this way (e.g. in the lpc code).
I suppose that indicating this "limitation" in the docstring is enough, so that people can decide whether it is sufficient or not.
If someone does use the FFT but does not know that it is better to use 2^n sizes, will he take a look at the docstrings :) ?
David
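[The padding claim above is easy to check numerically; the sizes here are illustrative, any lengths work the same way.]

```python
import numpy as np

a = np.random.rand(40)
b = np.random.rand(30)
n = len(a) + len(b) - 1   # 69: exact length of the full linear convolution
m = 128                   # next power of two, the zero-padded variant

# circular convolution at the exact size equals the linear convolution;
# the power-of-two version just needs the padding trimmed off at the end
exact = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real
padded = np.fft.ifft(np.fft.fft(a, m) * np.fft.fft(b, m)).real[:n]

print(np.allclose(exact, padded))              # -> True
print(np.allclose(exact, np.convolve(a, b)))   # -> True
```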
Well, if he's French, he won't look ;) Matthieu
On Sat, May 26, 2007 at 10:37:41PM -0600, Charles R Harris wrote:
The thread here: http://www.mail-archive.com/numpy-discussion@scipy.org/msg01812.html contains an implementation. Since it uses the FFT to do its underlying calculations, it's pretty fast. Cheers Stéfan
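[For the archive, a minimal sketch of the idea: Bluestein's chirp-z algorithm rewrites the transform as a chirp premultiply, an FFT convolution, and a chirp postmultiply. This is not the code from the linked thread or the fxt package; names and defaults here are my own, and the defaults reduce it to the ordinary DFT.]

```python
import numpy as np

def chirp_z(x, m=None, w=None, a=1.0):
    """Chirp-z transform X_k = sum_n x[n] * a**(-n) * w**(n*k), k = 0..m-1,
    via Bluestein's algorithm (nk = (n**2 + k**2 - (k-n)**2) / 2)."""
    n = len(x)
    m = n if m is None else m
    w = np.exp(-2j * np.pi / m) if w is None else w
    k = np.arange(max(m, n))
    chirp = w ** (k * k / 2.0)               # w^(k^2/2)

    # circular-convolution length: next power of two >= m + n - 1
    L = 1
    while L < m + n - 1:
        L *= 2

    y = x * a ** -np.arange(n) * chirp[:n]   # chirp-premultiplied input
    b = np.zeros(L, dtype=complex)           # kernel w^(-k^2/2), wrapped
    b[:m] = 1.0 / chirp[:m]
    b[L - n + 1:] = 1.0 / chirp[n - 1:0:-1]  # negative indices at the end

    conv = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(b))
    return conv[:m] * chirp[:m]              # chirp postmultiply

x = np.random.rand(16) + 1j * np.random.rand(16)
print(np.allclose(chirp_z(x), np.fft.fft(x)))   # -> True
```

Since every FFT here is at a power-of-two size, this also gives an O(N log N) transform for awkward (e.g. prime) lengths, which is exactly the connection to the fast-convolution discussion above.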
On Mon, May 28, 2007 at 03:15:33PM +0900, David Cournapeau wrote:
I may be missing something, but in theory at least, using zero padding and an FFT of size 2^N (8192 here) will give you exactly the same result as doing an FFT of size 7901, if done right. There may be some precision problems, but I don't think they are significant for common cases. For example, I know for sure that matlab implements fast convolution/correlation this way (e.g. in the lpc code).
There is also the problem that you suddenly double your memory usage. Stéfan
Stefan van der Walt wrote:
Is it a problem for 1d signals? I cannot think of an application of the FFT I know where the transforms are long enough for this to matter. Also, if you do several 1d FFTs of the same size, in most cases you can pack them together and do a "vectorized" FFT, which will not take double the memory. Now, there are many applications of the FFT I don't know about, and this may be a problem for multi-dimensional problems. But then a direct implementation would be really slow (I am of course talking about cases where you need the full convolution/correlation; if you need only a part of it, then it is a totally different matter, and I don't think there is a good way to know automatically what's best; I had to code my own lpc routine in matlab for this exact reason).
I think the problem with correlation and convolution right now is that there are several implementations in different places, which is a bit confusing at first (at least it was for me when I started using scipy). If I grep for convolution in the numpy and scipy sources, I get:
- numpy/core/numeric: implemented using correlation, which itself uses a direct implementation (in C, with dot).
- scipy.stsci and scipy.ndimage: I have never used those packages.
- scipy.fftpack: uses the FFT, obviously.
- scipy.signal: proposes both with and without the FFT.
Maybe there would be a way to avoid so many packages providing more or less the same functionality? Providing a fast correlation function without too much memory overhead was one of the main reasons for ticket 400, which I opened on the numpy trac (provide a hook to an FFT implementation at the C level).
cheers,
David
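[Two of the implementations listed above can be cross-checked against each other. Module paths here are as in current scipy releases; the 2007 layout differed, and scipy availability is assumed.]

```python
import numpy as np
from scipy import signal

a = np.random.rand(100)
b = np.random.rand(30)

direct = np.convolve(a, b)        # numpy: direct C implementation
fast = signal.fftconvolve(a, b)   # scipy.signal: FFT-based

# both compute the full linear convolution, length 100 + 30 - 1 = 129
print(np.allclose(direct, fast))  # -> True
```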
participants (4)
- Charles R Harris
- David Cournapeau
- Matthieu Brucher
- Stefan van der Walt