[Numpy-discussion] chirp-z

David Cournapeau david at ar.media.kyoto-u.ac.jp
Mon May 28 02:15:33 EDT 2007


Matthieu Brucher wrote:
>
>     There really aren't any transparent fast fft convolutions in
>     SciPy. The closest thing is in signaltools, fftconvolve, and if
>     you ask it to convolve, say, sequences whose length add up to
>     7902, then it will do a size 7901 transform.
>
>
> BTW, is this really a glitch ? I think there are two schools of thought here:
> - those who think the software must do something
> - those who think it is the programmer's responsibility.
>
> If I give fftconvolve sequences that add up this way, I want it 
> to use a size 7901 transform, not a 2^n transform. So, as you can see, 
> I'm in the second group.
I may be missing something, but in theory at least, using zero padding and 
an FFT of size 2^N (8192 here) will give you exactly the same result as 
an FFT of size 7901, if done right. Some precision problems may then 
appear, but I don't think they are significant in common cases. For 
example, I know for sure that Matlab implements fast 
convolution/correlation this way (e.g. in its LPC code).
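The equivalence described above can be checked numerically: zero-padding both inputs to any FFT length of at least N + M - 1 yields the same linear convolution as a transform of exactly N + M - 1 points. A minimal sketch with NumPy (the sequence lengths here are illustrative, chosen so they add up to 7902 as in the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
y = rng.standard_normal(3902)  # lengths add up to 7902

# Exact linear-convolution length: N + M - 1 = 7901
n = len(x) + len(y) - 1

# FFT convolution at the exact size 7901
exact = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(y, n), n)

# FFT convolution zero-padded to the next power of two (8192 here),
# then truncated back to the linear-convolution length
n2 = 1 << (n - 1).bit_length()
padded = np.fft.irfft(np.fft.rfft(x, n2) * np.fft.rfft(y, n2), n2)[:n]

# Both agree to floating-point precision
assert n2 == 8192
assert np.allclose(exact, padded)
```

Any padded length >= N + M - 1 works; shorter transforms wrap around (circular convolution) and give a different result.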

> If people want to use a convolution with fft, they _should_ know a 
> little about this algorithm.
I generally agree with you, especially for a package aimed at scientists 
(we are supposed to think, after all), but in this precise case, I think 
there is a valid case for an exception, especially since it has almost no cost.
> I'm a moderator on a French programming forum, and in the algorithms 
> area, there are people who don't even know the FFT algorithm but 
> want to do complicated things with it; it is bound to fail. I suppose 
> that indicating this "limitation" in the docstring is enough, so that 
> people can decide whether it is sufficient or not.
If someone uses the FFT but does not know it is better to use a size of 
2^n, will they take a look at the docstrings :) ?

David



More information about the NumPy-Discussion mailing list