
Hey,

The performance of fftpack depends very strongly on the array size -- sizes that are powers of two are fast, as are powers of three, five, and seven, and more generally any size whose only prime factors come from {2, 3, 5, 7}. For problems that can tolerate padding, rounding the size up to one of these composite sizes (using np.fft.fft(x, n=size_with_padding)) makes a big difference. Some other packages expose a function for calculating the next fast size, e.g.: http://ltfat.sourceforge.net/notes/ltfatnote017.pdf.

Is there anything like this in numpy/scipy? If not, would this be a reasonable feature to add?

-Robert
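For context, a minimal sketch of what such a helper could look like, assuming only numpy; the name next_fast_size is hypothetical and not an existing numpy/scipy function:

import numpy as np

def next_fast_size(n):
    # Hypothetical helper: smallest integer >= n whose prime factors
    # are all in {2, 3, 5, 7}, i.e. a size fftpack handles quickly.
    while True:
        m = n
        for p in (2, 3, 5, 7):
            while m % p == 0:
                m //= p
        if m == 1:
            return n
        n += 1

x = np.random.rand(4099)
X = np.fft.fft(x, n=next_fast_size(len(x)))  # pads 4099 up to 4116 = 2**2 * 3 * 7**3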

AFAIK the SciPy scipy.fftpack.fft function automatically does this. The function that calculates the next closest "composite number" isn't exposed, but any size you input to fftpack will automatically be resized to the next largest composite number for fast FFTs.
Thanks, Krishna

On 24/12/14 14:25, Sri Krishna wrote:
AFAIK the SciPy scipy.fftpack.fft function automatically does this. The function that calculates the next closest "composite number" isn't exposed, but any size you input to fftpack will automatically be resized to the next largest composite number for fast FFTs.
No, it factorizes the FFT into efficient chunks (of size 2, 3, 4, or 5), and if the FFT size contains larger prime factors it falls back to an O(N**2) DFT for those chunks. This allows fftpack to compute FFTs of any size, but it will not always be efficient. E.g. compare an FFT of size 4099*100 (4099 is prime) with an FFT of size 4096*100 (4096 = 2**12):

In [10]: a = np.zeros(4099*100)

In [11]: %timeit scipy.fftpack.fft(a)
1 loops, best of 3: 615 ms per loop

In [12]: a = np.zeros(4096*100)

In [13]: %timeit scipy.fftpack.fft(a)
100 loops, best of 3: 8.64 ms per loop

Sturla
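A minimal sketch of the zero-padding workaround Robert describes, applied to the slow case above; 411600 = 2**4 * 3 * 5**2 * 7**3 is just one nearby size with only small prime factors, chosen by hand here:

import numpy as np
import scipy.fftpack

a = np.zeros(4099 * 100)                  # 409900 contains the large prime factor 4099
A_slow = scipy.fftpack.fft(a)             # hits the slow O(N**2) path for the 4099 chunks
A_fast = scipy.fftpack.fft(a, n=411600)   # zero-pads to a size with only small factors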