Hi,

I cannot see how the following would help when it is np.fft.fft() itself that takes a long time, depending on the length of the data. In my case the data is non-periodic.
    from numpy.fft import fft
    from numpy.random import rand
    from math import log, ceil

    seq_A = rand(2649674)
    seq_B = rand(2646070)

    fft_A = fft(seq_A)  # long-running
    fft_B = fft(seq_B)

    # pad to twice the next power of two above the sequence length
    # (int() so that n is an integer even where ceil() returns a float)
    zeropadded_fft_A = fft(seq_A, n=2**(int(ceil(log(len(seq_A), 2))) + 1))
    zeropadded_fft_B = fft(seq_B, n=2**(int(ceil(log(len(seq_B), 2))) + 1))
Ideally I need to calculate a favourable length to pad to before calling np.fft.fft(), as Stéfan pointed out, by examining the factors:

    In [6]: from sympy import factorint

    In [7]: max(factorint(2646070))
    Out[7]: 367

    In [8]: max(factorint(2649674))
    Out[8]: 1324837

So the second length has a huge prime factor (2649674 = 2 × 1324837), which is why its FFT is so slow.
I will try adding some code to calculate the next power of two above my array length, as you suggest:
> And while you zero-pad, you can zero-pad to a sequence that is a power of two, thus preventing awkward factorizations.
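For reference, a minimal sketch of that next-power-of-two calculation, using integer bit tricks rather than floating-point log (the function name is mine):

```python
def next_pow2(n):
    """Smallest power of two >= n (for n >= 1)."""
    return 1 << (n - 1).bit_length()

# for the sequence lengths above:
next_pow2(2649674)  # 4194304 == 2**22
```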
Does numpy have an easy way to do this, i.e. for a given number, find the next highest number (within a range) that factors into small primes, as Phil explained? It would help if it returned a list, prioritised by number of factors.
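For what it's worth, a brute-force sketch of such a search, assuming "small" means the primes 2, 3 and 5 (5-smooth, or "regular", numbers, which FFT implementations handle efficiently); the helper names here are my own, not numpy API:

```python
def is_5smooth(n):
    """True if n has no prime factor larger than 5."""
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

def next_fast_len(n):
    """Smallest 5-smooth integer >= n."""
    while not is_5smooth(n):
        n += 1
    return n
```

Gaps between 5-smooth numbers stay small even in the millions, so the linear scan is cheap compared with the FFT itself.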
Thanks,
Joseph

Date: Fri, 28 Aug 2015 17:26:32 -0700
From: stefanv@berkeley.edu
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] Numpy FFT.FFT slow with certain samples

On Aug 28, 2015 5:17 PM, "Pierre-Andre Noel" <noel.pierre.andre@gmail.com> wrote:
>
> I had in mind the use of FFT to do convolutions (
> https://en.wikipedia.org/wiki/Convolution_theorem ). If you do not
> zero-pad properly, then the end of the signal may "bleed" on the
> beginning, and vice versa.

Ah, gotcha! All these things should also be handled nicely in scipy.signal.fftconvolve.
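For instance (a small sketch; the array sizes here are arbitrary), fftconvolve zero-pads internally to a convenient FFT length and trims the result, so the caller never has to think about factorizations:

```python
import numpy as np
from scipy.signal import fftconvolve

x = np.random.rand(10000)
y = np.random.rand(512)

# padding and trimming are handled internally;
# the result matches direct convolution up to floating-point tolerance
z = fftconvolve(x, y, mode='full')  # length len(x) + len(y) - 1
```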

Stéfan

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion