[Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

Nathaniel Smith njs at pobox.com
Tue Oct 28 00:28:37 EDT 2014

On 28 Oct 2014 04:07, "Matthew Brett" <matthew.brett at gmail.com> wrote:
> Hi,
> On Mon, Oct 27, 2014 at 8:07 PM, Sturla Molden <sturla.molden at gmail.com> wrote:
> > Sturla Molden <sturla.molden at gmail.com> wrote:
> >
> >> If we really need a
> >> kick-ass fast FFT we need to go to libraries like FFTW, Intel MKL or
> >> Apple's Accelerate Framework,
> >
> > I should perhaps also mention FFTS here, which claims to be faster than
> > FFTW and has a BSD licence:
> >
> > http://anthonix.com/ffts/index.html
> Nice.  And a funny New Zealand name too.
> Is this an option for us?  Aren't we a little behind the performance
> curve on FFT after we lost FFTW?

It's definitely attractive. Some potential issues that might need dealing
with, based on a quick skim:

- seems to have a hard requirement for a processor supporting SSE, AVX, or
NEON. No fallback for old CPUs or other architectures. (I'm not even sure
whether it has x86-32 support.)

- no runtime CPU detection, e.g. the choice of SSE vs. AVX appears to be made
at compile time

- not sure whether it can handle non-power-of-two problems at all, or if it
can, whether it handles them efficiently. (FFTPACK isn't great here either,
but major regressions would be bad.)

- not sure if it supports all the modes we care about (e.g. rfft)

This stuff is all probably solvable, though, so if someone has a hankering
to make numpy's (or scipy's) fft dramatically faster then you should get in
touch with the author and see what they think.
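As a starting point, the concerns above (non-power-of-two sizes, rfft
support) could be checked mechanically by comparing a candidate backend
against numpy.fft. This is only a sketch: `candidate_fft` below is a
hypothetical stand-in (a naive O(n^2) DFT), where a real evaluation would
swap in an FFTS wrapper instead.

```python
import numpy as np

def candidate_fft(x):
    # Placeholder backend: a naive O(n^2) DFT. In a real evaluation this
    # would be the FFTS (or other candidate library) wrapper.
    n = len(x)
    k = np.arange(n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

def check_backend(fft_func, sizes=(8, 12, 100, 128)):
    # Compare against numpy.fft on both power-of-two and
    # non-power-of-two sizes.
    rng = np.random.default_rng(0)
    for n in sizes:
        x = rng.standard_normal(n)
        assert np.allclose(fft_func(x), np.fft.fft(x))
        # For real input, the first n//2 + 1 terms of the full transform
        # should match numpy's rfft output.
        assert np.allclose(fft_func(x)[:n // 2 + 1], np.fft.rfft(x))
    return True

check_backend(candidate_fft)
```

Timing the same size sweep would similarly flag any performance cliff on
non-power-of-two inputs.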

