[Numpy-discussion] Changing FFT cache to a bounded LRU cache

Lion Krischer lion.krischer at gmail.com
Mon May 30 05:26:16 EDT 2016



On 30/05/16 10:07, Joseph Martinot-Lagarde wrote:
> Marten van Kerkwijk <m.h.vankerkwijk at gmail.com> writes:
>
>> I did a few simple timing tests (see comment in PR), which suggest it is
>> hardly worth having the cache. Indeed, if one really worries about speed,
>> one should probably use pyFFTW (scipy.fft is a bit faster too, but at least
>> for me the way real FFT values are stored is just too inconvenient). So, my
>> suggestion would be to do away with the cache altogether.


I added a slightly more comprehensive benchmark to the PR. Please have a
look. It times 100 FFTs of the same size with and without the cache. With
the cache it is over 30 percent faster, which in my opinion makes the
cache well worth keeping, as repeated FFTs of the same size are a very
common use case.
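
For anyone who wants to get a feel for this locally, here is a rough
sketch of that access pattern (this is not the benchmark from the PR;
comparing with and without the cache requires patching numpy.fft, so
this only times the cached path):

    import timeit
    import numpy as np

    # Repeated FFTs of one fixed size: the case the cache is meant to
    # speed up.  The first call pays the setup cost for the twiddle
    # factors; with a cache, the remaining 99 calls reuse them.
    x = np.random.random(1024)
    t = timeit.timeit(lambda: np.fft.fft(x), number=100)
    print("100 FFTs of length 1024: %.4f s" % t)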

Also, many people will not have the know-how to use FFTW or some other
FFT implementation.
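
To make the subject line concrete, here is a minimal sketch of the kind
of bounded LRU cache being proposed, keyed on transform length (the
class and parameter names are illustrative only, not the actual PR
code):

    from collections import OrderedDict

    class BoundedLRUCache:
        """Sketch of a size-bounded LRU cache keyed on FFT length.

        Once `maxitems` distinct sizes have been seen, the least
        recently used entry is evicted, so memory stays bounded even
        for workloads that touch many different FFT sizes.
        """

        def __init__(self, maxitems=16):
            self.maxitems = maxitems
            self._data = OrderedDict()

        def put(self, key, value):
            self._data.pop(key, None)
            self._data[key] = value             # (re)insert as most recent
            if len(self._data) > self.maxitems:
                self._data.popitem(last=False)  # evict least recently used

        def get(self, key):
            value = self._data.pop(key)         # raises KeyError on a miss
            self._data[key] = value             # mark as most recently used
            return value

The FFT code would then look up the precomputed twiddle factors for a
given length with get() and fall back to recomputing them (and calling
put()) on a miss; the bound on the number of entries is what keeps the
cache from growing without limit.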


