
Hi Alexander,

On 2024-03-14 22:43:38, Alexander Levin via NumPy-Discussion <numpy-discussion@python.org> wrote:
> Memory Usage - https://github.com/2D-FFT-Project/2d-fft/blob/testnotebook/notebooks/memory_...
> Timing comparisons(updated) - https://github.com/2D-FFT-Project/2d-fft/blob/testnotebook/notebooks/compari...

I see these timings are still done only for power-of-two shaped arrays. This is the easiest case to optimize, and I wonder if you've given further thought to supporting other sizes? PocketFFT, e.g., implements the Bluestein / Chirp-Z algorithm to deal with cases where the sizes have large prime factors.

Your test matrix also only contains real values. In that case, you can use rfft, which might resolve the memory usage difference? I'd be surprised if PocketFFT uses that much more memory for the same calculation.

I saw that in the notebook code you have:

    matr = np.zeros((n, m), dtype=np.complex64)
    matr = np.random.rand(n, m)

Was the intent here to generate a complex random matrix?

Stéfan
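
P.S. To make the non-power-of-two point a bit more concrete, here is a rough sketch (array sizes picked arbitrarily, with np.fft standing in for PocketFFT, which NumPy calls under the hood) that compares a power-of-two size against a prime size:

    import numpy as np
    from timeit import timeit

    rng = np.random.default_rng(0)

    # 1024 is a power of two; 1021 is prime, so PocketFFT has to fall back
    # to Bluestein / Chirp-Z instead of a plain radix-2 decomposition.
    for n in (1024, 1021):
        x = rng.random((n, n))
        t = timeit(lambda: np.fft.fft2(x), number=10) / 10
        print(n, f"{t * 1e3:.1f} ms")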
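Similarly, a minimal illustration of the rfft point: for real input, the half-spectrum returned by rfft2 is roughly half the size of the full fft2 output, which is where I'd expect the memory difference to come from (sizes again arbitrary):

    import numpy as np

    n, m = 1024, 1024
    rng = np.random.default_rng(0)
    x = rng.random((n, m))            # real-valued input, float64

    full = np.fft.fft2(x)             # shape (n, m), complex128
    half = np.fft.rfft2(x)            # shape (n, m//2 + 1), complex128

    print(full.nbytes / 1e6, "MB")    # ~16.8 MB
    print(half.nbytes / 1e6, "MB")    # ~8.4 MB, roughly half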
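And if a complex random matrix was indeed the intent, something along these lines would do it -- the second assignment in the notebook rebinds matr to a new float64 array, so the complex64 zeros are simply discarded (this is just a guess at what you meant, of course):

    import numpy as np

    n, m = 1024, 1024                  # placeholder sizes
    rng = np.random.default_rng(0)

    # Draw real and imaginary parts independently, then cast to complex64
    matr = (rng.random((n, m)) + 1j * rng.random((n, m))).astype(np.complex64)
    print(matr.dtype)                  # complex64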