Hi,
I did a few simple timing tests (see comment in PR), which suggest it is hardly worth having the cache. Indeed, if one really worries about speed, one should probably use pyFFTW (scipy.fft is a bit faster too, but at least for me the way real FFT values are stored is just too inconvenient). So, my suggestion would be to do away with the cache altogether.
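For context, the comparison I have in mind is roughly along these lines (not the exact script from the PR comment; the pyFFTW call assumes `pyfftw.interfaces.numpy_fft` is available):

```python
# Rough sketch of a timing comparison between FFT backends; array size and
# repeat count are arbitrary, just to get per-call numbers of the same order.
import timeit

import numpy as np
import scipy.fft
import pyfftw.interfaces.numpy_fft as pyfftw_fft

a = np.random.random(2**16)

for name, func in [("numpy.fft.rfft", np.fft.rfft),
                   ("scipy.fft.rfft", scipy.fft.rfft),
                   ("pyfftw rfft", pyfftw_fft.rfft)]:
    t = timeit.timeit(lambda: func(a), number=1000)
    print(f"{name}: {t / 1000 * 1e3:.3f} ms per call")
```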
If we do keep it, I think the approach in the PR is nice, but I would advocate setting both a total-size limit and a limit on the number of entries (e.g., by default no more than 8 entries or so, which should cover most repetitive use cases); a rough sketch of what I mean is below.
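Something along these lines (names like `max_entries` and `max_bytes` are just for illustration, not what the PR uses):

```python
# Sketch of an LRU-style cache bounded by both entry count and total array size.
from collections import OrderedDict

import numpy as np


class ArrayCache:
    def __init__(self, max_entries=8, max_bytes=64 * 1024**2):
        self.max_entries = max_entries
        self.max_bytes = max_bytes
        self._data = OrderedDict()

    def get(self, key):
        # Move a hit to the end so it counts as most recently used.
        value = self._data.pop(key, None)
        if value is not None:
            self._data[key] = value
        return value

    def put(self, key, array):
        self._data.pop(key, None)
        self._data[key] = array
        # Evict least-recently-used entries until both limits are satisfied.
        while (len(self._data) > self.max_entries
               or sum(v.nbytes for v in self._data.values()) > self.max_bytes):
            self._data.popitem(last=False)
```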
All the best,