[Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

Nathaniel Smith njs at pobox.com
Tue Oct 28 11:20:04 EDT 2014

On 28 Oct 2014 14:48, "Eelco Hoogendoorn" <hoogendoorn.eelco at gmail.com>
wrote:
> If I may 'hijack' the discussion back to the meta-point:
> should we be having this discussion on the numpy mailing list at all?

Of course we should.

> Perhaps the 'batteries included' philosophy made sense in the early days
> of numpy; but given that there are several fft libraries with their own
> pros and cons, and that most numpy projects will use none of them at all,
> why should numpy bundle any of them?

Certainly there's a place for fancy 3rd-party fft libraries. But fft is
such a basic algorithm that it'd be silly to ask people who just need a
quick one-off fft to go evaluate a bunch of third-party libraries. For many
users, downloading one of these libraries will take longer than just doing
their Fourier transform with an O(N**2) algorithm :-). And besides that
there's tons of existing code that uses np.fft. So np.fft will continue to
exist, and given that it exists we should make it as good as we can.
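(The O(N**2) algorithm alluded to above is just the DFT evaluated directly
from its definition. A sketch, purely for illustration -- the function name
is made up, and np.fft.fft computes the same result in O(N log N):)

```python
import numpy as np

def naive_dft(x):
    """Discrete Fourier transform computed directly from the definition.

    O(N**2): builds the full N x N DFT matrix and multiplies. Illustrative
    only -- np.fft.fft returns the same values far faster.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    # DFT matrix: W[j, k] = exp(-2j * pi * j * k / n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.random.default_rng(0).standard_normal(64)
print(np.allclose(naive_dft(x), np.fft.fft(x)))
```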

> To have a scipy.linalg and scipy.fft makes sense to me, although import
> pyfftw or import pyFFTPACK would arguably be better still. Just as in the
> case of linear algebra, those different libraries represent meaningful
> differences, and if the user wants to paper over those differences with a
> named import they are always free to do so themselves, explicitly. To be
> sure, the maintenance of quality fft libraries should be part of the
> numpy/scipy-stack in some way or another. But I would argue that the core
> thing that numpy should do is ndarrays alone.

According to some sort of abstract project planning aesthetics, perhaps.
But I don't see how fractionating numpy into lots of projects would provide
any benefit for users. (If we split numpy into 10 subprojects then probably
7 of them would never release, because we barely have the engineering to do
release management now.)

CS courses often teach that more modular = more better. That's because
they're desperate to stop newbies from creating balls of mush, though, not
because it's the whole truth :-). It's always true that an organized
codebase is better than a ball of mush, but abstraction barriers,
decoupling, etc. have real and important costs, and this needs to be taken
into account. (See e.g. the Torvalds/Tanenbaum debate.)

And in any case, this ship sailed a long time ago.
