[Numpy-discussion] Changing FFT cache to a bounded LRU cache

Travis Oliphant travis at continuum.io
Thu Jun 2 00:52:08 EDT 2016


Hi all,

At Continuum we are trying to coordinate with Intel about releasing our
patches from Accelerate upstream as well, rather than having them redo
things we have already done but have not yet been able to open source.


Accelerate also uses GPU-accelerated FFTs, and it would be nice if there
were a supported NumPy way of plugging in these optimized approaches.
This is not a trivial thing to do, though, and there are a lot of design
choices.
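
To make the design question concrete, here is a minimal sketch of one way a
pluggable FFT backend could be registered. This is purely illustrative:
NumPy provides no such hook today, and names like set_fft_backend and
_active_backend are made up for this example.

    import numpy as np

    # Default backend: NumPy's own FFT implementation.
    _active_backend = np.fft

    def set_fft_backend(backend):
        """Swap in any object exposing an fft(a, n=None, axis=-1) callable,
        e.g. a wrapper around MKL, FFTW, or a GPU FFT library."""
        global _active_backend
        if not callable(getattr(backend, "fft", None)):
            raise TypeError("backend must provide an fft(a, n=None, axis=-1) callable")
        _active_backend = backend

    def fft(a, n=None, axis=-1):
        # Dispatch to whichever backend is currently registered.
        return _active_backend.fft(np.asarray(a), n=n, axis=axis)

    # By default this simply defers to numpy.fft.
    x = np.random.rand(8)
    assert np.allclose(fft(x), np.fft.fft(x))

Even in a toy like this, the hard questions show up immediately: what the
backend contract covers (real transforms, multidimensional transforms, norm
conventions, data layout) and how errors and fallbacks are handled.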

We have been giving away Accelerate to academics since it was released, but
have asked companies to pay for it as a means of generating money to
support open source.  Several things that used to be in Accelerate only
are now already open source (e.g. cuda.jit, guvectorize, target='cuda'
and target='parallel' in numba.vectorize).  I expect this trend will
continue.  The FFT enhancements are another item on the list of things to
make open source.

I, for one, welcome Intel's contributions and am enthusiastic about their
joining the Python development community.  In many cases it would be
better if they would pay a company that has already built and tested
this capability to release it than develop things themselves yet again.
Any encouragement that can be provided to Intel to move in this
direction would help.

Many companies are now supporting open source.  Even those that sell some
software are still contributing overall to ensure that the total amount of
useful open-source software available is increasing.

Best,

-Travis





On Wed, Jun 1, 2016 at 7:42 PM, Nathaniel Smith <njs at pobox.com> wrote:

> On Jun 1, 2016 4:47 PM, "David Cournapeau" <cournape at gmail.com> wrote:
> >
> >
> >
> > On Tue, May 31, 2016 at 10:36 PM, Sturla Molden <sturla.molden at gmail.com>
> wrote:
> >>
> >> Joseph Martinot-Lagarde <contrebasse at gmail.com> wrote:
> >>
> >> > The problem with FFTW is that its license is more restrictive (GPL),
> and
> >> > because of this may not be suitable everywhere numpy.fft is.
> >>
> >> A lot of us use NumPy linked with MKL or Accelerate, both of which have
> >> some really nifty FFTs. And the license issue is hardly any worse than
> >> linking with them for BLAS and LAPACK, which we do anyway. We could
> extend
> >> numpy.fft to use MKL or Accelerate when they are available.
> >
> >
> > That's what we used to do in scipy, but it was a PITA to maintain.
> Contrary to blas/lapack, fft does not have a standard API, hence exposing a
> consistent API in python, including data layout involved quite a bit of
> work.
> >
> > It is better to expose those through 3rd party APIs.
>
> Fwiw Intel's new python distribution thing has numpy patched to use mkl
> for fft, and they're interested in pushing the relevant changes upstream.
>
> I have no idea how maintainable their patches are, since I haven't seen
> them -- this is just from talking to people here at PyCon.
>
> -n
>


-- 

*Travis Oliphant, PhD*
*Co-founder and CEO*


@teoliphant
512-222-5440
http://www.continuum.io