Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

On Jun 1, 2016 4:47 PM, "David Cournapeau" <cournape@gmail.com> wrote:
On Tue, May 31, 2016 at 10:36 PM, Sturla Molden <sturla.molden@gmail.com> wrote:
Fwiw Intel's new python distribution thing has numpy patched to use mkl for fft, and they're interested in pushing the relevant changes upstream. I have no idea how maintainable their patches are, since I haven't seen them -- this is just from talking to people here at pycon. -n
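For context on the thread subject, a bounded LRU cache for numpy.fft's per-transform-length cache of precomputed work arrays might look roughly like the sketch below. This is a minimal illustration of the idea only, not the actual NumPy patch; the names (BoundedLRUCache, cached_twiddles) are hypothetical.

    # Minimal sketch: replace an unbounded per-length FFT cache with a
    # bounded LRU cache. Illustrative only, not the real numpy.fft code.
    from collections import OrderedDict

    import numpy as np


    class BoundedLRUCache:
        """Keep at most `maxsize` entries, evicting the least recently used."""

        def __init__(self, maxsize=16):
            self.maxsize = maxsize
            self._data = OrderedDict()

        def get(self, key, factory):
            if key in self._data:
                self._data.move_to_end(key)      # mark as most recently used
                return self._data[key]
            value = factory(key)                 # compute on a cache miss
            self._data[key] = value
            if len(self._data) > self.maxsize:
                self._data.popitem(last=False)   # evict least recently used
            return value


    # Hypothetical use: cache the twiddle factors for each transform length n.
    _twiddle_cache = BoundedLRUCache(maxsize=16)


    def _twiddles(n):
        return np.exp(-2j * np.pi * np.arange(n) / n)


    def cached_twiddles(n):
        return _twiddle_cache.get(n, _twiddles)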

Hi all,

At Continuum we are trying to coordinate with Intel about releasing our patches from Accelerate upstream as well, rather than having them redo things we have already done but have not yet been able to open source. Accelerate also uses GPU-accelerated FFTs, and it would be nice if there were a supported NumPy way of plugging in these optimized approaches. This is not a trivial thing to do, though, and there are a lot of design choices.

We have been giving away Accelerate to academics since it was released but have asked companies to pay for it as a means of generating money to support open source. Several things that used to be in Accelerate only are now already open source (e.g. cuda.jit, guvectorize, and target='cuda' and target='parallel' in numba.vectorize). I expect this trend will continue. The FFT enhancements are another item on the list of things to make open source.

I, for one, welcome Intel's contributions and am enthusiastic about their joining the Python development community. In many cases it would be better if they would simply pay a company that has already built and tested this capability to release it, rather than develop it themselves yet again. Any encouragement that can be provided to Intel in this direction would help.

Many companies are now supporting open source. Even those that sell some software are still contributing overall to ensure that the total amount of useful open-source software available keeps increasing.

Best,

-Travis

On Wed, Jun 1, 2016 at 7:42 PM, Nathaniel Smith <njs@pobox.com> wrote:
-- *Travis Oliphant, PhD* *Co-founder and CEO* @teoliphant 512-222-5440 http://www.continuum.io
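The numba features Travis mentions above (cuda.jit, guvectorize, and the target='cuda'/'parallel' options to numba.vectorize) are real numba APIs; a minimal sketch of one of them is below. The function itself is a made-up example for illustration, not anything shipped in Accelerate or NumPy.

    # Sketch of numba.vectorize with target='parallel': the decorated
    # function is compiled into a NumPy ufunc that runs across threads.
    import numpy as np
    from numba import vectorize


    @vectorize(["float64(float64, float64)"], target='parallel')
    def scaled_add(x, y):
        # Element-wise kernel; numba handles the looping and threading.
        return 2.0 * x + y


    a = np.linspace(0.0, 1.0, 1000000)
    b = np.linspace(1.0, 2.0, 1000000)
    result = scaled_add(a, b)   # behaves like any other ufunc (broadcasting, etc.)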
