Hi all, I need a single precision FFT from numpy. I'm willing to hack numpy's fft module to make it but before I start I'm wondering if there's any interest in including single-precision fft functions in the numpy distribution? If that's the case I'll try to make a nice patch. Anand
Anand Patil wrote:
Hi all,
I need a single precision FFT from numpy. I'm willing to hack numpy's fft module to make it but before I start I'm wondering if there's any interest in including single-precision fft functions in the numpy distribution? If that's the case I'll try to make a nice patch.
Does it need to be numpy? Because I would love to see it in scipy, and the underlying fortran code is already there; only the wrappers need to be done. If you are not familiar with scipy code, I can tell you where to look (the module is scipy.fftpack), David
Dang, I should have checked my email an hour ago... it doesn't need to be numpy, but I already did it. I just made a new module called 'sfft' that's a copy of fft, but with everything in single precision. Is that any use to anyone? Anand On Fri, Nov 28, 2008 at 12:37 PM, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
Anand Patil wrote:
Hi all,
I need a single precision FFT from numpy. I'm willing to hack numpy's fft module to make it but before I start I'm wondering if there's any interest in including single-precision fft functions in the numpy distribution? If that's the case I'll try to make a nice patch.
Does it need to be numpy? Because I would love to see it in scipy, and the underlying fortran code is already there; only the wrappers need to be done. If you are not familiar with scipy code, I can tell you where to look (the module is scipy.fftpack),
David
Anand Patil wrote:
Dang, I should have checked my email an hour ago... it doesn't need to be numpy, but I already did it. I just made a new module called 'sfft' that's a copy of fft, but with everything in single precision. Is that any use to anyone?
Hi Anand, Sorry for not having answered before. If you care about the float support being available to many people, I think the best solution really is adding it to scipy. Generally, I think there is a consensus that we would like to avoid adding new features to numpy itself, especially when the features fit scipy quite well. To add float support to scipy.fftpack, you need to do the following:
- Enable building the single-precision version of the fftpack library (scipy/fftpack/src/fftpack) in scipy/fftpack/setup.py
- Start writing fftpack wrappers in C (look at zfft_pack.c and zfft.c for a simple example: complex->complex fft, one dimension)
- Add support at the python level.
The 2nd step is the one which will take time, although it should be quite similar to the double precision version. cheers, David
2008/11/30 David Cournapeau <david@ar.media.kyoto-u.ac.jp>:
Anand Patil wrote:
Dang, I should have checked my email an hour ago... it doesn't need to be numpy, but I already did it. I just made a new module called 'sfft' that's a copy of fft, but with everything in single precision. Is that any use to anyone?
Sorry for not having answered before. If you care about the float support being available to many people, I think the best solution really is adding it to scipy. Generally, I think there is a consensus that we would like to avoid adding new features to numpy itself, especially when the features fit scipy quite well.
To add float support to scipy.fftpack, you need to do the following:
- Enable building the single-precision version of the fftpack library (scipy/fftpack/src/fftpack) in scipy/fftpack/setup.py
- Start writing fftpack wrappers in C (look at zfft_pack.c and zfft.c for a simple example: complex->complex fft, one dimension)
- Add support at the python level.
The 2nd step is the one which will take time, although it should be quite similar to the double precision version.
I'd also like to suggest that, if possible, it would be nice if single-precision FFTs were not a separate module, or even a separate function, but instead the usual fft function selected them when handed a single-precision input. Anne
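Anne's suggestion can be sketched as dtype-based dispatch. This is a hypothetical illustration, not the actual scipy.fftpack code: the function name and the casting shortcut are made up, and a real implementation would call the single-precision FFTPACK wrappers instead of casting.

```python
import numpy as np

def fft_dispatch(x):
    """Hypothetical wrapper: choose single- or double-precision FFT
    based on the input array's dtype, so callers never need a
    separate single-precision function or module."""
    x = np.asarray(x)
    if x.dtype in (np.float32, np.complex64):
        # single-precision path: a real implementation would call the
        # float FFTPACK routines; here we merely cast the result to
        # show the intended output dtype
        return np.fft.fft(x).astype(np.complex64)
    # default double-precision path
    return np.fft.fft(x)
```

With this shape of API, `fft_dispatch(x)` returns complex64 for float32/complex64 input and complex128 otherwise, which is the transparent behaviour Anne is asking for.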
2008/12/1 Anne Archibald <aarchiba@physics.mcgill.ca>:
I'd also like to suggest that, if possible, it would be nice if single-precision FFTs were not a separate module, or even a separate function, but instead the usual fft function selected them when handed a single-precision input.
Has there been any progress on the single precision FFT front? It seems that this is a highly requested feature, so it should go on the TODO list for 0.8. Cheers Stéfan
Stéfan van der Walt wrote:
2008/12/1 Anne Archibald <aarchiba@physics.mcgill.ca>:
I'd also like to suggest that, if possible, it would be nice if single-precision FFTs were not a separate module, or even a separate function, but instead the usual fft function selected them when handed a single-precision input.
Has there been any progress on the single precision FFT front?
Not that I am aware of - for sure nothing has been committed to the scipy tree. There was some discussion recently about someone interested in doing the work in numpy - but I think this should be done in scipy (to avoid putting more code in numpy - and because the underlying fortran code is already there for single precision in scipy's case). For anyone moderately familiar with f2py and numpy, it would not take much time, since it would mostly be a copy and paste of the existing code for the low-level wrapper, and a bit of careful work to avoid breaking any public API for the high-level wrapper, cheers, David
2009/1/6 David Cournapeau <david@ar.media.kyoto-u.ac.jp>:
Has there been any progress on the single precision FFT front?
Not that I am aware of - for sure nothing has been committed to the scipy tree. There was some discussion recently about someone interested in doing the work in numpy - but I think this should be done in scipy (to avoid putting more code in numpy - and because the underlying fortran code is already there for single precision in scipy's case).
For anyone moderately familiar with f2py and numpy, it would not take much time, since it would mostly be a copy and paste of the existing code for the low-level wrapper, and a bit of careful work to avoid breaking any public API for the high-level wrapper,
I think this would make a very good GSOC project. I heard that some students were interested in working on Scipy. Regards Stéfan
Has there been any progress on the single precision FFT front? For anyone moderately familiar with f2py and numpy, it would not take much time, since it would mostly be a copy and paste of the existing code for the low-level wrapper, and a bit of careful work to avoid breaking any public API for the high-level wrapper,
I think this would make a very good GSOC project. I heard that some students were interested in working on Scipy.
If this is going to be scheduled for the 0.8 release, I think I can try to add the single precision fft wrappers in the next months. I'll report here if things get more complicated than expected! ciao, tiziano
Tiziano Zito wrote:
Has there been any progress on the single precision FFT front?
For anyone moderately familiar with f2py and numpy, it would not take much time, since it would mostly be a copy and paste of the existing code for the low-level wrapper, and a bit of careful work to avoid breaking any public API for the high-level wrapper,
I think this would make a very good GSOC project. I heard that some students were interested in working on Scipy.
If this is going to be scheduled for the 0.8 release, I think I can try to add the single precision fft wrappers in the next months.
I got bored during a meeting: single precision is now implemented in scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is, real input -> hermitian output for rfft). It should be transparent, e.g.:

from scipy.fftpack import fft, rfft
import numpy as np

fft(np.random.randn(10)).dtype == np.cdouble
fft(np.random.randn(10).astype(np.complex64)).dtype == np.complex64

Note that not everything works yet - float32 input to fft does not work yet, for example (it is upcast to double precision, as before). It is not super-tested either (though no less than the double versions). cheers, David
On Wed, Jan 7, 2009 at 10:45, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
I got bored during a meeting: single precision is now implemented in scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is real input -> hermitian output for rfft).
Hear, hear! Let's schedule you some more meetings! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
2009/1/7 Robert Kern <robert.kern@gmail.com>:
On Wed, Jan 7, 2009 at 10:45, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
I got bored during a meeting: single precision is now implemented in scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is real input -> hermitian output for rfft).
Hear, hear! Let's schedule you some more meetings!
Make sure they're adequately boring. Nice work! Anne
Anne Archibald wrote:
Make sure they're adequately boring.
Note that I did not say they were boring, but shamefully, my Japanese is still poor enough that formal presentations mostly elude me :) I fixed fft so that the complex transform handles complex numbers with 0 imaginary part - effectively making fft of float32 arrays work in single precision. I also discovered at the same time that fftpack has some cosine and sine transforms, which is great - I will be able to easily implement some functions missing compared to the matlab signal toolbox with this. I discovered something weird - though unrelated: real fft (rfft) in numpy and scipy differ in their output format (numpy keeps only one 'side' of the hermitian output - scipy puts all the numbers in an array of reals, which is a bit strange). I don't think the scipy format makes much sense? In C, I can see the advantage of having input/output of the same type/size, but in python, I am not sure. Modifying this would be a major breakage, though, cheers, David
On Thu, Jan 8, 2009 at 1:53 AM, David Cournapeau < david@ar.media.kyoto-u.ac.jp> wrote:
Anne Archibald wrote:
Make sure they're adequately boring.
Note that I did not say they were boring, but shamefully, my Japanese is still poor enough that formal presentations mostly elude me :)
I fixed fft so that the complex transform handles complex numbers with 0 imaginary part - effectively making fft of float32 arrays work in single precision. I also discovered at the same time that fftpack has some cosine and sine transforms, which is great - I will be able to easily implement some functions missing compared to the matlab signal toolbox with this.
I discovered something weird - though unrelated: real fft (rfft) in numpy and scipy differ in their output format (numpy keeps only one 'side' of the hermitian output - scipy puts all the numbers in an array of reals, which is a bit strange). I don't think the scipy format makes much sense? In C, I can see the advantage of having input/output of the same type/size, but in python, I am not sure. Modifying this would be a major breakage, though,
There was a lot of discussion about this several years back. The "natural" way for the real transform to store the results of an in-place transform is with the DC and Nyquist frequencies, which are both real, stored as the real and imaginary parts of the DC value. This keeps the input and output arrays the same size, but it is somewhat unnatural from the user's point of view. The values may also be shuffled so that the results appear in real, complex, ..., complex, real order. I forget which way fftpack does it. I prefer the numpy way for higher level usage, especially as it works better with ndarrays. Now that fftpack is the only fft version in scipy and there is only one version of the real transform to deal with, it might be a good time to revisit the question and settle things before we hit version 1.0. Chuck
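For concreteness, the two rfft output conventions under discussion can be compared side by side. This sketch uses numpy only; the scipy-style packed array is constructed by hand, following fftpack's layout for even n: [y0, Re(y1), Im(y1), ..., Re(y_{n/2})].

```python
import numpy as np

x = np.random.randn(8)

# numpy convention: one 'side' of the hermitian output,
# n//2 + 1 complex values; for even n, y[0] (DC) and y[-1]
# (Nyquist) have zero imaginary part
y = np.fft.rfft(x)
assert y.shape == (5,)

# scipy.fftpack convention: the same information packed into
# n reals, alternating real/imaginary parts of the interior bins
packed = np.concatenate([
    [y[0].real],
    np.column_stack([y[1:-1].real, y[1:-1].imag]).ravel(),
    [y[-1].real],
])
assert packed.shape == (8,)
```

Both arrays carry exactly the same information (the round trip np.fft.irfft(y) recovers x); the disagreement is purely about which layout is more convenient at the python level.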
2009/1/7 David Cournapeau <david@ar.media.kyoto-u.ac.jp>:
I got bored during a meeting: single precision is now implemented in scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is, real input -> hermitian output for rfft). It should be transparent, e.g.:
And here I was thinking this could be a GSoC task -- I should recalibrate my estimator when it comes to bored Frenchmen in Japan! Thanks, David. Cheers Stéfan
On Wed, Jan 7, 2009 at 4:45 PM, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
I got bored during a meeting: single precision is now implemented in scipy.fftpack for complex fft (1d and nd) and 'real' fft (that is, real input -> hermitian output for rfft). It should be transparent, e.g.:
from scipy.fftpack import fft, rfft
import numpy as np

fft(np.random.randn(10)).dtype == np.cdouble
fft(np.random.randn(10).astype(np.complex64)).dtype == np.complex64
Excellent! I'd like to bring the docstrings up to date on this as well. "If the input is single-precision (real or complex), the output will be single-precision also" - or do you have other suggestions?
David
-- Tom Grydeland <Tom.Grydeland@(gmail.com)>
On Fri, Nov 28, 2008 at 12:37:49PM +0000, Anand Patil wrote:
I need a single precision FFT from numpy. I'm willing to hack numpy's fft module to make it but before I start I'm wondering if there's any interest in including single-precision fft functions in the numpy distribution? If that's the case I'll try to make a nice patch.
I'd really love it. More generally, I'd really be interested in having all the numpy functions (including the linalg ones) work with single precision, as my main limitation in the work I am doing right now is memory. I did have a quick look, and it seemed to me that this was not always possible, due to the underlying fortran libraries. Gaël
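Gaël's memory point is easy to quantify: a float32 array takes exactly half the storage of a float64 array of the same shape. A minimal check, assuming only numpy:

```python
import numpy as np

n = 1_000_000
a64 = np.zeros(n)                     # default float64: 8 bytes/element
a32 = np.zeros(n, dtype=np.float32)   # float32: 4 bytes/element

# single precision halves the memory footprint
assert a64.nbytes == 2 * a32.nbytes
```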
participants (10)
- Anand Patil
- Anne Archibald
- Anne Archibald
- Charles R Harris
- David Cournapeau
- Gael Varoquaux
- Robert Kern
- Stéfan van der Walt
- Tiziano Zito
- Tom Grydeland