I'm thinking (again) about using numpy for signal processing applications. One issue is that some data types commonly used in signal processing are not available in numpy (or Python). Specifically, it is frequently required to convert floating point algorithms into integer algorithms. numpy is fine for arrays of integers (of various sizes), but it is also very useful to have arrays of complex<integers>. While numpy has complex<double,float>, it doesn't have complex<int,int64,...>. Has anyone thought about this?
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
I'm thinking (again) about using numpy for signal processing applications. One issue is that some data types commonly used in signal processing are not available in numpy (or Python). Specifically, it is frequently required to convert floating point algorithms into integer algorithms. numpy is fine for arrays of integers (of various sizes), but it is also very useful to have arrays of complex<integers>. While numpy has complex<double,float>, it doesn't have complex<int,int64,...>. Has anyone thought about this?
A bit. Multiplication begins to be a problem, though. Would you also want fixed point multiplication with scaling, a la PPC with altivec? What about division? So on and so forth. I think something like this would best be implemented in a specialized signal processing package but I am not sure of the best way to do it.
Chuck
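For reference, the fixed-point multiply-with-scaling that Chuck mentions amounts to forming a double-width product and shifting it back down. A minimal numpy sketch, assuming Q15 operands; the format and the helper name are illustrative, and the altivec fractional-multiply instructions additionally round and saturate:

import numpy as np

# Illustrative Q15 fixed-point multiply: int16 operands, int32 product,
# then an arithmetic shift back into int16 range.
def q15_mul(a, b, frac_bits=15):
    prod = a.astype(np.int32) * b.astype(np.int32)
    return (prod >> frac_bits).astype(np.int16)

a = np.array([16384, -8192], dtype=np.int16)  # 0.5, -0.25 in Q15
b = np.array([8192, 16384], dtype=np.int16)   # 0.25, 0.5 in Q15
print(q15_mul(a, b))                          # [ 4096 -4096], i.e. 0.125, -0.125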
Charles R Harris wrote:
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
I'm thinking (again) about using numpy for signal processing applications. One issue is that some data types commonly used in signal processing are not available in numpy (or Python). Specifically, it is frequently required to convert floating point algorithms into integer algorithms. numpy is fine for arrays of integers (of various sizes), but it is also very useful to have arrays of complex<integers>. While numpy has complex<double,float>, it doesn't have complex<int,int64,...>. Has anyone thought about this?
A bit. Multiplication begins to be a problem, though. Would you also want fixed point multiplication with scaling, a la PPC with altivec? What about division? So on and so forth. I think something like this would best be implemented in a specialized signal processing package but I am not sure of the best way to do it.
I'd keep things as simple as possible. No fixed point/scaling. It's simple enough to explicitly rescale things as you wish. That is (using C++ syntax):

complex<int> a, b;
complex<int> c = a * b;
complex<int> d = c >> 4;

Complicating life is interoperability (conversion) of types. I've used this concept for some years with C++/Python - but not with numpy. It's pretty trivial to make a complex<int> type as a C extension to Python. Adding this to numpy would be really useful.
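A rough numpy sketch of the same multiply-then-rescale idea, carrying the real and imaginary parts as separate integer arrays since numpy has no complex-integer dtype (the helper name is illustrative):

import numpy as np

# (ar + j*ai) * (br + j*bi), followed by an explicit arithmetic right shift.
def cmul_int(ar, ai, br, bi, shift=0):
    cr = ar * br - ai * bi
    ci = ar * bi + ai * br
    return cr >> shift, ci >> shift

ar = np.array([3, -5], dtype=np.int32)
ai = np.array([4, 2], dtype=np.int32)
br = np.array([1, 7], dtype=np.int32)
bi = np.array([-2, 6], dtype=np.int32)
cr, ci = cmul_int(ar, ai, br, bi, shift=4)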
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
Charles R Harris wrote:
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
I'm thinking (again) about using numpy for signal processing applications. One issue is that some data types commonly used in signal processing are not available in numpy (or Python). Specifically, it is frequently required to convert floating point algorithms into integer algorithms. numpy is fine for arrays of integers (of various sizes), but it is also very useful to have arrays of complex<integers>. While numpy has complex<double,float>, it doesn't have complex<int,int64,...>. Has anyone thought about this?
A bit. Multiplication begins to be a problem, though. Would you also want fixed point multiplication with scaling, a la PPC with altivec? What about division? So on and so forth. I think something like this would best be implemented in a specialized signal processing package but I am not sure of the best way to do it.
I'd keep things as simple as possible. No fixed point/scaling. It's simple enough to explicitly rescale things as you wish.
That is (using C++ syntax): complex<int> a, b; complex<int> c = a * b; complex<int> d = c >> 4;
Complicating life is interoperability (conversion) of types.
I've used this concept for some years with C++/Python - but not with numpy. It's pretty trivial to make a complex<int> type as a C extension to Python. Adding this to numpy would be really useful.
How about fiddling with floating point to emulate integers by subclassing ndarray? That wouldn't buy you the speed and size advantage of true fixed point but would make a flexible emulator. Which raises the question, what are your main goals in using such a data type? Not that I don't see the natural utility of having complex integer numbers (Gaussian integers), but if you are trying to emulate hardware, something more flexible might be appropriate.
Chuck
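To make the subclassing suggestion concrete, here is a rough sketch; the class name and the round-to-nearest rule are chosen for illustration only. Values are stored as complex floats, and both parts are rounded back to integers after each ufunc, so ordinary numpy arithmetic stays on the integer lattice.

import numpy as np

# Values are stored as complex floats; both parts are rounded back to
# integers after construction and after every ufunc result.
class ComplexIntEmulator(np.ndarray):
    def __new__(cls, data):
        obj = np.array(data, dtype=np.complex128).view(cls)
        return obj._quantize()

    def _quantize(self):
        if np.iscomplexobj(self):
            # Round both parts to the nearest integer, in place.
            self.real[...] = np.rint(self.real)
            self.imag[...] = np.rint(self.imag)
        return self

    def __array_wrap__(self, out_arr, context=None, return_scalar=False):
        # Re-quantize the result of any ufunc (add, multiply, divide, ...).
        return out_arr.view(ComplexIntEmulator)._quantize()

a = ComplexIntEmulator([3 + 4j, -5 + 2j])
b = ComplexIntEmulator([1 - 2j, 7 + 6j])
c = a * b    # products stay integer-valued
d = c / 16   # rounds to nearest, unlike a hardware shift, which truncates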
Charles R Harris wrote:
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
Charles R Harris wrote:
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
I'm thinking (again) about using numpy for signal processing applications. One issue is that some data types commonly used in signal processing are not available in numpy (or Python). Specifically, it is frequently required to convert floating point algorithms into integer algorithms. numpy is fine for arrays of integers (of various sizes), but it is also very useful to have arrays of complex<integers>. While numpy has complex<double,float>, it doesn't have complex<int,int64,...>. Has anyone thought about this?
A bit. Multiplication begins to be a problem, though. Would you also want fixed point multiplication with scaling, a la PPC with altivec? What about division? So on and so forth. I think something like this would best be implemented in a specialized signal processing package but I am not sure of the best way to do it.
I'd keep things as simple as possible. No fixed point/scaling. It's simple enough to explicitly rescale things as you wish.
That is (using C++ syntax): complex<int> a, b; complex<int> c = a * b; complex<int> d = c >> 4;
Complicating life is interoperability (conversion) of types.
I've used this concept for some years with C++/Python - but not with numpy. It's pretty trivial to make a complex<int> type as a C extension to Python. Adding this to numpy would be really useful.
How about fiddling with floating point to emulate integers by subclassing ndarray? That wouldn't buy you the speed and size advantage of true fixed point but would make a flexible emulator. Which raises the question, what are your main goals in using such a data type? Not that I don't see the natural utility of having complex integer numbers (Gaussian integers), but if you are trying to emulate hardware, something more flexible might be appropriate.
Chuck
Yes, this is intended for modelling hardware. I don't know what you mean by "more flexible". I design my hardware algorithms to use integer arithmetic. What did you have in mind?
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
Charles R Harris wrote:
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
Charles R Harris wrote:
On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
I'm thinking (again) about using numpy for signal processing applications. One issue is that some data types commonly used in signal processing are not available in numpy (or Python). Specifically, it is frequently required to convert floating point algorithms into integer algorithms. numpy is fine for arrays of integers (of various sizes), but it is also very useful to have arrays of complex<integers>. While numpy has complex<double,float>, it doesn't have complex<int,int64,...>. Has anyone thought about this?
A bit. Multiplication begins to be a problem, though. Would you also want fixed point multiplication with scaling, a la PPC with altivec? What about division? So on and so forth. I think something like this would best be implemented in a specialized signal processing package but I am not sure of the best way to do it.
I'd keep things as simple as possible. No fixed point/scaling. It's simple enough to explicitly rescale things as you wish.
That is (using C++ syntax): complex<int> a, b; complex<int> c = a * b; complex<int> d = c >> 4;
Complicating life is interoperability (conversion) of types.
I've used this concept for some years with C++/Python - but not with numpy. It's pretty trivial to make a complex<int> type as a C extension to Python. Adding this to numpy would be really useful.
How about fiddling with floating point to emulate integers by subclassing ndarray? That wouldn't buy you the speed and size advantage of true fixed point but would make a flexible emulator. Which raises the question, what are your main goals in using such a data type? Not that I don't see the natural utility of having complex integer numbers (Gaussian integers), but if you are trying to emulate hardware, something more flexible might be appropriate.
Chuck
Yes, this is intended for modelling hardware. I don't know what you mean by "more flexible". I design my hardware algorithms to use integer arithmetic. What did you have in mind?
Well, you could do bounds and overflow checking, use mixed integer precisions, and also deal with oddball sizes, such as 12 bits. You might also support 16 bit floats, something that is not available as a native type on most platforms. I'm just tossing some ideas out there. As I say, I don't see any reason not to have integer complex arrays, if only for the data type. Then again, maybe you could define your own data type and just overload the operators, something like

In [5]: complex32 = dtype('int16,int16')
In [6]: zeros(2, dtype=complex32)
Out[6]: array([(0, 0), (0, 0)], dtype=[('f0', '<i2'), ('f1', '<i2')])

I suspect that the tools you want for emulation and debugging would be hard to get with exact replication.
Chuck
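A rough sketch of that structured-dtype-plus-operators route, using named fields for readability. The wrapper class and its methods are illustrative, not an existing numpy API; the int32 widening mimics what a hardware multiplier would do before narrowing back.

import numpy as np

# Same layout as dtype('int16,int16') above, but with named fields.
complex32 = np.dtype([('re', np.int16), ('im', np.int16)])

class ComplexIntArray:
    def __init__(self, re, im):
        self.data = np.zeros(len(re), dtype=complex32)
        self.data['re'] = re
        self.data['im'] = im

    def __mul__(self, other):
        a, b = self.data, other.data
        # Widen to int32 for the cross products, then narrow back to int16.
        re = a['re'].astype(np.int32) * b['re'] - a['im'].astype(np.int32) * b['im']
        im = a['re'].astype(np.int32) * b['im'] + a['im'].astype(np.int32) * b['re']
        return ComplexIntArray(re.astype(np.int16), im.astype(np.int16))

    def __repr__(self):
        return repr(self.data)

x = ComplexIntArray([3, -5], [4, 2])
y = ComplexIntArray([1, 7], [-2, 6])
print(x * y)   # real/imag pairs (11, -2) and (-47, -16)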
participants (2):
- Charles R Harris
- Neal Becker