On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
Charles R Harris wrote:

> On 10/5/07, Neal Becker <ndbecker2@gmail.com> wrote:
>>
>> I'm thinking (again) about using numpy for signal processing
>> applications.  One issue is that some data types commonly used in signal
>> processing are not available in numpy (or python).  Specifically, it is
>> frequently required to convert floating point algorithms into integer
>> algorithms.  numpy is fine for arrays of integers (of various sizes), but
>> it is also very useful to have arrays of complex<integers>.  While numpy
>> has complex<double,float>, it doesn't have complex<int,int64,...>.  Has
>> anyone thought about this?
>
>
> A bit. Multiplication begins to be a problem, though. Would you also want
> fixed point multiplication with scaling, a la PPC with altivec? What about
> division? So on and so forth. I think something like this would best be
> implemented in a specialized signal processing package but I am not sure
> of the best way to do it.
>

I'd keep things as simple as possible.  No fixed point/scaling.  It's simple
enough to explicitly rescale things as you wish.

That is (using C++ syntax):
complex<int> a, b;
complex<int> c = a * b;   // full-precision product
complex<int> d = c >> 4;  // explicit rescale, applied by the user

Complicating life is interoperability (conversion) of types.

I've used this concept for some years with C++/Python - but not with numpy.
It's pretty trivial to make a complex<int> type as a C extension to Python.
Adding this to numpy would be really useful.
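
For illustration, something like the following could approximate arrays of
complex<int> with plain numpy building blocks today (just a sketch; the
cint32 dtype and the helper names are placeholders, not an existing or
proposed numpy API):

import numpy as np

# Placeholder dtype: 32-bit integer real and imaginary fields.
cint32 = np.dtype([('re', np.int32), ('im', np.int32)])

def cmul(a, b):
    # Complex multiply on the structured arrays:
    # (ar + j*ai) * (br + j*bi) = (ar*br - ai*bi) + j*(ar*bi + ai*br)
    out = np.empty(np.broadcast(a, b).shape, dtype=cint32)
    out['re'] = a['re'] * b['re'] - a['im'] * b['im']
    out['im'] = a['re'] * b['im'] + a['im'] * b['re']
    return out

def rshift(a, n):
    # Explicit rescale, the analogue of "complex<int> d = c >> 4;" above.
    out = np.empty_like(a)
    out['re'] = a['re'] >> n
    out['im'] = a['im'] >> n
    return out

x = np.array([(3, 4), (1, -2)], dtype=cint32)
y = np.array([(2, 1), (5, 0)], dtype=cint32)
z = rshift(cmul(x, y), 4)   # multiply, then explicitly rescale

A real complex<int> dtype built into numpy would of course handle this
directly, without carrying helpers like these around.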

How about fiddling with floating point to emulate integers by subclassing ndarray? That wouldn't buy you the speed and size advantages of true fixed point, but it would make a flexible emulator. Which raises the question: what are your main goals in using such a data type? Not that I don't see the natural utility of having complex integer numbers (Gaussian integers), but if you are trying to emulate hardware, something more flexible might be appropriate.
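
Something along these lines, say - a very rough, untested sketch, with the
class name made up just for illustration:

import numpy as np

class IntEmulatedArray(np.ndarray):
    # Sketch: a complex-float-backed array that rounds its contents after
    # every ufunc, loosely emulating complex integer arithmetic.
    def __new__(cls, data):
        return np.asarray(data, dtype=np.complex128).round().view(cls)

    def __array_wrap__(self, out_arr, context=None, return_scalar=False):
        # Round real and imaginary parts back to whole values after each op.
        return np.asarray(out_arr).round().view(type(self))

a = IntEmulatedArray([3 + 4j, 1 - 2j])
b = IntEmulatedArray([2 + 1j, 5 + 0j])
c = a * b    # products stay at whole (Gaussian integer) values
d = c / 16   # result is rounded, so this plays the role of a >> 4 rescale

Rounding after every operation is only a crude stand-in, of course; real
fixed point needs defined word widths and overflow/saturation behavior,
which is where the hardware question comes in.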

Chuck