On 7/31/07, Fernando Perez <fperez.net@gmail.com> wrote:
Hi all,

consider this little script:

from numpy import poly1d, float, float32

p = poly1d([1., 2.])       # the polynomial 1*x + 2
three = float(3)           # Python float (numpy.float is the builtin alias)
three32 = float32(3)       # numpy scalar

print 'three*p:', three*p
print 'three32*p:', three32*p
print 'p*three32:', p*three32


which produces when run:

In [3]: run pol1d.py
three*p:
3 x + 6
three32*p: [ 3.  6.]
p*three32:
3 x + 6


The fact that multiplication between poly1d objects and numbers is:

- non-commutative when the numbers are numpy scalars
- different for the same number if it is a python float vs a numpy scalar

is rather unpleasant, and I can see it causing hard-to-find bugs,
depending on whether your code receives a parameter that arrived as a
Python float or as a numpy scalar.
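
Until it's sorted out, one defensive sketch (assuming, per the script
above, that only the numpy-scalar path misbehaves) is to coerce scalar
coefficients to Python floats before multiplying:

# Defensive sketch: coerce numpy scalars to Python floats before
# multiplying by a poly1d, so both operand orders take the same path.
from numpy import poly1d, float32

p = poly1d([1., 2.])
c = float(float32(3))       # plain Python float, whatever the source was
print 'c*p:', c*p           # now agrees with p*c
print 'p*c:', p*c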

This was found today by a colleague on numpy 1.0.4.dev3937. It feels
like a bug to me; do others agree? Or is it consistent with a part of
the zen of numpy I've missed thus far?


It looks like a bug to me, but it also looks like it's going to be tricky to fix. What appears to be going on is that float32.__mul__ is called first, and for some reason it calls poly1d.__array__, so the result comes back as a plain array rather than a poly1d. If one comments out __array__, it ends up doing something odd with __iter__ and __len__ and spits out a different wrong answer. If __iter__ and __len__ are removed as well, the script works correctly.
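
Here is a minimal sketch of that dispatch, using a toy class of my own
(not poly1d's internals) to isolate the mechanism: an object that
exposes __array__ gets converted to an ndarray inside the scalar's
__mul__, so its own __rmul__ is never consulted, whereas a Python float
returns NotImplemented and defers as expected:

# Toy class (mine, not poly1d) isolating the dispatch: the numpy scalar's
# __mul__ consumes __array__ and returns an ndarray, bypassing __rmul__;
# the Python float defers to __rmul__ as expected.
from numpy import array, float32

class HasArray(object):
    def __array__(self):
        return array([1., 2.])
    def __rmul__(self, other):
        return '__rmul__ called'

h = HasArray()
print 'float32(3)*h:', float32(3)*h   # array result, __rmul__ bypassed
print '3.0*h:', 3.0*h                 # '__rmul__ called'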

My guess is that this is the scalar object being too clever, but it might just be a bad interaction between the scalar object and poly1d. Poly1d has a lot of, perhaps too much, trickiness.
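
If the scalar is the culprit, one direction worth testing is numpy's
__array_priority__ hook, the existing mechanism for asking numpy
operations to defer to another class's reflected methods. This is only
a sketch; whether the scalar code in 1.0.4 consults the attribute
before calling __array__ is an assumption to be verified, not a known
fix:

# Sketch only: __array_priority__ asks numpy's mixed-type operations to
# defer to this class's reflected methods; whether 1.0.4's scalar math
# honors it is an assumption to be tested.
from numpy import array, float32

class Wrapped(object):
    __array_priority__ = 100.0        # ask numpy ops to defer to us
    def __array__(self):
        return array([1., 2.])
    def __rmul__(self, other):
        return '__rmul__ called'

w = Wrapped()
print 'float32(3)*w:', float32(3)*w   # '__rmul__ called' if the hook is honored

If that pans out, giving poly1d an __array_priority__ might be less
invasive than ripping out __array__, __iter__, or __len__.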



--
.  __
.   |-\
.
.  tim.hochberg@ieee.org