On Sat, Feb 13, 2010 at 1:11 AM, Fernando Perez <fperez.net@gmail.com> wrote:
Mmh, today I got bitten by this again. It took me a while to figure out what was going on while trying to construct a pedagogical example manipulating numpy poly1d objects, and after searching for 'poly1d multiplication float' in my gmail inbox, the *only* post I found was this old one of mine, so I guess I'll just resuscitate it:
On Tue, Jul 31, 2007 at 2:54 PM, Fernando Perez <fperez.net@gmail.com> wrote:
Hi all,
consider this little script:
from numpy import poly1d, float, float32

p = poly1d([1., 2.])
three = float(3)
three32 = float32(3)

print 'three*p:', three*p
print 'three32*p:', three32*p
print 'p*three32:', p*three32
which produces when run:
In [3]: run pol1d.py
three*p: 3 x + 6
three32*p: [ 3.  6.]
p*three32: 3 x + 6
The fact that multiplication between poly1d objects and numbers is:

- non-commutative when the numbers are numpy scalars
- different for the same number if it is a python float vs a numpy scalar

is rather unpleasant, and I can see this causing hard-to-find bugs, depending on whether your code gets a parameter that came as a python float or a numpy one.
This was found today by a colleague on numpy 1.0.4.dev3937. It feels like a bug to me; do others agree? Or is it consistent with a part of the zen of numpy I've missed thus far?
Tim H. mentioned that it might be tricky to fix. I'm wondering if there are any new ideas on this front since then, because it's really awkward to explain to new students that poly1d objects have this kind of odd behavior in operations with scalars.
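For what it's worth, one way to see where the asymmetry comes from (a sketch of the mechanism, not a walkthrough of numpy's internals): poly1d supports the array protocol, so when the numpy scalar's method runs first it just sees the coefficient array and does elementwise arithmetic, whereas poly1d's own methods do polynomial arithmetic and hand back a poly1d:

```python
import numpy as np

p = np.poly1d([1., 2.])

# poly1d exposes its coefficients via the array protocol, so a numpy
# scalar's __mul__ sees a plain length-2 ndarray of coefficients:
print(np.asarray(p))      # the coefficient array, [1., 2.]

# whereas poly1d's own __mul__/__rmul__ perform polynomial
# multiplication and return another poly1d:
print(type(p * 3.0))
```

Which of the two wins depends on whose method Python tries first, hence the order dependence.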
The same underlying problem happens for addition, but there the result (depending on the order of the operands) diverges even more:
In [560]: p
Out[560]: poly1d([ 1.,  2.])
In [561]: print(p)
1 x + 2
In [562]: p+3
Out[562]: poly1d([ 1.,  5.])

In [563]: p+three32
Out[563]: poly1d([ 1.,  5.])

In [564]: three32+p
Out[564]: array([ 4.,  5.])   # !!!
I'm ok with teaching students that in floating point, basic algebraic operations may not be exactly associative and that ignoring this fact can lead to nasty surprises. But explaining that a+b and b+a give completely different *types* of answer is kind of defeating my 'python is the simple language you want to learn' :)
Is this really unfixable, or does one of our resident gurus have some ideas on how to approach the problem?
The new polynomials don't have that problem.

In [1]: from numpy.polynomial import Polynomial as Poly

In [2]: p = Poly([1,2])

In [3]: 3*p
Out[3]: Polynomial([ 3.,  6.], [-1.,  1.])

In [4]: p*3
Out[4]: Polynomial([ 3.,  6.], [-1.,  1.])

In [5]: float32(3)*p
Out[5]: Polynomial([ 3.,  6.], [-1.,  1.])

In [6]: p*float32(3)
Out[6]: Polynomial([ 3.,  6.], [-1.,  1.])

In [7]: 3.*p
Out[7]: Polynomial([ 3.,  6.], [-1.,  1.])

In [8]: p*3.
Out[8]: Polynomial([ 3.,  6.], [-1.,  1.])

In [9]: p + float32(3)
Out[9]: Polynomial([ 4.,  2.], [-1.,  1.])

In [10]: float32(3) + p
Out[10]: Polynomial([ 4.,  2.], [-1.,  1.])

They are only in the removed 1.4 release, unfortunately. You could just pull that folder and run them as a separate module. They do have a problem with ndarrays behaving differently on the left and right, but __array_priority__ can be used to fix that. I haven't made that last fix because I'm not quite sure how I want them to behave.

Chuck
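The general fix Chuck alludes to is to make numpy defer to the other operand in mixed binary ops. A minimal sketch of the idea, using a hypothetical `ScalarSafe` class (not numpy's actual implementation); in the numpy of that era the knob was `__array_priority__`, while modern numpy (>= 1.13) spells the full opt-out `__array_ufunc__ = None`:

```python
import numpy as np

class ScalarSafe:
    # Opting out of the ufunc machinery makes numpy arrays and scalars
    # return NotImplemented from their binary ops, so Python falls back
    # to our __rmul__ instead of broadcasting over the coefficients.
    __array_ufunc__ = None

    def __init__(self, coef):
        self.coef = np.asarray(coef, dtype=float)

    def __mul__(self, other):
        # scale every coefficient by the scalar
        return ScalarSafe(self.coef * float(other))

    __rmul__ = __mul__  # commutative by construction

    def __repr__(self):
        return 'ScalarSafe(%r)' % (self.coef.tolist(),)

p = ScalarSafe([1., 2.])
print(np.float32(3) * p)   # a ScalarSafe, not an ndarray
print(p * np.float32(3))   # same type and value either way
```

With the opt-out in place, `np.float32(3) * p` and `p * np.float32(3)` go through the same code path, which is exactly the symmetry the old poly1d lacks.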