Unpleasant behavior with poly1d and numpy scalar multiplication
Hi all,

consider this little script:

from numpy import poly1d, float, float32
p = poly1d([1., 2.])
three = float(3)
three32 = float32(3)

print 'three*p:', three*p
print 'three32*p:', three32*p
print 'p*three32:', p*three32

which produces when run:

In [3]: run pol1d.py
three*p: 3 x + 6
three32*p: [ 3.  6.]
p*three32: 3 x + 6

The fact that multiplication between poly1d objects and numbers is:

- non-commutative when the numbers are numpy scalars
- different for the same number if it is a python float vs a numpy scalar

is rather unpleasant, and I can see this causing hard-to-find bugs, depending on whether your code gets a parameter that came as a python float or a numpy one.

This was found today by a colleague on numpy 1.0.4.dev3937. It feels like a bug to me; do others agree? Or is it consistent with a part of the zen of numpy I've missed thus far?

Thanks,

f
On 7/31/07, Fernando Perez wrote:
<snip>
It looks like a bug to me, but it also looks like it's going to be tricky to fix. What seems to be going on is that float32.__mul__ is called first. For some reason it calls poly1d.__array__. If one comments out __array__, it ends up doing something odd with __iter__ and __len__ and spitting out a different wrong answer. If both of those are removed, this script works OK. My guess is that this is the scalar object being too clever, but it might just be a bad interaction between the scalar object and poly1d. Poly1d has a lot of, perhaps too much, trickiness.

tim.hochberg@ieee.org
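Tim's diagnosis can be illustrated with a toy class (a sketch, not poly1d itself): once a class exposes an __array__ hook, a NumPy scalar's __mul__ coerces the object to a plain ndarray instead of returning NotImplemented, so the object's __rmul__ never runs:

```python
import numpy as np

class MiniPoly:
    """Toy stand-in for poly1d: just coefficients plus an __array__ hook."""
    def __init__(self, coeffs):
        self.coeffs = np.asarray(coeffs, dtype=float)

    def __array__(self, dtype=None, copy=None):
        # This is the hook NumPy uses to coerce the object to an ndarray.
        return np.array(self.coeffs, dtype=dtype)

    def __mul__(self, other):
        return MiniPoly(self.coeffs * other)

    __rmul__ = __mul__

p = MiniPoly([1., 2.])
print(type(p * np.float32(3)).__name__)   # MiniPoly: our __mul__ ran
print(type(np.float32(3) * p).__name__)   # ndarray: float32.__mul__ went through __array__
```

The asymmetry is exactly the one in the script above: Python calls float32.__mul__ first, and since MiniPoly is array-convertible, the scalar never defers to MiniPoly.__rmul__.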
Mmh, today I got bitten by this again. It took me a while to figure
out what was going on while trying to construct a pedagogical example
manipulating numpy poly1d objects, and after searching for 'poly1d
multiplication float' in my gmail inbox, the *only* post I found was
this old one of mine, so I guess I'll just resuscitate it:
On Tue, Jul 31, 2007 at 2:54 PM, Fernando Perez wrote:
<snip>
Tim H. mentioned how it might be tricky to fix. I'm wondering if there are any new ideas on this front since then, because it's really awkward to explain to new students that poly1d objects have this kind of odd behavior regarding operations with scalars.

The same underlying problem happens for addition, but in this case the answer (depending on the order of operations) changes even more:

In [560]: p
Out[560]: poly1d([ 1.,  2.])

In [561]: print(p)
1 x + 2

In [562]: p+3
Out[562]: poly1d([ 1.,  5.])

In [563]: p+three32
Out[563]: poly1d([ 1.,  5.])

In [564]: three32+p
Out[564]: array([ 4.,  5.])   # !!!

I'm OK with teaching students that in floating point, basic algebraic operations may not be exactly associative and that ignoring this fact can lead to nasty surprises. But explaining that a+b and b+a give completely different *types* of answer is kind of defeating my 'Python is the simple language you want to learn' pitch :)

Is this really unfixable, or does one of our resident gurus have some ideas on how to approach the problem?

Thanks!

f
On Sat, Feb 13, 2010 at 3:11 AM, Fernando Perez wrote:
<snip>
Is this really unfixable, or does one of our resident gurus have some ideas on how to approach the problem?
From several recent discussions about selecting which method is called, it looks like multiplication and addition could easily be fixed by adding a higher __array_priority__ to poly1d; I didn't see any __array_priority__ specified in class poly1d(object).

For the discussion about fixing equal, not_equal, or whichever other methods cannot be changed by __array_priority__, I haven't seen any solution (but maybe I'm wrong).

Josef
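A sketch of Josef's suggestion, using a toy class rather than patching poly1d itself: giving the class a high __array_priority__ makes NumPy scalars return NotImplemented, so Python falls back to the class's own reflected operators (the class name and priority value here are illustrative):

```python
import numpy as np

class MiniPoly:
    # A high __array_priority__ tells NumPy scalars and ufuncs to defer
    # to this class, so float32(3) * p reaches our __rmul__.
    __array_priority__ = 100

    def __init__(self, coeffs):
        self.coeffs = np.asarray(coeffs, dtype=float)

    def __array__(self, dtype=None, copy=None):
        return np.array(self.coeffs, dtype=dtype)

    def __mul__(self, other):
        return MiniPoly(self.coeffs * other)

    __rmul__ = __mul__

    def __add__(self, other):
        c = self.coeffs.copy()
        c[-1] += other          # add a scalar to the constant term
        return MiniPoly(c)

    __radd__ = __add__

p = MiniPoly([1., 2.])
print(type(np.float32(3) * p).__name__)   # MiniPoly: the scalar deferred to us
print(type(np.float32(3) + p).__name__)   # MiniPoly: same for addition
```

Without the __array_priority__ line, both expressions come back as plain ndarrays via __array__, which is the original complaint.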
Thanks!

f
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On Sat, Feb 13, 2010 at 1:11 AM, Fernando Perez wrote:
<snip>
Is this really unfixable, or does one of our resident gurus have some ideas on how to approach the problem?
The new polynomials don't have that problem.

In [1]: from numpy.polynomial import Polynomial as Poly

In [2]: p = Poly([1,2])

In [3]: 3*p
Out[3]: Polynomial([ 3.,  6.], [-1.,  1.])

In [4]: p*3
Out[4]: Polynomial([ 3.,  6.], [-1.,  1.])

In [5]: float32(3)*p
Out[5]: Polynomial([ 3.,  6.], [-1.,  1.])

In [6]: p*float32(3)
Out[6]: Polynomial([ 3.,  6.], [-1.,  1.])

In [7]: 3.*p
Out[7]: Polynomial([ 3.,  6.], [-1.,  1.])

In [8]: p*3.
Out[8]: Polynomial([ 3.,  6.], [-1.,  1.])

In [9]: p + float32(3)
Out[9]: Polynomial([ 4.,  2.], [-1.,  1.])

In [10]: float32(3) + p
Out[10]: Polynomial([ 4.,  2.], [-1.,  1.])

They are only in the removed 1.4 release, unfortunately. You could just pull that folder and run them as a separate module. They do have a problem with ndarrays behaving differently on the left and right, but __array_priority__ can be used to fix that. I haven't made that last fix because I'm not quite sure how I want them to behave.

Chuck
On Sat, Feb 13, 2010 at 10:34 AM, Charles R Harris wrote:
The new polynomials don't have that problem.
In [1]: from numpy.polynomial import Polynomial as Poly
In [2]: p = Poly([1,2])
Aha, great! Many thanks, I can tell my students this, and just show them the caveat of calling float(x) on any scalar they want to use with the 'old' ones for now.

I remember being excited about your work on the new Polys, but since I'm teaching with stock 1.3, I hadn't found them recently and just forgot about them. Excellent.

One minor suggestion: I think it would be useful to have the new polys have some form of pretty-printing like the old ones. It is actually useful when working, to verify what one has at hand, to see an expanded printout like the old ones do:

In [26]: p_old = numpy.poly1d([3, 2, 1])

In [27]: p_old
Out[27]: poly1d([3, 2, 1])

In [28]: print(p_old)
   2
3 x + 2 x + 1

Just yesterday I was validating some code against a symbolic construction with sympy, and it was handy to pretty-print them; I also think it makes them much easier to grasp for students new to the tools.

In any case, thanks both for the tip and especially the code contribution!

Cheers,

f
On Sat, Feb 13, 2010 at 10:04 AM, Fernando Perez wrote:
<snip>
One minor suggestion: I think it would be useful to have the new polys have some form of pretty-printing like the old ones. It is actually useful when working, to verify what one has at hand, to see an expanded printout like the old ones do:
I thought about that, but decided it was best left to a derived class, say PrettyPoly ;) Overriding __repr__ and __str__ is an example where inheritance makes sense.
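A hypothetical sketch of such a derived class (the name and output format are made up for illustration; the current numpy.polynomial.Polynomial has since grown its own string formatting):

```python
from numpy.polynomial import Polynomial

class PrettyPoly(Polynomial):
    """Hypothetical subclass: inherit the terse repr, expand __str__."""
    def __str__(self):
        # Render coefficients in increasing-power order, e.g. 1 + 2 x^1 + 3 x^2.
        terms = []
        for power, c in enumerate(self.coef):
            terms.append(f"{c:g}" if power == 0 else f"{c:g} x^{power}")
        return " + ".join(terms)

p = PrettyPoly([1, 2, 3])
print(p)    # 1 + 2 x^1 + 3 x^2
```

The division of labor is the one Chuck describes: repr stays the implementation-oriented form from the base class, while str is free to be human-friendly.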
<snip>
Chuck
On Sat, Feb 13, 2010 at 10:24 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
<snip>
I thought about that, but decided it was best left to a derived class, say PrettyPoly ;) Overriding __repr__ and __str__ is an example where inheritance makes sense.
Hmm, and on testing it looks like maybe "isinstance" should be replaced with "type(s) is x" to avoid the left-right confusion when mixing derived classes with the base class. Binary operators play havoc with inheritance.

<snip>

Chuck
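The kind of left-right surprise Chuck means can be sketched with a pair of plain Python classes (nothing numpy-specific; names are illustrative): when a base-class operator accepts subclass instances via isinstance and builds its result from type(self), the result type depends on which operand sits on the left:

```python
class Poly:
    def __init__(self, c):
        self.c = c

    def __add__(self, other):
        if isinstance(other, Poly):        # also True for any subclass
            return type(self)(self.c + other.c)
        return NotImplemented

class PrettyPoly(Poly):
    def __str__(self):                     # only the printing differs
        return f"pretty({self.c})"

a, b = Poly(1), PrettyPoly(2)
print(type(a + b).__name__)   # Poly: the left operand's class wins
print(type(b + a).__name__)   # PrettyPoly: now the subclass is on the left
```

Since PrettyPoly defines no __radd__ of its own, Python never gives it a chance to claim `a + b`, so mixing the two classes yields different result types depending on operand order.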
On Sat, Feb 13, 2010 at 12:24 PM, Charles R Harris wrote:
<snip>
I thought about that, but decided it was best left to a derived class, say PrettyPoly ;) Overriding __repr__ and __str__ is an example where inheritance makes sense.
I disagree: I think one of the advantages of having both str and repr is precisely to make it easy to have both a terse, implementation-oriented representation and a more human-friendly one out of the box. I don't like using 'training wheels' classes; people tend to learn one thing and use it for a long time, so I think objects should be as fully usable as possible from the get-go. I suspect I wouldn't use/teach a PrettyPoly if it existed.

But it's ultimately your call. In any case, many thanks for the code!

Best,

f
On Sat, Feb 13, 2010 at 8:02 PM, Fernando Perez wrote:
<snip>
I disagree: I think one of the advantages of having both str and repr is precisely to make it easy to have both a terse, implementation-oriented representation and a more human-friendly one
Note that ipython calls __repr__ to print the output. __repr__ is supposed to provide a string that can be used to recreate the object; a pretty-printed version of __repr__ doesn't provide that. Also, an array or list of polynomials having pretty-printed entries looks pretty ugly, with the newlines and all; try it with poly1d. I was also thinking that someone might want to provide a better display at some point, drawing on a canvas, for instance. And what happens when the degree gets up over 100, which is quite reasonable with the Chebyshev polynomials?
out of the box. I don't like using 'training wheels' classes; people tend to learn one thing and use it for a long time, so I think objects should be as fully usable as possible from the get-go. I suspect I wouldn't use/teach a PrettyPoly if it existed.
I thought the pretty print in the original was intended as a teaching aid, but I didn't think it was a good interface for programming work. That said, I could add a pretty-print option, or a pretty-print function. I would be happy to provide another method that ipython could look for and call for pretty printing, if that seems reasonable to you.
But it's ultimately your call. In any case, many thanks for the code!
Chuck
On Sat, Feb 13, 2010 at 8:32 PM, Charles R Harris wrote:
<snip>
Also, an array or list of polynomials having pretty-printed entries looks pretty ugly, with the newlines and all; try it with poly1d.
Example (an object array of ten identical poly1d's; the two-line pretty-printed entries get interleaved into a jumble of exponent rows and coefficient rows):

a
array([   2
1 x + 2 x + 3,    2
1 x + 2 x + 3, ...,    2
1 x + 2 x + 3], dtype=object)

print a
[   2
1 x + 2 x + 3    2
1 x + 2 x + 3 ...    2
1 x + 2 x + 3]
Chuck
On Sat, Feb 13, 2010 at 10:32 PM, Charles R Harris wrote:
Note that ipython calls __repr__ to print the output. __repr__ is supposed to provide a string that can be used to recreate the object, a pretty printed version of __repr__ doesn't provide that. Also, an array or list of
IPython calls repr because that's the convention the standard python shell uses, and I decided long ago to follow suit.
polynomials having pretty-printed entries looks pretty ugly, with the newlines and all; try it with poly1d. I was also thinking that someone might want to provide a better display at some point, drawing on a canvas, for instance. And what happens when the degree gets up over 100, which is quite reasonable with the Chebyshev polynomials?
sympy has pretty remarkable pretty-printing support; perhaps some of that could be reused. Just a thought. I do agree that 2-d printing is tricky, but that doesn't mean it's useless. For long and complicated expressions, getting the layout correct is not trivial.

But even good ole' poly1d's display is actually useful for small polynomials, which can aid if one is debugging a more complex code with test cases that lead to small polys. I realize this isn't always viable, but it does happen in practice.

But again, small nits, otherwise happy :) So if you don't see it as useful or don't have the time/interest, no worries. I don't see it as important enough to work on it myself, so I'm not going to complain further either :)
out of the box. I don't like using 'training wheels' classes; people tend to learn one thing and use it for a long time, so I think objects should be as fully usable as possible from the get-go. I suspect I wouldn't use/teach a PrettyPoly if it existed.
I thought the pretty print in the original was intended as a teaching aid, but I didn't think it was a good interface for programming work. That said, I could add a pretty-print option, or a pretty-print function. I would be happy to provide another method that ipython could look for and call for pretty printing, if that seems reasonable to you.
In IPython we're already shipping the 'pretty' extension:

http://bazaar.launchpad.net/~ipythondev/ipython/trunk/annotate/head%3A/IPyt...

So I guess we could just start adding __pretty__ to certain objects for such fancy representations.

Cheers,

f
On Sun, Feb 14, 2010 at 12:10 AM, Fernando Perez wrote:
<snip>
In IPython we're already shipping the 'pretty' extension:
http://bazaar.launchpad.net/~ipythondev/ipython/trunk/annotate/head%3A/IPyt...
So I guess we could just start adding __pretty__ to certain objects for such fancy representations.
That's what I was looking for. I see that it works for python >= 2.4 with some work. Does it work for python 3.1 also?

Chuck
On Sun, Feb 14, 2010 at 2:17 AM, Charles R Harris wrote:
That's what I was looking for. I see that it works for python >= 2.4 with some work. Does it work for python 3.1 also?
I haven't tried, but a quick scan of the code makes me think it would be pretty easy to port it to 3.1. It's all fairly straightforward code, so a 2to3 pass might be sufficient.

Cheers,

f
On Sat, Feb 13, 2010 at 11:10 PM, Fernando Perez wrote:
<snip>
sympy has pretty remarkable pretty-printing support; perhaps some of that could be reused. Just a thought.
Curious: how is sympy at deducing recursion relations and/or index functions? Reason: my first thought about Chuck's high-degree issue was that in such cases perhaps PrettyPoly (or __pretty__) could attempt to use summation notation (of course, this would only be useful when the coefficients are formulaic functions of the index, but hey, it's something).

DG
On Sun, Feb 14, 2010 at 2:40 AM, David Goldsmith wrote:
Curious: how is sympy at deducing recursion relations and/or index functions? Reason: my first thought about Chuck's high-degree issue was that in such cases perhaps PrettyPoly (or __pretty__) could attempt to use summation notation (of course, this would only be useful when the coefficients are formulaic functions of the index, but hey, it's something).
I don't think it has any such support, but I could be wrong.

Cheers,

f
Charles R Harris wrote:
I was also thinking that someone might want to provide a better display at some point, drawing on a canvas, for instance. And what happens when the degree gets up over 100, which is quite reasonable with the Chebyshev polynomials?
There may well be better ways to do it, but I've found the following function to be quite handy for visualising latex equations:

from matplotlib import interactive, is_interactive
from matplotlib.pyplot import figure, figtext, show
from matplotlib.backends.backend_agg import RendererAgg

def eqview(expr, fontsize=28, dpi=80):
    # Render a latex string centred in a figure sized to fit it.
    IS_INTERACTIVE = is_interactive()
    try:
        interactive(False)
        fig = figure(dpi=dpi, facecolor='w')
        h = figtext(0.5, 0.5, expr, fontsize=fontsize,
                    horizontalalignment='center',
                    verticalalignment='center')
        bbox = h.get_window_extent(RendererAgg(15, 15, dpi))
        fig.set_size_inches(1.1 * bbox.width / dpi,
                            1.25 * bbox.height / dpi)
        show()
    finally:
        interactive(IS_INTERACTIVE)

NB: Sympy provides the latex function to convert its equation objects into latex, as well as other ways to display the objects in the sympy.printing module. It shouldn't be too hard to do something similar if someone was so inclined!

HTH,
Dave
participants (6)

- Charles R Harris
- Dave Hirschfeld
- David Goldsmith
- Fernando Perez
- josef.pktd@gmail.com
- Timothy Hochberg