I'm not sure whether this is a Numpy problem or a Boost problem, so I'm posting to both communities.
In old Numeric, type(sqrt(5.5)) was float, but in numpy, type(sqrt(5.5)) is numpy.float64. This leads to a big performance hit in calculations in a beta version of VPython, using the VPython 3D "vector" class, compared with the old version that used Numeric (VPython is a 3D graphics module for Python; see vpython.org).
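A quick illustration of the type change (a minimal sketch; `np.sqrt` stands in for the bare `sqrt` imported from numpy in the text):

```python
import numpy as np

# Old Numeric returned a plain Python float; numpy returns a numpy scalar.
r = np.sqrt(5.5)
print(type(r))                        # <class 'numpy.float64'>

# numpy.float64 is actually a C-level subclass of Python's float,
# so it *is* a float as far as isinstance() is concerned ...
print(issubclass(np.float64, float))  # True
print(isinstance(r, float))           # True

# ... but type() comparisons and C-extension dispatch that expect
# exactly `float` see a different type.
print(type(r) is float)               # False
```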
Operator overloading of the VPython vector class works fine for vector*sqrt(5.5) but not for sqrt(5.5)*vector. The following free function catches 5.5*vector but fails to catch sqrt(5.5)*vector, whose type ends up as numpy.ndarray instead of the desired vector, with concomitant slow conversions in later vector calculations:
inline vector operator*( const double& s, const vector& v) { return vector( s*v.x, s*v.y, s*v.z); }
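The asymmetry can be reproduced, and worked around, in a pure-Python analogue (a sketch only; `Vec` is a hypothetical stand-in for VPython's C++ vector, and the explicit float() coercion mirrors what the missing left-hand overload would do):

```python
import numpy as np

class Vec:
    """Toy stand-in for VPython's 3D vector (illustration only)."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __mul__(self, s):          # vec * scalar: our method dispatches first
        return Vec(s * self.x, s * self.y, s * self.z)

    def __rmul__(self, s):         # scalar * vec: called only if the left
        return self.__mul__(s)     # operand's __mul__ returns NotImplemented

v = Vec(1.0, 2.0, 3.0)

# vec * numpy scalar works: Python calls Vec.__mul__ first.
assert type(v * np.sqrt(5.5)) is Vec

# Coercing the numpy scalar to a plain float first guarantees that
# float.__mul__ returns NotImplemented, so Vec.__rmul__ runs:
assert type(float(np.sqrt(5.5)) * v) is Vec
```

Presumably the right-hand case misbehaves in VPython because the real vector looks like a length-3 sequence to NumPy, so numpy.float64's multiply converts it to an ndarray instead of returning NotImplemented.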
I've thrashed around on this, including trying to add this:
inline vector operator*( const npy_float64& s, const vector& v) { return vector( s*v.x, s*v.y, s*v.z); }
But the compiler correctly complains that this conflicts with the double*vector version, since npy_float64 is in fact just a typedef for double.
It's interesting and presumably meaningful to the knowledgeable (not me) that vector*sqrt(5.5) yields a vector, even though the overloading speaks of double, not a specifically numpy name:
inline vector operator*( const double s) const throw() { return vector( s*x, s*y, s*z); }
VPython uses Boost, and the glue concerning vectors includes the following:
py::class_<vector>("vector", py::init< py::optional<double, double, double> >())
    .def( self * double())
    .def( double() * self)
As far as I can understand from the Boost Python documentation, this is the proper way to specify the left-hand and right-hand overloadings. But do I have to add something like .def( npy_float64() * self)? Help would be much appreciated.
Bruce Sherwood
Sorry to repeat myself and be insistent, but could someone please at least comment on whether I'm doing anything obviously wrong, even if you don't immediately have a solution to my serious problem? There was no response to my question (see copy below), which I sent to both the numpy and Boost mailing lists.
To the numpy experts: Is there something wrong, or something I could/should change in how I'm trying to overload multiplication of a numpy square root (or other numpy function) times my own "vector" object? I'm seeing a huge performance hit in going from Numeric to numpy because Numeric sqrt returned float whereas numpy sqrt returns numpy.float64, so that the result is not one of my vector objects. I don't have a problem with myvector*sqrt(5.5).
Desperately,
Bruce Sherwood
I'm not sure whether this is a Numpy problem or a Boost problem, so I'm posting to both communities. (I'm subscribed to both lists, but an attempt to post yesterday to this Boost list seems never to have gotten to the archives, so I'm trying again. My apologies if this shows up twice here.)
<snip>
Sorry this isn't an answer, just noise, but for those here who don't know, Bruce is the chief maintainer of the vpython project. I have found vpython, aka the "visual" module, to be a highly attractive and useful module for teaching physics. It would be great if someone with Boost experience would try to help him out. I wouldn't want him to get falsely disillusioned with this list, as I for one have been looking forward to a fully numpy-compatible version of vpython.
Gary R.
Bruce Sherwood wrote:
<snip>
On Dec 26, 2007 3:49 AM, Gary Ruben gruben@bigpond.net.au wrote:
<snip>
I think the problem is that few of us are that familiar with boost/python. I have used it myself, but only for interfacing C++ classes or accessing NumPy arrays through their buffer interface. I always avoided the Numeric machinery because it looked clumsy and inefficient to me; I didn't want an embedded version of Numeric (Numarray, NumPy), I wanted speed. Anyway, I think numpy.float64 hides a normal C double, so the slowdown probably comes from the boost/python machinery. I don't think boost/python was ever updated to use NumPy in particular, and the NumPy data type may be throwing it through an interpretive loop. The speed of double vs float arithmetic on Intel hardware is pretty much a wash, so it should not show much difference otherwise. You can get a float32 result from sqrt:

In [2]: type(sqrt(float32(5.5)))
Out[2]: <type 'numpy.float32'>
But I suspect it will make little difference; probably what is needed is the corresponding C/Python data type. I think that information is available somewhere, but I am not sure of the details. Someone else can probably help you there (Travis?).
Chuck
<snip>
Gary Ruben wrote:
<snip>
Please keep in mind that for many of us, this is the holiday season and we are on vacation. While I'm happy to check the list and give answers that are at the front of my head, deeper answers that require exploration or experimentation are beyond my available time. I'm sure others are in a similar situation.
Bruce Sherwood wrote:
<snip>
I'm not sure how to help explicitly as I do not know Boost very well and so am not clear on what specifically was done that is now not working (or where the problem lies).
I can, however, explain the numpy.float64 Python object.
This Python object is binary-compatible with a Python float object (and in fact is a C subtype of it). It is a simple wrapper around a regular C double.
There are lots of possibilities. The most likely issue in my mind is a coercion issue. Whereas the Python float probably bailed when it saw myvector as the other argument, the numpy.float64 is seeing myvector as something that can be converted into a NumPy array and therefore going ahead and doing the conversion and performing the multiplication (which will involve all the overhead for general ufuncs).
Thus, there needs to be a way to signal to numpy.float64 that it should let the other object handle it. I'm not sure what the right solution is here, but I think that is the problem. The good news is that we can change it if we figure out what the right thing to do is.
Best regards,
Travis O.
participants (5)
- Bruce Sherwood
- Charles R Harris
- Gary Ruben
- Robert Kern
- Travis E. Oliphant