Bruce Sherwood wrote:
Okay, I've implemented the scheme below that Scott Daniels proposed on the VPython mailing list, and it solves my problem. It's also much faster than using numpy directly: even with the "def" and "if" overhead, sqrt(scalar) is over 3 times faster than the numpy sqrt, and sqrt(array) is very nearly as fast as the numpy sqrt.
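A minimal sketch of the kind of dispatch scheme described above (the exact function from the list isn't reproduced here, so the names and structure are an assumption): a plain "def" with an "if" that sends Python scalars through math.sqrt and everything else through numpy.sqrt.

```python
import math

def sqrt(x):
    # Fast path (assumed shape of the scheme): plain Python numbers go
    # through math.sqrt, bypassing numpy's ufunc machinery entirely.
    if isinstance(x, (int, float)):
        return math.sqrt(x)
    # Anything else (e.g. a numpy array) falls back to the numpy ufunc,
    # which handles broadcasting and error-state control.
    import numpy
    return numpy.sqrt(x)
```

math.sqrt returns an ordinary Python float rather than a numpy scalar, which is exactly the short-circuit discussed below.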
Using math.sqrt short-circuits the ufunc approach of returning numpy scalars (with all the methods and attributes of 0-d arrays --- which is really their primary reason for being). It is also faster because it avoids the numpy ufunc machinery (which is some overhead --- the error-setting control and broadcasting facility don't happen for free).
Thanks to those who made suggestions. There remains the question of why operator overloading of the kind I've described worked with Numeric and Boost but not with numpy and Boost. Basically, it boils down to the fact that I took a shortcut when implementing generic multiplication (which all the scalars use for now) on numpy scalars, so that the multiplication ufunc is called whenever they are encountered.
Thus, when the numpy.float64 comes first, its multiply implementation gets called first and uses the equivalent of ufunc.multiply(scalar, vector). I suspect that because your vector can be converted to an array, this procedure succeeds (albeit more slowly than you would like), and so the vector object never gets a chance to try. A quick fix is to make ufunc.multiply(scalar, vector) return NotImplemented, which may not be desirable either. Alternatively, the generic scalar operations should probably not be so "inclusive" and should give the other object a chance to perform the operation more often (by returning NotImplemented).

-Travis O.
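To illustrate the coercion rule at stake here (with hypothetical Scalar and Vector classes, not the actual numpy or VPython types): when the left operand's __mul__ returns NotImplemented, Python automatically tries the right operand's reflected __rmul__, which is exactly the chance the vector object is being denied when the scalar's multiply handles everything itself.

```python
class Scalar:
    """Stand-in for a numeric scalar type (hypothetical)."""
    def __init__(self, value):
        self.value = value

    def __mul__(self, other):
        if isinstance(other, Scalar):
            return Scalar(self.value * other.value)
        # Not being "inclusive": decline the operation so Python
        # falls back to other.__rmul__(self).
        return NotImplemented

class Vector:
    """Stand-in for a vector type with its own multiplication (hypothetical)."""
    def __init__(self, *components):
        self.components = components

    def __rmul__(self, scalar):
        # Reached only because Scalar.__mul__ returned NotImplemented.
        return Vector(*(scalar.value * c for c in self.components))

v = Scalar(2.0) * Vector(1.0, 2.0, 3.0)
# v is a Vector: the scalar declined, so the vector performed the operation.
```

An operand that instead converts the other object to an array and returns its own result (as the ufunc does) prevents this fallback from ever firing, which is the behavior being questioned above.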