On Sep 29, 2015 8:25 AM, "Anne Archibald" <archibald@astron.nl> wrote:
> IEEE 754 has signum(NaN)->NaN. So does np.sign on floating-point arrays.
> Why should it be different for object arrays?

The argument for doing it this way would be that arbitrary Python objects don't have a sign, and the natural way to implement something like np.sign's semantics using only the "object" interface is:

    if obj < 0: return -1
    elif obj > 0: return 1
    elif obj == 0: return 0
    else: raise

In general I'm not a big fan of trying to do all kinds of guessing about how to handle random objects in object arrays, the kind that ends up with a big chain of type checks and fallback behaviors. Pretty soon we find ourselves trying to extend the language with our own generic dispatch system for arbitrary Python types, just for object arrays. (The current hack where, for object arrays, np.log will try calling obj.log() is particularly horrible. There is no rule in Python that "log" is a reserved method name for "logarithm" on arbitrary objects. Ditto for the other ufuncs that implement this hack.)

Plus, we hope that many use cases for object arrays will soon be supplanted by better dtype support, so now may not be the best time to invest heavily in making object arrays complicated and powerful.

OTOH, sometimes practicality beats purity, and at least object arrays are already kinda cordoned off from the rest of the system, so I don't feel as strongly as I would if we were talking about core functionality.

...is there a compelling reason to even support np.sign on object arrays at all? This seems pretty far into the weeds, and that tends to lead to poor intuition and decision making.

-n
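P.S.: To make that comparison-based fallback concrete, here's a rough standalone sketch of the semantics. This is illustration only, not numpy's actual implementation, and `object_sign` is a made-up name:

    import numpy as np

    def object_sign(obj):
        # Sign via the generic comparison protocol only: no assumption
        # that obj is float-like or has any numeric methods.
        if obj < 0:
            return -1
        elif obj > 0:
            return 1
        elif obj == 0:
            return 0
        else:
            # Reached for things like float("nan"), where all three
            # comparisons are False -- there's no sensible sign to return.
            raise TypeError("sign not defined for %r" % (obj,))

    signs = [object_sign(x) for x in np.array([2, -3, 0], dtype=object)]
    print(signs)  # [1, -1, 0]
    object_sign(float("nan"))  # raises TypeError

Note that `object_sign(float("nan"))` raises rather than returning NaN, because all three comparisons come back False; that's exactly where this diverges from the IEEE signum(NaN)->NaN behavior Anne is pointing at.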
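P.P.S.: And here's the obj.log() hack in action, which is why I call it horrible: any object whose method happens to be *named* "log" gets treated as if it had a logarithm. (Quick sketch; the exact repr of the output may vary across numpy versions.)

    import numpy as np

    class Weird(object):
        def log(self):
            # Nothing to do with logarithms; the name collision is the point.
            return "definitely not a logarithm"

    arr = np.array([Weird()], dtype=object)
    print(np.log(arr))  # -> ['definitely not a logarithm'], because np.log on
                        #    an object array just calls each element's .log()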