On Di, 2015-09-29 at 11:16 -0700, Nathaniel Smith wrote:
On Sep 29, 2015 8:25 AM, "Anne Archibald" <archibald@astron.nl> wrote:
IEEE 754 has signum(NaN)->NaN. So does np.sign on floating-point
arrays. Why should it be different for object arrays?
The argument for doing it this way would be that arbitrary python objects don't have a sign, and the natural way to implement something like np.sign's semantics using only the "object" interface is:

    if obj < 0:
        return -1
    elif obj > 0:
        return 1
    elif obj == 0:
        return 0
    else:
        raise TypeError
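As a runnable sketch of that comparison chain (the function name and the choice of TypeError are mine, not from the thread):

```python
def object_sign(obj):
    """Sign of an arbitrary Python object, using only comparisons.

    Mirrors the comparison chain above: objects that compare as
    neither negative, positive, nor zero (e.g. float('nan')) raise.
    """
    if obj < 0:
        return -1
    elif obj > 0:
        return 1
    elif obj == 0:
        return 0
    else:
        raise TypeError("sign is undefined for %r" % (obj,))

# Ordinary numbers behave as expected...
print(object_sign(-3))   # -1
print(object_sign(2.5))  # 1

# ...but NaN falls through every branch and raises, unlike the
# IEEE 754 / float-array behavior of returning NaN.
try:
    object_sign(float('nan'))
except TypeError as exc:
    print("raised:", exc)
```

This is exactly the tension in the thread: the comparison-only implementation cannot reproduce signum(NaN) -> NaN.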
In general I'm not a big fan of trying to do all kinds of guessing about how to handle random objects in object arrays, the kind that ends up with a big chain of type checks and fallback behaviors. Pretty soon we find ourselves trying to extend the language with our own generic dispatch system for arbitrary python types, just for object arrays. (The current hack where for object arrays np.log will try calling obj.log() is particularly horrible. There is no rule in python that "log" is a reserved method name for "logarithm" on arbitrary objects. Ditto for the other ufuncs that implement this hack.)
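To make the objection concrete, here is a toy version of the method-name fallback described above (this is an illustration of the hack, not NumPy's actual code; both classes are invented):

```python
import math

def object_log(obj):
    """Toy method-name dispatch: if the object has a .log()
    method, call it; otherwise treat it as a plain number."""
    if hasattr(obj, 'log'):
        return obj.log()
    return math.log(obj)

class MyNumber:
    """A class where .log() genuinely means 'logarithm'."""
    def __init__(self, value):
        self.value = value
    def log(self):
        return math.log(self.value)

class AuditTrail:
    """A class where .log() means 'record an event' -- nothing numeric."""
    def __init__(self):
        self.events = []
    def log(self):
        self.events.append('called')
        return self.events

print(object_log(MyNumber(math.e)))  # 1.0 -- the hack happens to work
print(object_log(AuditTrail()))      # ['called'] -- silently nonsense
```

Since "log" is not a reserved name in python, the dispatch succeeds on both objects even though only one of them means "logarithm" by it.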
Plus we hope that many use cases for object arrays will soon be supplanted by better dtype support, so now may not be the best time to invest heavily in making object arrays complicated and powerful.
I have a little dream here: we create a PyFloatDtype kind of thing (a bit different from our float, because it would always convert back to a python float and might raise more errors) which "registers" with the dtype system, saying "I know how to handle python floats, store them in an array, and provide ufunc implementations for them". The "object" dtype ufuncs would then try to call the ufunc on each element, including "conversion": they would find a float, and since it is not an array-like container, interpret it as a PyFloatDtype scalar and call that scalar's ufunc (the PyFloatDtype scalar would be a python float).

Of course I am likely thinking down the wrong road, but if you want e.g. an array of Decimals, you need some way to tell numpy that there is a PyDecimalDtype. "object" would then possibly be just a fallback meaning "figure out what to use for each element". It would be a bit slower, but it would work very generally, because numpy would not impose limits as such.

- Sebastian
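A very rough sketch of that registration idea (every name here -- register_dtype, the registry dict, object_ufunc -- is hypothetical, invented just to make the shape of the proposal concrete):

```python
import math
from decimal import Decimal

# Hypothetical registry mapping python types to handlers that
# know how to implement ufuncs for scalars of that type.
_DTYPE_REGISTRY = {}

def register_dtype(py_type, ufunc_impls):
    """Declare: 'I know how to handle instances of py_type.'"""
    _DTYPE_REGISTRY[py_type] = ufunc_impls

def object_ufunc(name, obj):
    """The 'object' fallback: per element, find the registered
    handler for the element's type and call its implementation."""
    for py_type, impls in _DTYPE_REGISTRY.items():
        if isinstance(obj, py_type) and name in impls:
            return impls[name](obj)
    raise TypeError("no registered dtype handles %s(%r)" % (name, obj))

# A 'PyFloatDtype'-style registration for plain python floats:
register_dtype(float, {'log': math.log})

# ...and the same mechanism makes an array of Decimals possible:
register_dtype(Decimal, {'log': lambda d: d.ln()})

print(object_ufunc('log', math.e))        # 1.0
print(object_ufunc('log', Decimal('1')))  # 0
```

The point of the sketch is the division of labor: the registered type supplies the per-scalar implementations, while "object" only does the per-element lookup, so numpy itself imposes no limits on which types can participate.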
OTOH sometimes practicality beats purity, and at least object arrays are already kinda cordoned off from the rest of the system, so I don't feel as strongly as if we were talking about core functionality.
...is there a compelling reason to even support np.sign on object arrays? This seems pretty far into the weeds, and that tends to lead to poor intuition and decision making.
-n
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion