[Numpy-discussion] Scalar casting rules use-case reprise

Matthew Brett matthew.brett at gmail.com
Fri Jan 4 06:09:23 EST 2013


Hi,

Reading the discussion on the scalar casting rule change, I realized I
was hazy on the use-cases that led to the rule that scalars cast
differently from arrays.

My impression was that the primary use-case was for lower-precision
floats. That is, when you have a large float32 array, you do not want
to double your memory use with:

>>> large_float32 + 1.0 # please no float64 here

Probably also:

>>> large_int8 + 1 # please no int32 / int64 here.
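
To make the outcomes concrete, here is a minimal sketch of what I
mean (large_float32 / large_int8 are just small stand-in arrays; the
dtypes shown are what I would expect under the current scalar casting
rules, and may differ between NumPy versions):

>>> import numpy as np
>>> large_float32 = np.zeros(3, dtype=np.float32)  # stand-in for a big array
>>> large_int8 = np.zeros(3, dtype=np.int8)
>>> (large_float32 + 1.0).dtype   # Python float does not force an upcast
dtype('float32')
>>> (large_int8 + 1).dtype        # Python int does not force an upcast
dtype('int8')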

That makes sense.  On the other hand, these are more ambiguous:

>>> large_float32 + np.float64(1) # really - you don't want float64?

>>> large_int8 + np.int32(1) # ditto
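
As far as I understand the current rules, the explicit scalar dtype
is also ignored here (again a sketch, continuing from the arrays
above; please correct me if I have the behaviour wrong):

>>> (large_float32 + np.float64(1)).dtype   # scalar dtype discarded
dtype('float32')
>>> (large_int8 + np.int32(1)).dtype        # ditto - the value fits in int8
dtype('int8')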

I wonder whether the main use-case was to deal with the default
types of Python floats and ints?  That is, I wonder whether it would
be worth considering (in the distant long term) doing fancy
guess-what-you-mean handling of Python scalars, on the basis that
they are of unspecified dtype, while making 0-dimensional (NumPy)
scalars follow the array casting rules.  As in:

>>> large_float32 + 1.0
# no upcast - we don't know what float type you meant for the scalar
>>> large_float32 + np.float64(1)
# upcast - you clearly meant the scalar to be float64
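
The distinction such a rule would rest on is just whether the scalar
carries dtype information.  A rough sketch (the function name is only
for illustration, not a proposed API):

>>> def has_explicit_dtype(x):
...     # Python ints and floats carry no dtype; NumPy scalars and
...     # 0-dimensional arrays do.
...     return isinstance(x, np.generic) or (
...         isinstance(x, np.ndarray) and x.ndim == 0)
...
>>> has_explicit_dtype(1.0)            # plain Python float - guess the dtype
False
>>> has_explicit_dtype(np.float64(1))  # explicit float64 - follow array rules
True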

In any case, can anyone remember the original use-cases well enough to
record them for future decision making?

Best,

Matthew


