
Hi,
(sorry, no time for a full reply, so for now I'm just answering what I believe is the main point)
Thanks for taking the time to discuss/explain this at all... I appreciate it.
The evilness lies in the silent switch between the rollover and upcast behavior, as in the example I gave previously:
In [50]: np.array([2], dtype='int8') + 127
Out[50]: array([-127], dtype=int8)

In [51]: np.array([2], dtype='int8') + 128
Out[51]: array([130], dtype=int16)
Right, but for better or for worse this is how *array* addition works. If I have an int16 array in my program, and I add a user-supplied array to it, I get rollover if they supply an int16 array and upcasting if they provide an int32. The answer may simply be that we consider scalar addition a special case; I think that's really what's tripping me up here. Granted, one is a type-dependent change while the other is a value-dependent change; but in my head they were connected by the rules for choosing an "effective" dtype for a scalar based on its value.
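Just to spell out the array-array case I mean (a minimal sketch; the rollover value assumes two's-complement wrapping, which is what NumPy integer arrays do):

```python
import numpy as np

a = np.array([32767], dtype=np.int16)  # int16 max

# Same-dtype operand: result stays int16, so the sum rolls over.
same = a + np.array([1], dtype=np.int16)

# Wider-dtype operand: result is upcast to int32, no rollover.
wider = a + np.array([1], dtype=np.int32)

print(same.dtype, same[0])    # int16 -32768
print(wider.dtype, wider[0])  # int32 32768
```

So for arrays the outcome is purely type-dependent, regardless of the values involved.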
If the scalar is the user-supplied value, it's likely you actually want a fixed behavior (either rollover or upcast) regardless of the numeric value being provided.
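One way to get that fixed behavior today is to decide the dtype explicitly before the addition, so the scalar's value can never change the result type. The helper names below are hypothetical, just to illustrate the two policies:

```python
import numpy as np

def add_rollover(arr, scalar):
    # Cast the scalar into the array's dtype first (an unsafe cast,
    # which wraps on overflow), so the result dtype is always arr.dtype.
    return arr + np.array(scalar).astype(arr.dtype)

def add_upcast(arr, scalar, wide=np.int64):
    # Do the arithmetic in an explicitly chosen wide type instead.
    return arr.astype(wide) + scalar

a = np.array([2], dtype=np.int8)
print(add_rollover(a, 127))  # stays int8: [-127]
print(add_rollover(a, 128))  # still int8: 128 wraps to -128, giving [-126]
print(add_upcast(a, 128))    # int64: [130]
```

Either policy then applies uniformly, whatever numeric value the user supplies.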
This is a good point; thanks.
Looking at what other numeric libraries are doing is definitely a good suggestion.
I just double-checked IDL, and for addition it seems to convert to the larger type:

a = bytarr(10)
help, a + fix(0)
<Expression>    INT       = Array[10]
help, a + long(0)
<Expression>    LONG      = Array[10]

Of course, IDL and Python scalars likely work differently.

Andrew