[Numpy-discussion] Do we want scalar casting to behave as it does at the moment?

Sebastian Berg sebastian at sipsolutions.net
Tue Jan 8 16:15:38 EST 2013


On Tue, 2013-01-08 at 19:59 +0000, Nathaniel Smith wrote:
> On 8 Jan 2013 17:24, "Andrew Collette" <andrew.collette at gmail.com> wrote:
> >
> > Hi,
> >
> > > I think you are voting strongly for the current casting rules, because
> > > they make it less obvious to the user that scalars are different from
> > > arrays.
> >
> > Maybe this is the source of my confusion... why should scalars be
> > different from arrays?  They should follow the same rules, as closely
> > as possible.  If a scalar value would fit in an int16, why not add it
> > using the rules for an int16 array?
> 
> The problem is that the rule for arrays - and for every other part of
> numpy in general - is that we *don't* pick types based on values.
> Numpy always uses input types to determine output types, not input
> values.
> 
> # This value fits in an int8
> In [5]: a = np.array([1])
> 
> # And yet...
> In [6]: a.dtype
> Out[6]: dtype('int64')
> 
> In [7]: small = np.array([1], dtype=np.int8)
> 
> # Computing 1 + 1 doesn't need a large integer... but we use one
> In [8]: (small + a).dtype
> Out[8]: dtype('int64')
> 
> Python scalars have unambiguous types: a Python 'int' is a C
> 'long', and a Python 'float' is a C 'double'. And these are the types
> that np.array() converts them to. So it's pretty unambiguous that
> "using the same rules for arrays and scalars" would mean, ignore the
> value of the scalar, and in expressions like
>   np.array([1], dtype=np.int8) + 1
> we should always upcast to int32/int64. The problem is that this makes
> working with narrow types very awkward for no real benefit, so
> everyone pretty much seems to want *some* kind of special case. Both of
> the options below are unambiguously special cases:
> 
> numarray through 1.5: in a binary operation, if one operand has
> ndim==0 and the other has ndim>0, ignore the width of the ndim==0
> operand.
> 
> 1.6, your proposal: in a binary operation, if one operand has ndim==0
> and the other has ndim>0, downcast the ndim==0 item to the smallest
> width that is consistent with its value and the other operand's type.
> 
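
To make the difference between those two rules concrete, here is a small
sketch (illustrative only; the printed dtypes depend on which casting rule
the installed NumPy version actually implements):

import numpy as np

small = np.array([1], dtype=np.int8)

# numarray-through-1.5 rule: the width of the ndim==0 operand is ignored,
# so both results would stay int8.
# 1.6-style value-based rule: the scalar is demoted to the smallest width
# that holds its value, so 1 stays int8 but 300 forces at least int16.
print((small + np.int64(1)).dtype)
print((small + np.int64(300)).dtype)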

Well, that leaves the maybe-not-so-implausible proposal of saying that
numpy scalars behave like arrays with ndim > 0, but python scalars behave
as they do in 1.6, to allow for easier work with narrow types.
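
A rough sketch of what that would mean (hypothetical; the comments describe
the proposal, not guaranteed 1.6 behaviour):

import numpy as np

small = np.array([1], dtype=np.int8)

# A numpy scalar would keep its dtype, exactly like a 0-d array:
print((small + np.int64(1)).dtype)   # under the proposal: int64

# A plain python int would keep the 1.6 value-based behaviour:
print((small + 1).dtype)             # under the proposal (and 1.6): int8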

Sebastian

> -n
