On 3 Jan 2013 23:39, "Andrew Collette" <andrew.collette@gmail.com> wrote:
>
> > Consensus in that bug report seems to be that for array/scalar operations like:
> >   np.array([1], dtype=np.int8) + 1000 # can't be represented as an int8!
> > we should raise an error, rather than either silently upcasting the
> > result (as in 1.6 and 1.7) or silently downcasting the scalar (as in
> > 1.5 and earlier).
>
> I have run into this a few times as a NumPy user, and I just wanted to
> comment that (in my opinion), having this case generate an error is
> the worst of both worlds.  The reason people can't decide between
> rollover and promotion is because neither is objectively better.  One
> avoids memory inflation, and the other avoids losing precision.  You
> just need to pick one and document it.  Kicking the can down the road
> to the user, and making him/her explicitly test for this condition, is
> not a very good solution.
>
> What does this mean in practical terms for NumPy users?  I personally
> don't relish the choice of always using numpy.add, or always wrapping
> my additions in checks for ValueError.

To be clear: we're only talking here about the case where you mix an array with a narrow dtype and a scalar whose value cannot be represented in that narrow dtype. If both sides are arrays then we continue to upcast as normal. So my impression is that this means very little in practical terms, because this is a rare and historically poorly supported situation.
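
For concreteness, here's a minimal sketch of the distinction (illustrative only; the exact output depends on which NumPy version and platform you're running):

    import numpy as np

    a = np.array([1], dtype=np.int8)

    # Array + array: the result dtype follows the usual promotion rules,
    # independent of the values involved, so this upcasts as normal.
    b = np.array([1000])          # default integer dtype, e.g. int64
    print((a + b).dtype)

    # Array + scalar, where the scalar's value doesn't fit in the
    # array's dtype. This is the contested case: 1.5 silently downcast
    # the scalar (rollover), 1.6/1.7 silently upcast the result, and
    # the proposal on the table is to raise an error instead.
    print(a + 1000)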

But if this is something you're running into in practice, then you may have a better idea than we do about the practical effects. Do you have any examples you can share of where this has come up?

-n