[Numpy-discussion] Integers to integer powers

Charles R Harris charlesr.harris at gmail.com
Fri May 20 15:23:39 EDT 2016


On Fri, May 20, 2016 at 1:15 PM, Nathaniel Smith <njs at pobox.com> wrote:

> On Fri, May 20, 2016 at 11:35 AM, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
> >
> >
> > On Thu, May 19, 2016 at 9:30 PM, Nathaniel Smith <njs at pobox.com> wrote:
> >>
> >> So I guess what makes this tricky is that:
> >>
> >> - We want the behavior to be the same for multiple-element arrays,
> >> single-element arrays, zero-dimensional arrays, and scalars -- the
> >> shape of the data shouldn't affect the semantics of **
> >>
> >> - We also want the numpy scalar behavior to match the Python scalar
> >> behavior
> >>
> >> - For Python scalars, int ** (positive int) returns an int, but int **
> >> (negative int) returns a float.
> >>
> >> - For arrays, int ** (positive int) and int ** (negative int) _have_
> >> to return the same type, because in general output types are always a
> >> function of the input types and *can't* look at the specific values
> >> involved, and specifically because if you do array([2, 3]) ** array([2,
> >> -2]) you can't return an array where the first element is int and the
> >> second is float (see the sketch after this list).
> >>
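A minimal sketch of those constraints, assuming ordinary CPython and NumPy
semantics for the uncontested cases (the negative-exponent case itself is
left out, since its behavior is exactly what is under discussion):

    import numpy as np

    # Python scalars: the result type of ** depends on the exponent's value.
    print(type(2 ** 2))    # <class 'int'>   -- integer result
    print(type(2 ** -2))   # <class 'float'> -- 0.25

    # NumPy arrays: one array has one dtype, so array ** array cannot return
    # int for some elements and float for others.
    mixed = np.array([4, 0.25])
    print(mixed.dtype)     # float64 -- the int is upcast; per-element result
                           # types are impossible, which is why the output
                           # dtype of ** must depend on the input dtypes alone.
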
> >> Given these immutable and contradictory constraints, the last bad
> >> option IMHO would be that we make int ** (negative int) an error in
> >> all cases, and the error message can suggest that instead of writing
> >>
> >>     np.array(2) ** -2
> >>
> >> they should instead write
> >>
> >>     np.array(2) ** -2.0
> >>
> >> (And similarly for np.int64(2) ** -2 versus np.int64(2) ** -2.0.)
> >>
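For comparison, a quick sanity check of the suggested spelling (assuming a
NumPy of that era; a float exponent already yields a float result whatever
the sign of the exponent):

    import numpy as np

    print(np.array(2) ** -2.0)   # 0.25, float64 -- works for arrays
    print(np.int64(2) ** -2.0)   # 0.25, float64 -- and for numpy scalars
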
> >> Definitely annoying, but all the other options seem even more
> >> inconsistent and confusing, and likely to encourage the writing of
> >> subtly buggy code...
> >>
> >> (I especially have in mind numpy's habit of silently switching between
> >> scalars and zero-dimensional arrays -- so it's easy to write code that
> >> you think handles arbitrary array dimensions, and it even passes all
> >> your tests, but then it fails when someone passes in data with a
> >> different shape and triggers some scalar/array inconsistency. E.g. if
> >> we make ** -2 work for scalars but not arrays, then this code:
> >>
> >> def f(arr):
> >>     return np.sum(arr, axis=0) ** -2
> >>
> >> works as expected for 1-d input, tests pass, everyone's happy... but
> >> as soon as you try to pass in higher dimensional integer input it will
> >> fail.)
> >>
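A minimal sketch of the shape-dependent scalar/array switch described above,
assuming the hypothetical rule that ** -2 is allowed for scalars but rejected
for arrays:

    import numpy as np

    # Summing a 1-d array over axis 0 returns a NumPy *scalar* ...
    print(type(np.sum(np.array([1, 2, 3]), axis=0)))          # numpy.int64

    # ... but summing a 2-d array over axis 0 returns an *array*.
    print(type(np.sum(np.array([[1, 2], [3, 4]]), axis=0)))   # numpy.ndarray

    # So under that rule, f(arr) above would succeed for 1-d integer input
    # and raise for 2-d integer input, even though f itself never changed.
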
> >
> > Hmm, the Alexandrian solution. The main difficulty with this solution is
> > that it will likely break working code. We could try it, or take the safe
> > route of raising a (Visible)DeprecationWarning.
>
> Right, sorry, I was talking about the end goal -- there's a separate
> question of how we get there. Pretty much any solution is going to
> require some sort of deprecation cycle though I guess, and at least
> the deprecate -> error transition is a lot easier than the working ->
> working-differently transition.
>
> > The other option is to simply treat the negative power case uniformly as
> > floor division and raise an error on zero division, but the difference
> > from Python power would be highly confusing. I think I would vote for the
> > second option with a DeprecationWarning.
>
> So "floor division" here would mean that k ** -n == 0 for all k and n
> except for k == 1, right? In addition to the consistency issue, that
> doesn't seem like a behavior that's very useful to anyone...
>

And -1 as well. The virtue is consistency while deprecating. Or we could
just back out the current changes in master and throw in deprecation
warnings. That has the virtue of simplicity and of not introducing possible
code breaks.
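
For reference, one possible reading of the floor-division option, written as
a hypothetical plain-Python sketch (the helper name int_pow_floor and the
1 // k ** n formulation are assumptions, not existing NumPy behavior):

    def int_pow_floor(k, n):
        """Hypothetical floor-division semantics for k ** n with int inputs."""
        if n >= 0:
            return k ** n
        if k == 0:
            raise ZeroDivisionError("0 cannot be raised to a negative power")
        # Floor of the exact value k ** n when n is negative.
        return 1 // k ** (-n)

    print(int_pow_floor(2, -2))    # 0
    print(int_pow_floor(1, -5))    # 1
    print(int_pow_floor(-1, -2))   # 1   -- the "-1 as well" case
    print(int_pow_floor(-1, -3))   # -1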

Chuck