# [Numpy-discussion] Integers to negative integer powers, time for a decision.

Nathaniel Smith njs at pobox.com
Sat Oct 8 17:51:58 EDT 2016

On Sat, Oct 8, 2016 at 1:40 PM, V. Armando Sole <sole at esrf.fr> wrote:
> Well, testing under windows 64 bit, Python 3.5.2, positive powers of
> integers give integers and negative powers of integers give floats. So, do
> you want to raise an exception when taking a negative power of an element of
> an array of integers? Because not doing so would be inconsistent with
> raising the exception when applying the same operation to the array.
>
> Clearly things are broken now (I get zeros when calculating negative powers
> of numpy arrays of integers other than 1), but that behavior was consistent
> with python itself under python 2.x because the division of two integers was
> an integer. That does not hold under Python 3.5 where the division of two
> integers is a float.

Even on Python 2, negative powers gave floats:

>>> sys.version_info
sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0)
>>> 2 ** -2
0.25

> You have offered either to raise an exception or to always return a float
> (i.e. even with positive exponents). You have never offered to be consistent
> with what Python does. This last option would be my favorite. If it cannot
> be implemented, then I would prefer always float. At least one would be
> consistent with something and we would not invent yet another convention.

Numpy tries to be consistent with Python when it makes sense, but this
is only one of several considerations. The use cases for numpy objects
are different from the use cases for Python scalar objects, so we also
consistently deviate in cases when that makes sense -- e.g., numpy
bools are very different from Python bools (Python barely
distinguishes between bools and integers, because it doesn't need to;
indexing makes the distinction much more important to numpy), numpy
integers are very different from Python integers (Python's
arbitrary-width integers provide great semantics, but don't play
nicely with large fixed-size arrays), numpy pays much more attention
to type consistency between inputs and outputs than Python does (again
because of the extra constraints imposed by working with
memory-intensive type-consistent arrays), etc.
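The bool case is a concrete illustration of this. The snippet below is a sketch of why indexing forces numpy to keep bools and integers apart, even though Python happily treats `True` as `1`:

```python
import numpy as np

a = np.array([10, 20, 30])

# A boolean array is a mask: it selects positions where the mask is True.
mask = np.array([True, False, True])
print(a[mask])           # selects elements 0 and 2

# An integer array of the "same" values is a list of positions instead.
idx = np.array([1, 0, 1])
print(a[idx])            # selects elements 1, 0, 1

# Plain Python, by contrast, barely distinguishes the two:
lst = [10, 20, 30]
print(lst[True])         # True acts as the integer 1 here
```

If numpy collapsed bools into integers the way Python does, the two indexing operations above would be indistinguishable.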

For python, 2 ** 2 -> int, 2 ** -2 -> float. But numpy can't do this,
because then 2 ** np.array([2, -2]) would have to be both int *and*
float, which it can't be. Not a problem that Python has. Or we could
say that the output is int if all the inputs are positive, and float
if any of them are negative... but then that violates the numpy
principle that output dtypes should be determined entirely by input
dtypes, without peeking at the actual values. (And this rule is very
important for avoiding nasty surprises when you run your code on new
inputs.)
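To make the dilemma concrete, here is a sketch of the options (note this demo is written against current NumPy releases, which post-date this thread and raise an error for the integer/negative case, as the discussion eventually settled; on older releases the last expression behaved differently):

```python
import numpy as np

# Integer inputs with all-positive exponents: the result stays integer
# (the exact width, e.g. int64 vs int32, is platform-dependent).
print((np.array([2, 3]) ** np.array([2, 2])).dtype)

# A float base sidesteps the problem entirely: the result dtype is
# float regardless of the exponents' signs, so 2.0 ** -2 == 0.25 works.
print(np.array([2.0, 2.0]) ** np.array([2, -2]))

# Integer base with a negative integer exponent cannot satisfy both
# "int ** int -> int" and "2 ** -2 == 0.25" with a single output dtype,
# so current NumPy raises instead:
try:
    np.array([2, 2]) ** np.array([2, -2])
except ValueError as e:
    print(e)
```

The key point is visible in the first two lines: the result dtype is a function of the input dtypes only, never of the values inside the arrays.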

And then there's backwards compatibility to consider. As mentioned, we
*could* deviate from Python by making ** always return float... but
this would almost certainly break tons and tons of people's code that
is currently doing integer ** positive integer and expecting to get an
integer back. Which is something we don't do without very careful
weighing of the trade-offs, and my intuition is that this one is so
disruptive we probably can't pull it off. Breaking working code needs
a *very* compelling reason.

-n

--
Nathaniel J. Smith -- https://vorpus.org
