# "pow" (power) function

Ben Cartwright bencvt at gmail.com
Thu Mar 16 03:46:22 CET 2006

```
Russ wrote:
> Ben Cartwright wrote:
> > Russ wrote:
>
> > > Does "pow(x,2)" simply square x, or does it first compute logarithms
> > > (as would be necessary if the exponent were not an integer)?
> >
> >
> > The former, using binary exponentiation (quite fast), assuming x is an
> > int or long.
> >
> > If x is a float, Python coerces the 2 to 2.0, and CPython's float_pow()
> > function is called.  This function calls libm's pow(), which in turn
> > uses logarithms.
>
> I just did a little time test (which I should have done *before* my
> original post!), and 2.0**2 seems to be about twice as fast as
> pow(2.0,2). That seems consistent with your claim above.

Actually, the fact that x**y is faster than pow(x, y) has nothing to do
with the int vs. float issue.  It's due to the way Python dispatches
operators versus builtin function calls.  Paul Rubin hit the nail on
the head when he suggested you check the bytecode:

>>> import dis
>>> dis.dis(lambda x, y: x**y)
6 BINARY_POWER
7 RETURN_VALUE
>>> dis.dis(lambda x, y: pow(x,y))
9 CALL_FUNCTION            2
12 RETURN_VALUE

The function call (and the global name lookup for pow) adds overhead on
every invocation, especially when you're doing it a million times
(which, coincidentally, timeit does).
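
A quick way to see this (an illustrative sketch; the variable names
below are just for this example) is to hoist the global lookup of pow
into timeit's setup, which narrows the gap with the operator:

from timeit import Timer
print(Timer('x**y', 'x, y = 2, 10').timeit())        # single BINARY_POWER opcode
print(Timer('pow(x, y)', 'x, y = 2, 10').timeit())   # global lookup + call each time
print(Timer('p(x, y)', 'p = pow; x, y = 2, 10').timeit())  # lookup hoisted into setup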

Anyway, if you want to see the int vs. float issue in action, try this:

>>> from timeit import Timer
>>> Timer('2**2').timeit()
0.12681011582321844
>>> Timer('2.0**2.0').timeit()
0.33336011743438121
>>> Timer('2.0**2').timeit()
0.36681835556112219
>>> Timer('2**2.0').timeit()
0.37949818370600497

As you can see, the int version is much faster than the float version.
The last two cases, which also use the float version, have an
additional performance hit due to type coercion.  The relative speed
differences are similar when using pow():

>>> Timer('pow(2, 2)').timeit()
0.33000968869157532
>>> Timer('pow(2.0, 2.0)').timeit()
0.50356362184709269
>>> Timer('pow(2.0, 2)').timeit()
0.55112938185857274
>>> Timer('pow(2, 2.0)').timeit()
0.55198819605811877
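
For the curious, the binary exponentiation mentioned earlier works
roughly like this (a pure-Python sketch of square-and-multiply, not
CPython's actual C implementation):

def ipow(base, exp):
    # Square-and-multiply: O(log exp) multiplications, no logarithms.
    result = 1
    while exp:
        if exp & 1:        # low bit set: fold the current base in
            result *= base
        base *= base       # square for the next bit
        exp >>= 1
    return result

e.g. ipow(3, 5) == 243.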

> I'm a bit surprised that pow() would use logarithms even if the
> exponent is an integer. I suppose that just checking for an integer
> exponent could blow away the gain that would be achieved by avoiding
> logarithms.  On the other hand, I would think that using logarithms
> could introduce a tiny error (e.g., pow(2.0,2) = 3.9999999996 <- made
> up result) that wouldn't occur with multiplication.

These are good questions to ask an expert in floating point arithmetic.
Which I'm not. :-)
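
One thing you *can* do is spot-check it empirically (results depend on
your platform's libm, so this proves nothing in general):

for x in (2.0, 3.0, 10.0):
    # compare libm's pow() against plain multiplication
    print(x, pow(x, 2), pow(x, 2) == x * x)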

> > > Does "x**0.5" use the same algorithm as "sqrt(x)", or does it use some
> > > other (perhaps less efficient) algorithm based on logarithms?
> >
> > The latter, and that algorithm is libm's pow().  Except for a few
> > special cases that Python handles, all floating-point exponentiation is
> > left to libm.  Checking to see if the exponent is 0.5 is not one of
> > those special cases.
>
> I just did another little time test comparing 2.0**0.5 with sqrt(2.0).
> Surprisingly, 2.0**0.5 seems to take around a third less time.

Again, this is because of the operator vs. function lookup issue.
pow(2.0, 0.5) vs. sqrt(2.0) is a better comparison:

>>> from timeit import Timer
>>> Timer('pow(2.0, 0.5)').timeit()
0.51701437102815362
>>> Timer('sqrt(2.0)', 'from math import sqrt').timeit()
0.46649096722239847
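
Note that besides speed they can also disagree in the last bit for some
inputs: IEEE 754 requires sqrt() to be correctly rounded, while libm's
pow() carries no such guarantee.  A quick spot check (platform-dependent,
purely illustrative):

from math import sqrt
for x in (2.0, 10.0, 123.456):
    print(x, x ** 0.5, sqrt(x), x ** 0.5 == sqrt(x))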

> None of these differences are really significant unless one is doing
> super-heavy-duty number crunching, of course, but I was just curious.
> Thanks for the information.

Welcome. :-)

--Ben

```