strange exponentiation performance

Jeff Davis nospam at nospam.com
Sun Nov 23 06:31:37 EST 2003


I was doing some thinking about exponentiation algorithms with a friend,
and I happened upon some interesting results. In particular, I was able to
outperform the ** operator in at least one case with a recursive algorithm.
This leads me to believe that perhaps the ** operator should tune its
algorithm based on its inputs, or some such thing. Here is my data:


>>> def h(b,e):
...     if(e==0): return 1
...     if(e==1): return b
...     t = h(b,e >> 1)
...     return t*t*h(b,e & 1)
...

Above is the recursive exponentiation algorithm. I tried some test data
and it appears to work. It just popped into my head out of nowhere, and I
applied a couple of trivial optimizations (e>>1 instead of e/2, and e&1
instead of e%2).
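As a quick illustration on a small input (just a sanity check, not one of
the timed runs below): the recursion splits the exponent in half each time,
so 3**5 becomes (3**2)**2 * 3.

>>> h(3, 5)             # h(3,2) gives 9, then 9*9*h(3,1) = 81*3
243
>>> h(3, 5) == 3 ** 5   # agrees with the built-in operator
True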

>>> def f(b,e):
...     n = 1
...     while(e>0):
...             if(e & 1): n = n * b
...             e >>= 1
...             b *= b
...     return n
...

I then wrote this version, which I thought basically unwrapped the
recursion in h(). It also seems to work.
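
To make the unwrapping concrete: f() walks the bits of e from least to most
significant, squaring b at each step and multiplying it into n whenever the
current bit is set. For example (again just a sanity check, not a timed run),
5 is binary 101, so n picks up b**1 and b**4:

>>> f(3, 5)             # n = 3 * 81 (b**1 times b**4)
243
>>> f(3, 5) == 3 ** 5
True

One thing that jumps out when tracing it by hand: b gets squared on every
pass through the loop, including the final one, where the new value is never
used.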

Then, the more trivial exponentiation algorithm:

>>> def g(b,e):
...     n = 1
...     while(e>0):
...             n *= b
...             e -= 1
...     return n
...
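
For scale, some rough multiplication counts (estimates, not measurements):
g() does e-1 multiplications, while f() and h() do only a few per bit of e,
so for e = 100000 that is roughly 100000 versus a few dozen, ignoring how
large the intermediate numbers get. A throwaway helper, not from my original
session, to spell that out:

>>> def count_mults(e):
...     # naive loop: one multiply per decrement of e
...     naive = e - 1
...     # square-and-multiply: at most one squaring plus one
...     # multiply per bit of e (an upper bound)
...     bits = 0
...     while e > 0:
...             bits += 1
...             e >>= 1
...     return naive, 2*bits
...
>>> count_mults(100000)
(99999, 34)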

For consistency, I wrapped ** in a function call:

>>> def o(b,e):
...     return b**e
...

Then I made a test function to time the computation:

>>> import time
>>> def test(func,b,e):
...     t1 = time.time()
...     x = func(b,e)
...     t2 = time.time()
...     print t2-t1
...


Now I compared:

>>> test(f,19,100000)
0.765542984009
>>> test(g,19,100000)
11.4481400251
>>> test(h,19,100000)
0.370787024498
>>> test(o,19,100000)
0.457986950874

Now, g() was blown out of the water, as expected, but the others were
close enough to warrant another test at a higher "e" value.

>>> test(f,19,500000)
8.02366995811
>>> test(h,19,500000)
3.66968798637
>>> test(o,19,500000)
5.29517292976
>>>

Now, that is the interesting part. How did ** not measure up to h()? It's
also interesting that f(), which is supposed to be a more efficient
version of h(), is lagging behind.

I would like help explaining the following:
(1) How did my less-than-perfectly-optimized recursive algorithm win
against the ** operator?
(2) How can I unwrap and optimize h()? From what I understand, recursion
is generally not supposed to be the most efficient approach. I suspect there
are some hidden inefficiencies in my while()-based version f(), but I'd like
to know the specifics.

If my algorithm h() is better, why can't ** use a quick test to change
algorithms based on inputs? Or is mine better in all cases?

BTW: Python 2.3.2 compiled with gcc 3.3.2 on Linux 2.4.19, all on Debian
and i386, with an AMD Athlon XP 1800+.

I ran all the test cases several times and the results were very consistent.

Also note that I'm not exactly an algorithms expert; I just happened upon
these results while chatting with a friend.

Regards,
	Jeff Davis
